It's an exciting time to join the development office as its new vice president. The university is gearing up for a campaign that will transform the campus with new academic programs, buildings, professorships, and aid for students. Every trustee has pledged leadership-level gifts. The $1 billion target is a stretch but doable because the last campaign brought in $800 million.
Only, the last campaign didn't actually raise that much.
The university reached its goal by counting the full value of bequest intentions of donors as young as 50, money it may not see for decades. It also counted an ongoing state grant, $14 million in hoped-for annual royalties from an alumnus's patent that has yet to take off, and a $50 million principal gift made five years prior to the campaign's start. Free software licensed to the campus? Its retail value was thrown in too.
None of those valuations comply with CASE's Reporting Standards & Management Guidelines, but many development leaders grouse that such liberal arithmetic is becoming more common. No one Currents contacted for this story wanted to charge specific institutions with promoting bogus fundraising totals, but everyone wanted to talk about what they feel is a scourge on the profession. They cited the above examples as practices that enable institutions to claim they're breaking fundraising records. Such practices pressure other institutions to keep up, detract from the mission of educational institutions, undermine the promise and joy of philanthropy, and damage the profession's reputation.
A development vice president (who will remain anonymous) has seen this questionable accounting firsthand. He discovered that a company was paying the institution $1.5 million annually to be the exclusive soft drink provider on campus—income that development had counted as a gift. Among other irregularities, the tax dollars that support the public radio station on campus found their way into the ledger's philanthropic column. He had to "uncook the books" and report to an angry board of trustees that the university needed to revise its fundraising totals going back 10 years.
"Facing the truth was the only curative for what was going on," the VP says, noting that the public fallout was minimal. But if these practices become widespread, he says, they will damage the profession.
"The reality is you're standing on sand," he says, adding that he knows a new VP whose job is on the line after reporting the questionable accounting of her predecessor. "You know what trustees don't want to hear? Lowered ambitions. We fight quite successfully against this, but there is some sense that there's snake oil involved in fundraising."
Why would an institution deviate from professional counting standards? Reasons cited ranged from campaign consultants advising institutions that "it's OK, everyone's doing it" to a lack of understanding of the guidelines due to staff turnover. Most sources blame ambitious presidents for setting unrealistic goals and pressuring development to deliver by any means. "It's the arms race. I've got to announce a bigger number than the people down the street," says John Taylor, a consultant and former associate vice chancellor for advancement services at North Carolina State University.
"Institutional envy is a big part of the issue, and philanthropy is smack dab in the middle," says Dexter Bailey, senior vice president for university advancement at Stony Brook University in New York. Leaders inappropriately measure their institutions by the fundraising prowess of universities like Harvard, Oxford, and Stanford. Plus, each campaign goal needs to be higher than the last one, regardless of economic realities. Lower fundraising targets relative to a peer institution's can signal a reputation-damaging lack of ambition, industry leaders say.
"Development is a tough business, full of false expectations," says James Langley, former vice president for advancement at Georgetown University and now president and CEO of the fundraising consultancy Langley Innovations. "A president believes something is possible when it isn't, board members demand that certain results be achieved without having analytics to justify [the goal], and therefore, in responding to pressure, fundraisers try to make their efforts look as effective as possible."
Leaders hold skewed perspectives about fundraising prospects, Langley says. He often asks board members how many million-dollar gifts are given each year to U.S. higher educational institutions. "I hear wildly inflated numbers: 1,000, 2,000, sometimes tens of thousands," he says.
The real answer? There were 513 million-dollar gifts from individuals and family foundations in 2014, according to a Marts & Lundy study.
"If you believe there are thousands of million-dollar gifts available, you're puzzled why your institution isn't securing a significant portion of them," Langley says. "It's false expectations, but it's often rooted in a lack of knowledge."
Bailey can attest to the importance of managing expectations through education. When he was the advancement vice president at Worcester Polytechnic Institute in Massachusetts, the institution was planning a $150 million campaign per a feasibility study, but trustees wanted a billion-dollar campaign like a competitor was conducting.
"For four meetings, our board debated why we weren't doing a billion-dollar campaign. After building trust, I said to them, ‘I'm happy to do a billion-dollar campaign. As soon as we raise $100 million, we can raise the goal to $200 million. We're going to work as hard as we can,'" Bailey says. "They wouldn't pass a resolution of $200 million until I proved to them that it was a stretch goal.
"The board didn't understand until I basically opened the hood of the car and showed them the details behind all prospect ratings [and other data]. I was working with engineers, so they loved the numbers, the analysis, the projections. That group was receptive to data driving decisions." Bailey says that WPI officials were ecstatic when the campaign closed at $248 million. "But," he adds, "the initial and emotional reaction was that [the goal was] too low."
When he accepted his position at Stony Brook, Bailey secured assurances that the university's next campaign goal would be based on a feasibility analysis. He is now leading a $600 million campaign, and the president and board, he says, are realistic and practical. But he acknowledges a self-imposed pressure: "There's a part of me where I'm like, ‘What can we do to get to $1 billion?' If you ask me what's my real aspiration for Stony Brook, it's that we will do a billion dollars."
Count what you want
Much of the tension over reaching big goals would be alleviated, several industry leaders say, if institutions could count what they want—as long as they're transparent about it. Many already do so. Even if they're following guidelines for benchmarking surveys, they go public with a larger number that takes a more expansive view of private support.
"There's not a single institution that I've consulted for that 100 percent follows CASE guidelines," says Taylor, a former CASE vice president who helped write the guidelines but disagrees with provisions on how to count some gifts. For instance, he says, gifts made prior to a campaign's start should be grandfathered in because donors may need to make gifts early for tax reasons. He's fine with institutions counting what they want, if they "tell the public in advance what's going to count and why. To change the rules in the middle of the campaign is inappropriate."
"CASE takes these competing priorities very seriously," says Matthew Eynon, chair of CASE's Commission on Philanthropy, the volunteer group that reviews and updates the Reporting Standards & Management Guidelines. "How do we balance the need for consistent reporting of gift totals—which are necessary for equitable comparisons—with the flexibility to recognize an institution's unique culture or strategic goals? It's a perennial issue for our profession and for CASE."
Eynon, who also serves on CASE's Board of Trustees and is the vice president for college advancement at Franklin & Marshall College, adds that "what the standards are, how they should be applied, and how we might monitor them continue to be ongoing topics of discussion for the Philanthropy Commission."
As for variances in how institutions count gifts, Chris Cox, vice-principal of philanthropy and advancement at the University of Edinburgh in Scotland, has mixed feelings. "Each institution can find good reason for counting what they count," says Cox, noting that institutions might count all private sources of income, not just gifts. Touting private investment "can send a powerful message" in a country where higher education is presumed to be publicly funded, he says, and can help institutions leverage additional resources.
But the public will inevitably compare institutions that report purely philanthropic totals to ones that don't. "Sometimes it's not even comparing apples and pears—it's comparing apples and gorillas. That's the danger point," Cox says.
Development leaders at the University of Sydney agree. "The public is comparing institution A with institution B," says Rosalind Ogilvie, advancement director of the Australian institution. "There's a case for being consistent. We have a responsibility as a sector to look beyond the walls of our universities."
People don't read the fine print, Sydney's vice-principal of advancement Tim Dolan adds, and such variant counting will hurt the industry's credibility. "If there's one sector where you need the trust of your constituents," he says, "it's philanthropy."
Plus, donors learn from their giving experiences. "The donor to someone else's campaign may be married to one of my donors," says Scott Nichols, senior vice president for development and alumni relations at Boston University. If that other institution is "defining a campaign gift as anything they can rationalize," then the donor to BU may expect the Massachusetts institution to follow suit.
Like Cox, Darrow Zeidenstein, vice president for development and alumni relations at Rice University in Texas, notes that a one-size-fits-all model is unrealistic because institutions have different strategic aims. Still, institutions cannot simply count whatever they want; such a fundraising drive loses credibility.
"You raise $1 billion on paper, but the institution hasn't moved forward in a meaningful or substantial way … no new capital projects, no scholarships. If people don't see a billion dollars of investment, then they turn cynical," he says. "When you tell donors their gift is going to make a difference and it doesn't, you've violated a special trust."
A Good Housekeeping seal of approval
There's no Deloitte for gift counting, but maybe there should be. Several development leaders say they would love a system in which institutions receive a verification certificate for accurate gift reporting. But it's tricky.
"Anytime that you use data for competitive or comparative purposes, you need an independent review," Langley says. "Who wants to take that on? Take CASE. CASE is a membership organization. Is it going to offend members by questioning [gift totals]? Who, then, would take that on?"
CASE surveys institutions each year about their campaigns, and Taylor recommends follow-up: "Ask institutions that took the survey, exactly how much did you raise? It's never the same amount as announced in public."
In the U.K., Cox has wondered about a CASE endorsement emblem to certify institutional fundraising reports as accurate, like a Good Housekeeping seal of approval. But, he says, "it's going to be impossible to enforce a rigid set of rules. You've got to give independent, autonomous institutions some flexibility."
Cox questions whether institutions need extraordinary campaign goals to ignite donors' passions. "Everyone assumes that if you're not in a campaign, you're not serious," he says. "You could spend dozens of hours deciding what you're going to count when you could be out talking with donors."
Institutions can't all be like Harvard, Oxford, and Stanford, and development VPs must give leaders that reality check. "We do ourselves a disservice by always talking about our successes," Bailey says. "In my board meetings, I talk about our failures, challenges, and where we're not doing well. That's not something I'd hide from my board."
Nichols would like to see VPs held to high ethical standards. "There are real consequences to this," he says; people who use questionable accounting collectively hurt the profession.
To Langley, the consequences of focusing on some magical number cause institutions to lose sight of the real mission. Instead of concentrating on a decline in alumni participation or why recent graduates are less impressed with their alma maters than older graduates, they're touting grand fundraising figures to appear competitive. "The most insidious part is that it's causing us to ignore root issues that are undermining the credibility of higher education, including fundraising," Langley says. By focusing on inflated short-term results, he says, higher education may be endangering its future.
Layers of accountability
Institutions can engage in creative counting because no outside parties are watching. The data are self-reported, and the consequences are limited to public embarrassment—but only if institutions are caught. The California State University system offers lessons on how institutions can hold themselves accountable.
Its rigorous certification of fundraising totals starts on each campus. Both the chief development officer and chief financial officer, who has been trained to validate gift documentation, verify that gifts were reported according to Council for Aid to Education and CASE standards. "Having the CFO adds a layer of accountability, because they don't have a vested interest in making those numbers larger," says Lori Redfearn, assistant vice chancellor of system-wide advancement.
The campuses then report their totals to Redfearn's office, which compares the data to be submitted to CAE's Voluntary Support of Education survey against prior years' totals. A jump of 10 percent in either direction prompts a query asking the campus to explain. In 2015, when a CSU campus reported doubling alumni participation, the advancement services office basically said: That's nice, now show us how you got there. The campus demonstrated how a challenge gift, among other things, inspired more alumni to give.
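CSU's 10 percent variance rule is easy to picture as a simple threshold check. The sketch below is purely illustrative — the function name and figures are invented, not CSU's actual system:

```python
def flag_variances(prior: dict[str, float], current: dict[str, float],
                   threshold: float = 0.10) -> list[str]:
    """Return campuses whose reported total moved more than the
    threshold (10 percent) in either direction, year over year.
    Illustrative only; not CSU's actual review process."""
    flagged = []
    for campus, total in current.items():
        base = prior.get(campus)
        if base:  # skip campuses with no prior-year total to compare against
            change = (total - base) / base
            if abs(change) > threshold:
                flagged.append(campus)
    return flagged

# A 25 percent jump would get queried; a 4 percent rise would not.
prior = {"Campus A": 10_000_000, "Campus B": 5_000_000}
current = {"Campus A": 12_500_000, "Campus B": 5_200_000}
print(flag_variances(prior, current))  # ['Campus A']
```

A flagged campus would then be asked to document what drove the change, as in the alumni participation example above.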
Campaign results are similarly analyzed. A key area for scrutiny: pledge write-offs, although Redfearn acknowledges that donors may not follow through on a multiyear pledge for a variety of reasons, including a change of heart. "If there are habitual pledge write-offs … we tell the campus president to examine whether those pledges are real," she says. If her office isn't satisfied with a development office's explanation for any other query, her staff also turns the matter over to the campus president.
Why the president? "Presidents are the caretakers of the reputation of the university," Redfearn says. "They need to work with their donors and gain their trust so they'll continue to invest in what's important to the university."
The systemwide review allows CSU to identify trends and make adjustments. After seeing a spike in bequest intentions, CSU limited the reporting of those pledged gifts. "We felt like the numbers were accurate," Redfearn says, "but reporting them was diminishing our credibility with internal audiences who were saying, ‘Where's this money to spend today?'"
The system also stopped counting software gifts, which implied that the university had more gift cash on hand than it really did. "We were setting expectations of fundraising in future years that were unattainable. We've made a decision that it isn't the best reflection of our philanthropic activities," she says.
The double review not only ensures accuracy but also promotes reasonable expectations and eliminates the pressure that might tempt someone to cheat. "Goals and benchmarks are intended to help with process improvement. They aren't intended to be ‘you'll lose your job if you don't raise this amount of money,'" Redfearn says. "It's about having a culture where you have accountability, transparency, and reasonable expectations."—TC
Rankled by rankings
Internal pressure to raise money isn't the only reason institutions inflate their fundraising numbers. U.S. News & World Report factors alumni giving into its Best Colleges ranking, creating pressure to boost alumni participation rates.
"There's a strong incentive to rig the numbers so that the institution looks good. U.S. News is important for status seeking. It's a source of decision-making for students," says Darrow Zeidenstein, vice president for development and alumni relations at Rice University in Texas.
A major concern: institutions "losing" contact with alumni. The measure is how many contactable alumni gave, so alumni a college can't find don't count against it. Another tactic: a college credits an alumnus with a gift that actually came from his foundation.
Development leaders say such practices are unfair. "The institutions counting correctly look bad because it seems [competing schools are] running circles around them," says Robert Burdenski, a Chicago-based annual giving consultant.
Zeidenstein sums up the ethical case for accurate counting: "If we agreed to submit data, we should follow their rules."
As with campaigns, no one audits annual giving data. The participation data U.S. News requests is identical to what the Council for Aid to Education asks for in its Voluntary Support of Education survey, which tracks private giving to schools, colleges, and universities. CAE performs 700 internal checks of the data—searching for "things that cannot mathematically be so"—and scopes out institutions with artificially low alumni bases, says Ann Kaplan, director of the VSE Survey and Data Miner.
John Taylor, principal of John H. Taylor Consulting, compared undergraduate alumni participation rates of 15 institutions based on what they reported to CAE and to U.S. News.
"With only one exception, everyone reported to U.S. News a higher number than what they reported to CAE," he says.
Several development leaders say a third-party certification system would hold institutions accountable, but it would also legitimize alumni participation as a measure of alumni satisfaction.
Another option: Stop submitting numbers.
Some institutions boycott the rankings for philosophical reasons, Burdenski says, but there's a bottom-line motive for not obsessing over participation. "We're so hyper about participation, but the reality is we've gotten smarter at seeing who's wealthy" and likely to give, Burdenski says. "At some point, it's not the best use of institutional dollars to go after every $25 donor. We're spending $100 to get $1 from a donor."—TC
About the author(s)
Toni Coleman is interim editor in chief of Currents magazine at CASE.