How to Avoid Data Pitfalls
Data. Everyone is collecting it—from small, independent schools to huge, public universities. As an advancement professional, you can expect to spend at least part of your day dealing with numbers or reporting in some capacity. Given that prevalence, the challenge, according to Search Engine Watch, “isn’t having the data or accessing it, but making sense of it all.”
As advancement professionals work toward that understanding, it’s important that everyone be on the same page. “Data management is a team sport,” says Carrie White, vice president of advancement services for the University of Cincinnati Foundation. “We need everyone to take responsibility for the data, to build up clear communication, and to break down the silos and share information across teams,” she says.
Even with that team approach, certain red flags and misunderstandings arise over and over again. Here, we tackle five of them.
Pitfall 1: Data Paralysis
People who work with data can sometimes become paralyzed by the sheer amount of information available. Take prospect research, for example. A development officer starts with a wealth screening to identify individuals to target. Then, starting at the top of the list, the officer begins contacting the prospects to set up meetings to talk about their interest in the institution. Shaun Keister, vice chancellor of development and alumni relations at the University of California, Davis, says that’s not always what happens. He has seen development officers “become paralyzed by the data, unable to execute next steps,” he says. “They obsess over the numbers and finding the ‘best prospect out there.’” Such fundraisers find themselves digging deeper into the data and numbers, running report after report, instead of taking the data as a starting point and reaching out for human conversations, Keister explains. “The data is a guide, a place to start,” he says.
Jeff Liebermann echoes this idea. “Data is the starting point, not the final answer,” he says. As the assistant vice president of main campus development at the University of Iowa Center for Advancement, he too has witnessed development officers “get stuck in the analysis of it all.” They become so intent on crafting the perfect list that they spend all their time running internet searches on names and looking on Zillow and LinkedIn, he explains. “But often we don’t know if someone is going to be charitable to us until we go speak to them,” he says. “So just start at the top of that list and pick up the phone.”
According to Keister, since “advancement is both an art and a science,” the best remedy for data paralysis is to apply the art. Rather than focusing on perfecting the list (the science), Keister recommends getting in front of the individuals who have been identified and working to understand how they’re connected to your institution, what they’re passionate about, and how that passion can be matched to your institution’s priorities.
Pitfall 2: Striving for 100% Accuracy
It can be hard not to expect, or work toward, 100% data accuracy, but that expectation shouldn’t keep the work from starting in the first place. Jason Coolman, associate vice president of development and alumni relations at the University of Waterloo (vice president of advancement and external relations at Wilfrid Laurier University in Ontario starting in August 2019), believes that improving data accuracy by even 1 or 2 percentage points should be viewed as a win. “I once had a records team tell me that they would not input any business data unless they knew it was complete,” he says. “But my view would be that if the record is empty, that’s 100% inaccurate, and if we can add in only one piece of information, say the company where that individual works, now that record is still not complete, but it’s 25% accurate and you’ve improved on the 100% inaccuracy of before.” Coolman explains that his work in alumni relations is sometimes based on certain sectors of the workforce, so he doesn’t necessarily need to know a graduate’s title, for instance. As long as he knows what field of work that person is in, he can feel confident about including the person in his project or event. In this case, a 25% accurate record would suffice.
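Coolman’s arithmetic treats a record as a fixed set of fields of interest and counts how many are filled in. A minimal sketch in Python, assuming a hypothetical four-field record (the field names are illustrative, not from the article):

```python
# "Accuracy" here is really field completeness: the share of tracked
# fields that hold a value. The four field names are illustrative.
TRACKED_FIELDS = ["employer", "title", "industry", "work_address"]

def completeness(record: dict) -> float:
    """Return the fraction of tracked fields that are filled in."""
    filled = sum(1 for field in TRACKED_FIELDS if record.get(field))
    return filled / len(TRACKED_FIELDS)

empty_record = {}
partial_record = {"employer": "Acme Corp"}  # one of four fields known

print(completeness(empty_record))    # 0.0  -> "100% inaccurate"
print(completeness(partial_record))  # 0.25 -> 25% accurate, a clear win
```

By this measure, adding a single employer name moves a record from 0% to 25% complete, which is exactly the improvement Coolman describes.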
It’s just not realistic to expect 100% accuracy from your systems and reports—especially when you consider that “the normal human error rate is 5%,” says Mark Koenig, assistant vice president for advancement services, analytics, and digital strategy at the Oregon State University Foundation. “And since humans input a lot of the data into our systems, we can’t ever be at 100%,” he explains.
While you should always attempt to have the most complete and correct data possible, one way to keep work from halting when data is less than 100% accurate is to cultivate an open, trusting culture in the workplace. “If you are open with your colleagues about what data you have and how it can be used or received, that sets a baseline of trust,” says White. On the flip side, as in Coolman’s example, if you talk with your data or records team about the types of information needed and explain that a completely filled-in record isn’t always necessary, you can start to move forward with tracking and using the available information.
Pitfall 3: Data Without Context
One of the most common data pitfalls across all fields of work is lack of context. “There’s a lot of this magical thinking out there that if we just look at quantitative data, it will tell us what to do,” says Tricia Wang, co-founder of Sudden Compass, a firm that helps companies leverage data to provide a better experience for their customers. Viewing data without considering its context can lead to more harm than good. “Take the military, for example. They started scheduling drone attacks based on algorithms around SIM card usage and quickly discovered that more and more civilians were being killed,” says Wang. “If humans had been involved in the scheduling, they could have pointed out that the target areas were heavily populated by civilians, or any anthropologist could have told them that most people [in that part of the world] share SIM cards.”
Part of the problem, according to Wang, is that researchers are seen as order takers who don’t have a point of view. “The belief is that we’re just sitting there waiting for someone to ask us a question, [and] then we crunch the data and hand off a report,” she says. “The difficulty with this [perception] is that the researcher, the person most familiar with the data, never moves into a strategic position.” Too often the decision makers are three steps removed from the data and the people who understand the data. Much of the work at Sudden Compass focuses on moving clients away from what Wang and her colleagues call an “organizational-centric” model of data use.
Coolman describes a similar situation at the University of Waterloo. “In most systems, a development officer puts in a request for a report. The data team produces a batch of data, most likely in an Excel document, and then hands it back to the development officer, who is then expected to interpret it,” he says.
The most obvious way to avoid the context pitfall is to have researchers in strategic positions or at least included in the decision-making process to help everyone involved better understand the data. “Having a researcher in the room from the get-go helps your team to be questions-led,” says Wang. “You can always ask ‘What’s happening here?’ but eventually you need to move away from the ‘what’ and get into the ‘why,’ which is where the data or research expert comes in.”
White and her team are implementing a process that applies context to all of their data reports. A major part of this work is establishing a standard language so that everyone can interpret what reports mean by terms such as participation or productivity. On each of its reports, White’s team includes a link to a knowledge base that explains the logic behind the report and includes information such as its owner, when it was last updated, and its initial purpose. The goals are for users to have something to refer to that will clarify any questions they may have, and to make the report as consumable and useful as possible. “We work hard to be the gatekeepers of standardization,” says White. “If we’re changing reporting language or there’s a request for something new, we partner with our colleagues to make sure that we’re putting out our best information.”
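In practice, that reference material amounts to a small bundle of metadata attached to each report. A hypothetical sketch of what such a bundle might hold, with illustrative field names and a placeholder URL (the article specifies only the owner, the last-updated date, and the initial purpose):

```python
# Hypothetical report metadata, per White's description: a link to a
# knowledge-base entry explaining the report's logic, plus its owner,
# last-updated date, and initial purpose. All values are illustrative.
report_metadata = {
    "report_name": "Alumni Participation by College",
    "owner": "Advancement Services",
    "last_updated": "2019-06-01",
    "initial_purpose": "Track giving participation for annual reporting",
    "knowledge_base_url": "https://example.edu/kb/participation-report",
}
```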
It’s about adding that human, thinking element, says Coolman. Alumni associations, for instance, have a tendency to look at the numbers, see a large alumni contingent in one geographic region, and then put on broad programming events meant for that entire population, he points out. But focusing only on geography may result in an event so watered down that no one is sure who the intended audience is. “You can’t just take all alumni who live in New York City and invite them to the same big cocktail party,” he says. Instead, focus on one group—young alumni, for example—and give them programming that is specific to their needs and interests. In other words, apply a little context to the data. You’ll be much more likely to see higher engagement.
Pitfall 4: Using Different Metrics for Comparison
Benchmarking your institution against others can be helpful in terms of goal setting, but if you’re not using the same metrics that other institutions use, the comparison becomes not only confusing but inaccurate. This pitfall rears its ugly head most often in discussions of alumni participation. Participation is generally calculated as the number of alumni who donated to a campaign, attended an event, or volunteered with the institution, divided by the total number of alumni. However, different schools use different definitions of these factors, particularly the denominator, explains the University of Iowa’s Liebermann. “If you’ve got two schools of similar size, but one school uses all addressable alumni as the denominator and one school uses all alumni minus those who don’t want to be solicited or don’t have a usable address, you’re going to come out with two very different numbers when you do the calculations,” he says. And that ultimately means that the comparison won’t be accurate. “That’s one of the reasons that, especially at larger schools, you don’t see alumni participation rates being used as the be-all, end-all measuring stick,” Liebermann says.
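A worked example makes the denominator problem concrete. The counts below are hypothetical; the two denominator definitions follow Liebermann’s description:

```python
# Hypothetical counts for two schools of similar size; only the
# denominator definition differs.
donors = 12_000
total_alumni = 100_000
no_usable_address = 10_000
do_not_solicit = 5_000

# School A: all addressable alumni in the denominator.
rate_a = donors / (total_alumni - no_usable_address)

# School B: also excludes alumni who asked not to be solicited.
rate_b = donors / (total_alumni - no_usable_address - do_not_solicit)

print(f"School A: {rate_a:.1%}")  # School A: 13.3%
print(f"School B: {rate_b:.1%}")  # School B: 14.1%
```

Identical fundraising performance, nearly a full percentage point apart, purely because of how each school defines its alumni base.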
At the Oregon State University Foundation, Koenig has found that the best way to avoid this pitfall is to standardize the definitions around data. Beyond the definitions of commonly used terms, he says, decision makers need “a full understanding of the meaning of data points and reports.” Language needs to be standardized, Koenig points out, not just institutionwide but professionwide, to allow for peer-to-peer comparisons. As a member of CASE’s volunteer Alumni Engagement Metrics Task Force, Koenig helped to produce the Alumni Engagement Metrics white paper, which defines four areas of alumni involvement: volunteer engagement, experiential engagement, communication engagement, and philanthropic engagement.
Pitfall 5: Making Assumptions
While data should be used to set goals and prioritize tasks, making assumptions based on one data point alone can be dangerous. Liebermann tells his staff to be careful not to assume that a prospect will be charitable simply because the person has the means to be. And the opposite assumption can be just as detrimental to his team’s success: assuming that a person who doesn’t appear, on paper, to be affluent won’t be charitable at all.
At UC Davis, Keister saw this exact scenario play out (luckily, it was in his institution’s favor). One day, “out of the blue,” Keister recounts, the institution received a $5,000 gift from someone the development department had passed over in an initial wealth screening based on the individual’s small, unassuming home and single previous $25 gift to the institution. It turned out, however, that the donor owned a significant amount of real estate and, after working with a major gifts officer, recently made what Keister calls “a transformational gift to our university.” Keister sums up the learning from this unlikely success story: “We never would have gotten to that point if we had just stuck to the initial assumption.”
And it’s not just prospect research that assumptions can hurt. For alumni participation or engagement, for instance, it would be incorrect to assume that an event with 200 attendees was more successful than one with 50, says Liebermann. “Who were those 200 people? Who were the 50?” he asks, pointing out that perhaps the alumni office received 50 RSVP cards declining the smaller event but asking to remain on the list for future invitations. Because events are time- and place-dependent, it can be tricky to gauge their effectiveness based on attendance alone. If you get responses from alumni, even declining to attend, that should count as engagement, Liebermann explains.
This pitfall can also occur when staff believe they know their donors or their alumni better than the data does. Coolman gives the example of asking a former dean of the school of engineering what alumni feel most connected to: the university writ large, the school of engineering, their department within the school of engineering, or their colleagues and classmates. The dean believed alumni alignment would be with the school of engineering, but that was last on the list when the alumni themselves were asked. Their first answer was their colleagues and classmates. This misunderstanding of loyalties can be detrimental to teams that are creating engagement strategies, crafting donor messages, or building campaign appeals.
The way to avoid making assumptions is not only to let the data be a starting point but also to use more than one data point in decision making. For example, to predict the likelihood that someone will make a gift, one needs to look at two or three data points together, says Coolman. “You do a wealth screening first, but then a second variable to consider is how engaged that person is with the institution,” he explains. Liebermann echoes this thought, adding that “as advancement officers, we know that the process is always evolving, there’s always something new we can learn. It’s about incorporating as much data and information as possible into your preparation and then reaching out to make that human contact.”
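As a sketch of what combining data points might look like, the hypothetical scoring below blends a wealth-screening capacity score with an engagement score; the weights, scales, and names are illustrative, not a model from the article:

```python
from dataclasses import dataclass

@dataclass
class Prospect:
    name: str
    capacity_score: float    # from a wealth screening, scaled 0-1
    engagement_score: float  # events, volunteering, giving history, 0-1

def priority(p: Prospect) -> float:
    """Blend capacity and engagement; the 50/50 weights are illustrative."""
    return 0.5 * p.capacity_score + 0.5 * p.engagement_score

prospects = [
    Prospect("A", capacity_score=0.9, engagement_score=0.1),
    Prospect("B", capacity_score=0.5, engagement_score=0.8),
]

# B (engaged, moderate capacity) outranks A (high capacity, little
# engagement): the blended list is a starting point for outreach,
# not the final answer.
for p in sorted(prospects, key=priority, reverse=True):
    print(p.name, round(priority(p), 2))
```

Relying on the capacity score alone would put A first; adding the second variable reorders the list, which is the point both Coolman and Liebermann make about looking past any single data point.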
About the author(s)
Caitlin Lukacs is the CASE manager of editorial content.