Today, May 15, is GiveBig, Seattle’s third annual “day of giving” event. Created by the Seattle Foundation in 2011, the idea is to focus attention on charitable giving, raise the public profile of the Seattle Foundation and of course raise some dough. There are similar events in many other cities now, and even a national “GivingTuesday” event right after Thanksgiving.
But how do we know whether GiveBig and similar day of giving type events are really working?
Seattle Foundation proudly trumpets the total number of dollars raised, which showed impressive growth from 2011 to 2012, and I’m sure will notch another nice gain this year. And of course, there’s a nice slew of media hits to show for it, and I’m sure the Seattle Foundation website is seeing a delicious spike in traffic. All of these are good things. Nonprofits are usually below the radar unless there’s a scandal, so it’s nice to break through with something positive.
And yet… shouldn’t the first and foremost metric of success for GiveBig be whether it can prove that participating in GiveBig causes nonprofits to raise more money than they would have otherwise? Sure, it’s great that GiveBig raised $X million in a single day. But how much of that money was just “time-shifted” from other days in the year, and how much is genuinely “new?” This is a tricky question, but I think it really matters. GiveBig is a big investment for the Seattle Foundation, and participating groups spend a lot of time and energy on it. And so I think it’s important that big investments in fundraising be able to show rigorously measured bottom-line results.
But how? I’ve been doing some thinking about this. It’s a tough problem, and I don’t presume to have solved it. But I wanted to put my ideas out there to start the conversation, and draw in smarter people than me.
Here’s the best approach I can come up with: Create two lists of Seattle nonprofits: ones that actively participated in GiveBig and ones that didn’t. Then, for each group, look at seven years of total individual giving numbers, say 2006-2012. First look at the data from 2006-2010, before GiveBig started. Find pairs of groups that had similar individual giving totals over those years and showed similar rates of growth. Then track those pairs through 2011-2012 and see whether their individual giving totals diverge. Do groups that were performing similarly before GiveBig started suddenly show differences in fundraising performance?
To be sure, this is not a perfect approach. There could be many reasons why groups’ fundraising performance would diverge: ED or fundraising staff transitions, the death of a major donor, and so on. But if we have enough pairs of groups, these “random” shocks should cancel each other out. One worry is that we won’t be able to match enough pairs of similar groups–I’m guessing that a lot of groups now participate, and relatively few don’t.
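To make the matching idea concrete, here’s a minimal sketch of how the pairing and comparison could work. All organization names and dollar figures below are hypothetical, and the tolerances for what counts as a “similar” group are arbitrary assumptions; real data would come from something like IRS Form 990 filings or a survey of participating nonprofits.

```python
from statistics import mean

# Hypothetical annual individual-giving totals, 2006-2012, in $thousands.
# Org names, figures, and participation flags are all made up for illustration.
giving = {
    "org_a": {"participated": True,  "totals": [100, 110, 118, 125, 133, 160, 175]},
    "org_b": {"participated": False, "totals": [98, 112, 117, 126, 131, 140, 148]},
    "org_c": {"participated": True,  "totals": [200, 205, 215, 222, 230, 260, 280]},
    "org_d": {"participated": False, "totals": [198, 207, 213, 224, 229, 238, 249]},
}

PRE = slice(0, 5)  # 2006-2010: before GiveBig existed

def growth(series):
    """Average year-over-year growth rate of a giving series."""
    return mean(b / a - 1 for a, b in zip(series, series[1:]))

def match_pairs(orgs, size_tol=0.10, growth_tol=0.02):
    """Pair each GiveBig participant with a non-participant of similar
    pre-period size and growth rate (tolerances are arbitrary)."""
    participants = [k for k, v in orgs.items() if v["participated"]]
    controls = [k for k, v in orgs.items() if not v["participated"]]
    pairs = []
    for p in participants:
        p_pre = orgs[p]["totals"][PRE]
        for c in controls:
            c_pre = orgs[c]["totals"][PRE]
            size_gap = abs(mean(p_pre) - mean(c_pre)) / mean(p_pre)
            growth_gap = abs(growth(p_pre) - growth(c_pre))
            if size_gap <= size_tol and growth_gap <= growth_tol:
                pairs.append((p, c))
                controls.remove(c)  # each control matched at most once
                break
    return pairs

def post_divergence(orgs, pairs):
    """Average gap in post-period (2010->2012) growth across matched pairs.
    A positive value means participants grew faster than their matches."""
    gaps = []
    for p, c in pairs:
        p_post = growth(orgs[p]["totals"][4:])
        c_post = growth(orgs[c]["totals"][4:])
        gaps.append(p_post - c_post)
    return mean(gaps) if gaps else None

pairs = match_pairs(giving)
print(pairs)                          # matched (participant, control) pairs
print(post_divergence(giving, pairs))  # average post-GiveBig growth gap
```

With the toy numbers above, both participants grow faster after 2010 than their matched controls, which is the kind of divergence the real study would be looking for. A genuine analysis would of course want many more pairs and a significance test on the gap, not just its sign.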
Smart friends: how would you approach evaluating whether GiveBig has an impact on total fundraising performance? Can you think of an experimental design that is simpler or better? I’d love to hear your thoughts.
Update: Upon some further reflection, I also think that tracking the number of new donors coming in through GiveBig would be a meaningful indicator of success–and even more valuable if we could get some numbers on the multi-year value of these new donors compared with donors acquired through other sources.
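The new-donor comparison could be sketched the same way. Here’s a toy example with made-up donor records, comparing average multi-year value and year-two retention for donors acquired through GiveBig versus a hypothetical “direct mail” channel; both the channel names and the gift amounts are invented for illustration.

```python
from statistics import mean

# Hypothetical donor records: acquisition channel, plus gifts (in dollars)
# in each of the donor's first three years. All figures are illustrative.
donors = [
    {"channel": "givebig",     "gifts": [50, 0, 0]},    # one-time gift, lapsed
    {"channel": "givebig",     "gifts": [100, 50, 0]},
    {"channel": "givebig",     "gifts": [25, 25, 25]},
    {"channel": "direct_mail", "gifts": [50, 60, 65]},
    {"channel": "direct_mail", "gifts": [100, 0, 0]},
    {"channel": "direct_mail", "gifts": [75, 80, 90]},
]

def three_year_value(channel):
    """Average total giving over a donor's first three years, by channel."""
    totals = [sum(d["gifts"]) for d in donors if d["channel"] == channel]
    return round(mean(totals), 2)

def retention(channel, year=1):
    """Share of a channel's donors who gave again in a later year."""
    cohort = [d for d in donors if d["channel"] == channel]
    return round(sum(1 for d in cohort if d["gifts"][year] > 0) / len(cohort), 2)

print(three_year_value("givebig"))      # avg 3-year value of GiveBig donors
print(three_year_value("direct_mail"))  # same, for the comparison channel
print(retention("givebig"))             # share of GiveBig donors giving in year 2
```

If GiveBig donors turned out to have systematically lower multi-year value or retention than donors acquired elsewhere, the headline one-day total would overstate the event’s real impact; if they held their own, that would be a strong point in its favor.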