A few weeks ago, I had the pleasure of being a guest blogger on a new site created by Duke University’s Center for Strategic Philanthropy and Civil Society, called the Intrepid Philanthropist. Inspired by the blog’s name, I decided to challenge some of those inside and outside of philanthropy, who, in my view, paint the sector with too broad a brush. You can read all five of my posts by downloading this PDF version.
While agreeing that the nonprofit sector must dramatically step up its effectiveness, I suggested in my Duke posts that it is important not to lose our appreciation of the distinct role of the sector by simply assuming that “business thinking” has all the answers. I also suggested that we need to recognize the successes (and failures) of the past – and present – from which we can learn. The fact is, there are excellent examples of effective philanthropic strategy that are older than any of us.
I got an overwhelmingly positive reaction to my blogs and received many emails thanking me for what I wrote – and especially for the specific examples I cited of foundation and nonprofit impact. On the other hand, those whose articles or books I critiqued reacted in a strongly negative way, especially Dan Pallotta, the author of Uncharitable – who took umbrage (to say the least) at my critique, leading to a lively exchange both on the Duke blog and on his blog. Following that exchange, I am even more convinced that his worldview rests on a deification of markets that isn’t grounded in reality.
But I do not want my debate with Pallotta to obscure the fact that, on one very important point, we agree. That point is this: there remains far too much emphasis on administrative cost ratios as the gauge by which we judge nonprofit performance.
Too many donors seem to focus on the percentage of a nonprofit’s budget going toward “program” without sufficient attention to the impact created. At its worst, this thinking leads some donors to act as if they expect nonprofits to operate without the infrastructure and talent that are, in fact, necessary for sustained effectiveness.
Foundations can be a big part of this problem in the way they fund and the messages they send nonprofits. But they can also be victims of the same kind of overly simplistic thinking in the way their boards assess their own performance.
The challenge, of course, is that performance assessment is harder in the nonprofit sector because there is no universal metric – no analog to profits – and there never will be. (You save a rainforest in Brazil, I increase graduation rates 10 percent in Boston: we could spend the rest of our lives in dueling calculations over who achieved more impact.)
Performance assessment is especially challenging for foundations, because they are one step removed from the impact they seek. Without easily accessible performance data, foundation boards gravitate to what is available, quantifiable, and comparative. The easiest such data to obtain are investment returns and administrative cost ratios.
It’s not that boards don’t want better data that is more connected to foundation strategies for impact: our research demonstrates that, to the contrary, they do. It’s that this data often isn’t made available, because it’s hard to get. Our newest report, Essentials of Foundation Strategy, details just how significant the performance assessment challenges are.
So what do boards do? They focus, naturally, on what they can get their hands on.
Let me offer just one simple example. The board of one large foundation whose grantees we had surveyed was fixated on the foundation’s administrative spending, which was high relative to peer funders. Board members were targeting areas to cut, and had identified the foundation’s research efforts as a good place to start.
But when we presented a Grantee Perception Report (GPR) based on our survey results, they learned that the foundation rated among the highest of all funders whose grantees we have surveyed on dimensions such as “advancing knowledge” in its field of funding. Grantees also wrote eloquently, in response to open-ended questions, about the impact of the foundation’s research on improving understanding of what was working and what wasn’t – saying that it was more valuable than the grants they received (really, they did).
The board, recognizing the impact of these efforts, decided to preserve the spending on research.
People often ask me, “What kind of changes do foundations make based on CEP’s assessment tools?” It’s a good question, and I have lots to say in response. But sometimes just as important as what changes in response to data is what doesn’t – what is preserved as a result of an understanding of its effect.
Focusing on administrative spending ratios in the absence of other data can lead to the elimination of work that is having a substantially positive impact that more than justifies its cost. We can and must do better – by pushing for indicators of foundation effectiveness that connect to goals and strategies and allow for a more holistic assessment of performance.