CEP’s Essentials of Foundation Strategy concludes, “Assessment of results against strategies remains a significant challenge for foundations: staff struggle to determine the right data to collect and how to collect it.”
This is an important finding, and the explanations behind it are well worth exploring. The report suggests several possible reasons – technical challenges, inadequate resources and support, and a lack of grantee capacity and skill. I’d like to explore another possibility: an undue attraction to quantitative analysis.
The recipe for foundations is simple and well known: assess the environment, choose a goal, align resources, implement programs, measure performance, adjust. Repeat as needed until the goal is achieved.
To be sure, every foundation’s recipe will have a different mix of ingredients, but maybe we need to think about the limits of quantitative measurement. As much as I value quantitative data for assessing results, the proliferation of such work and its unintended consequences suggest we consider (to extend the recipe analogy) whether the basic philanthropic ingredients contain a dash too much fascination with numbers.
It might seem impolite to raise this question on CEP’s blog, given that the heading on this new Web site begins with “better data . . .”! But I know CEP is committed to being a learning organization, which includes fostering a supportive environment for different points of view. So – what better place to ask about the seemingly unending efforts to quantify performance at every level in order to assess impact than on the Web site of an organization devoted to empirical analysis?
What are the possible downsides of quantitative performance measurement? One is that this work has its own costs, so spending on this ingredient has to be as effective as possible and weighed against spending on other ingredients in the philanthropic recipe. Another is the mistaken belief that quantitative measurement is the only type of measurement.
Even when qualitative measurement is considered, it is often viewed as inferior to quantitative analysis. Yet the heart of measurement is comparative assessment, and this doesn’t require quantitative analyses. Think of the meaning of “taking the measure of a person” or “to speak in measured terms.” These phrases denote the sense of measurement as judgment or comparison.
A related risk is a lack of fit between the measure of performance and what is being measured. What is the nature of philanthropic practice that we want to measure? Are foundations measuring their own performance (i.e., strategy, selection of goals) or their grantees’ performance (project results, program impact)? One conception of philanthropy is that it is a craft. How amenable to quantitative measurement is the craft of philanthropic practice? And at what levels?
Thinking about the limits of quantitative performance measurement suggests several things that might help foundations improve how they assess their impact. One is to take into account the costs and benefits to everyone involved in undertaking this work, and to consider what is most important to learn in order to accomplish strategic goals.
For example, it is probably more valuable to devote performance assessment resources to broader levels of performance (e.g., is an overall strategy working?) than to discrete grantee projects. Focusing on individual projects inhibits an understanding of how projects fit together over time and relate to the environment in which they operate. In turn, this reinforces a focus on a project’s internal risk – implementation – and draws attention away from strategic and design risks.
Note that a failure of implementation puts the spotlight on grantees; a failure of strategy and, to some extent, design puts the spotlight on foundations. An excellent example of a higher-level, strategic assessment is the qualitative assessment that Patti Patrizi and colleagues did of RWJF’s end-of-life grant-making from 1996 to 2005.
In preparing for performance assessment, thinking carefully about design, the limitations of the data to be produced, and how those data will be used before committing resources may also minimize overreliance on quantitative assessment.
In undertaking performance assessment, it is useful to think about the kinds of comparisons (quantitative or qualitative) that fit the goals. Case studies, like the one CEP published on the Stuart Foundation, are excellent ways of using a comparative framework – in this case, comparison to a model of bringing about social change – to assess and learn about performance.
Assessing performance rigorously is a critical ingredient in strategic philanthropy. Questioning the limits of quantitative performance assessment should not be used as a reason not to do performance assessment – just the opposite. The field needs more resources devoted to this ingredient, but they need to be used as wisely as possible.
The challenge is getting the right mixture of ingredients, including types of measurement, for each foundation. And as long as we keep in mind that not all data are numeric, CEP’s banner that begins with “better data . . .” is the start of a good recipe for philanthropic chefs.
Bob Hughes is an independent consultant on strategy and organizational learning in health and philanthropy.
Disclaimers and Disclosures: The views expressed in the CEP blog by guest bloggers are entirely their own and do not necessarily reflect the opinions of the Center for Effective Philanthropy.