This post was originally published on the Idealware Blog in December of 2009.
Last week, I reported that nonprofit assessors like Charity Navigator and Guidestar will be moving to a model of judging effectiveness (as opposed to thriftiness). The title of my post drew some criticism. People far more knowledgeable than I am on these topics questioned my description of this as a “sea change”, and I certainly get their point. Sure, the intention to do a fair job of judging nonprofits is sincere, but the task is daunting. As with many such efforts, we might well wind up with something that isn’t a sea change at all but, rather, a modified version of what we have today: one that includes some info about mission effectiveness but still boils down to a financial assessment.
Why would this happen? Simple. Because metrics are numbers: ratios, averages, totals. It’s easy to make metrics from financial data. It’s very difficult to make them out of less quantifiable things, such as how successfully an organization changed the world, protected the planet, or stopped the spread of a deadly disease.
I used to work for an org whose mission was to end poverty in the San Francisco Bay Area. And, sure enough, at the time, poverty was becoming far less prevalent in San Francisco. So could we be judged as successful? Could we grab the 2005 versus 2000 poverty statistics and claim the advances as our outcomes? Of course not. The reduction in poverty had far more to do with gentrification during the dotcom and real estate booms than with our efforts. Poverty wasn’t reduced at all; it was just displaced. And our mission wasn’t to move all of the urban poor to the suburbs; it was to bring them out of poverty.
So the announcement that our ratings will now factor in mission effectiveness and outcomes could herald something worse than we have today. The dangerous scenario goes like this:
- Charity Navigator, Guidestar, et al. determine what additional info they need to request from nonprofits in order to measure outcomes.
- They make that a requirement; nonprofits now have to jump through those hoops.
- The data they collect is far too generalized and subjective to mean much; they draw conclusions anyway, based more on how easy it is to call something a metric than on how accurate or valuable that metric is.
- NPOs now have more reporting requirements and no better representation.
So, my amended title: “We Need A Sea Change In The Way That Our Organizations Are Assessed”.
I’m harping on this topic because I consider it a call to action: a chance to make sure that this self-assessment by the assessors is an opportunity for us, not a threat. We have to get the right people at the table to develop standardized outcome measurements that the assessing organizations can use. They can’t develop these by themselves. And we need to use our influence in the nonprofit software development community to make sure that NPOs have software that can generate these reports.