This post was originally published on the Idealware Blog in December of 2009.
Last week, GuideStar, Charity Navigator, and three other nonprofit assessment and reporting organizations made a huge announcement: the metrics that they track are about to change. Instead of scoring organizations on an “overhead bad!” scale, they will scrap the traditional metrics and replace them with ones that measure an organization’s effectiveness.
The new metrics will assess:
- Financial health and sustainability;
- Accountability, governance and transparency; and
- Effectiveness in achieving mission outcomes.
This is very good news. That overhead metric has hamstrung serious efforts to do bold things and have higher impact. An assessment based solely on annualized budgetary efficiency precludes many options to make long-term investments in major strategies. For most nonprofits, taking a year to staff up and prepare for a major initiative would generate a poor Charity Navigator score, one that is prominently displayed to potential donors.
Assuming that these new metrics will be more tolerant of varying operational approaches and philosophies, justified by the outcomes, this will give organizations a chance to be recognized for their work, as opposed to their cost-cutting talents. But it puts a burden on those same organizations to effectively represent that work. I’ve blogged before (and will blog again) on our need to improve our outcome reporting and benchmark with our peers. Now, there’s a very real danger that neglecting to represent your success stories with proper data will threaten your ability to muster financial support. You don’t want to be great at what you do, but have no way to show it.
More to the point, the metrics that value social organizational effectiveness need to be developed by a broad community, not a small group or segment of that community. The move by Charity Navigator and their peers is bold, but it’s also complicated. Nonprofit effectiveness is a subjective thing. When I worked for a workforce development agency, we had big questions about whether our mission was served by placing a client in a job, or if that wasn’t an outcome as much as an output, and the real metric was tied to the individual’s long-term sustainability and recovery from the conditions that had put them in poverty.
Certainly, a donor, a watchdog, a funder, a nonprofit executive and a nonprofit client are all going to value the work of a nonprofit differently. Whose interests will be represented in these valuations?
So here’s what’s clear to me:
– Developing standardized metrics, with broad input from the entire community, will benefit everyone.
– Determining what those metrics are and should be will require improvements in data management and reporting systems. It’s a bit of a chicken-and-egg problem: collecting the data is a prerequisite to determining how to assess it, but standardizing the data will assist in developing the data systems.
– We have to share our outcomes and compare them in order to develop actual standards. And there are real opportunities available to us if we do compare our methodologies and results.
This isn’t easy. It will require that NPOs that have never had the wherewithal to invest in technology systems to assess performance do so. But, I maintain, if the world is going to start rating your effectiveness on more than the 990, that’s a threat that you need to turn into an opportunity. You can’t afford not to.
And I look to my nptech community, including Idealware, NTEN, Techsoup, Aspiration and many others — the associations, formal, informal, incorporated or not, who advocate for and support technology in the nonprofit sector — to lead this effort. We have the data systems expertise and the aligned missions to lead the project of defining shared outcome metrics. We’re looking into having initial sessions on this topic at the 2010 Nonprofit Technology Conference.
As the world starts holding nonprofits up to higher standards, we need a common language that describes those standards. It hasn’t been written yet. Without it, we’ll trade the limited Form 990 assessments for something that might equally fail to reflect our best efforts and outcomes.