The Road to Shared Outcomes

This post originally appeared on the Idealware Blog in May of 2009.

At the recent Nonprofit Technology Conference, I attended a somewhat misleadingly titled session called “Cloud Computing: More than just IT plumbing in the sky”. The cloud computing issues discussed were nothing like the things we blog about here (see Michelle’s and my recent “SaaS Smackdown” posts). Instead, this session was really a dive into the challenges and benefits of publishing aggregated nonprofit metrics. Steve Wright of the Salesforce Foundation led the panel, along with Lucy Bernholz and Lalitha Vaidyanathan. The session was video-recorded; you can watch it here.

Steve, Lucy, and Lalitha painted a pretty visionary picture of what it would be like if all nonprofits standardized and aggregated their outcome reporting on the web. Lalitha presented a case study that hit on the key levels of engagement: shared measurement systems, comparative performance measurement, and a baked-in learning process. Steve made it clear that this is an iterative process that changes as it goes: we learn from each iteration and measure more effectively, or more appropriately for the climate, each time.

I’m blogging about this because I’m with them: this is an important topic, and one that gets lost amid all of the social media and website metrics focus in our nptech community. We’re big on measuring donations, engagement, and the effectiveness of our outreach channels, and I think that’s largely because there are ample tools and extra-community engagement with these metrics; every retailer wants to measure the effectiveness of their advertising and product campaigns as well. Google has a whole suite of analytics available, as do other vendors. But outcomes measurement is more particular to our sector, and the tools live primarily in the reporting functionality of our case and client management systems. They aren’t nearly as ubiquitous as the web and marketing analysis tools, and they aren’t, for the most part, very flexible or sophisticated.

Now, I wholly subscribe to the notion that you will never get anywhere if you can’t see where you’re going, so I appreciate how Steve and crew articulated that this vision of shared outcomes is more than just a way to report to our funders; it’s also a tool that will help us learn and improve our strategies. Instead of only seeing how your own organization has done and striving to improve upon your prior year’s performance, shared metrics will offer a window into others’ tactics, allowing us all to learn from each other’s successes and mistakes.

But I have to admit to being a bit overwhelmed by the obstacles standing between us and these goals. They were touched upon in the talk, but not addressed in depth.

  • Outcome management is a nightmare for many nonprofits, particularly those who rely heavily on government and foundation funding. My brief forays into shared outcome reporting were always welcomed at first, then shot down completely the minute it became clear that joint reporting would require standardization of systems and compromise on definitions. Our case management software was robust enough to output whatever we needed, but many of our partners were in Excel or worse. Even if they’d had good systems, they didn’t have in-house staff who knew how to program them.
  • Outcomes are seen by many nonprofit executives as competitive data. If we place ours in direct comparison with the similar NPO down the street, mightn’t we just be telling our funders that they’re backing the wrong horse?
  • The technical challenges are huge: of the NPOs that actually have systems that tally this stuff, the data standards are all over the map, and the in-house skill, time, and availability to produce the metrics are generally thin. You can’t share metrics if you don’t have the means to produce them. (For a rough idea of what a shared standard would have to pin down, see the sketch after this list.)
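
To make the data-standards point concrete, here is a minimal sketch of what one agreed-upon outcome record might look like. This is purely illustrative: the field names, the `definition_version` idea, and the example numbers are all my own assumptions, not any actual sector standard.

```python
from dataclasses import dataclass
from datetime import date
from statistics import median

@dataclass
class OutcomeRecord:
    """One standardized outcome measurement from one organization."""
    org_id: str              # stable identifier for the reporting nonprofit
    metric: str              # shared metric name, e.g. "job_placements"
    definition_version: str  # which agreed definition was used; definitions drift
    period_start: date
    period_end: date
    value: float

def comparable(records: list[OutcomeRecord], metric: str, version: str) -> list[OutcomeRecord]:
    """Keep only records measured under the same definition,
    so any comparison is apples to apples."""
    return [r for r in records
            if r.metric == metric and r.definition_version == version]

# Hypothetical data: three orgs reporting the "same" metric,
# one of them under a different definition.
records = [
    OutcomeRecord("npo-a", "job_placements", "v1", date(2008, 7, 1), date(2009, 6, 30), 412),
    OutcomeRecord("npo-b", "job_placements", "v1", date(2008, 7, 1), date(2009, 6, 30), 188),
    OutcomeRecord("npo-c", "job_placements", "v2", date(2008, 7, 1), date(2009, 6, 30), 95),
]
usable = comparable(records, "job_placements", "v1")
print(f"{len(usable)} of {len(records)} records are comparable; "
      f"median value: {median(r.value for r in usable)}")
```

Even this toy version shows where the effort goes: every field is a negotiation, and a record measured under a different definition simply drops out of the comparison.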

A particular concern is that the metrics themselves are fairly subjective, especially when what gets measured is determined more by funding requirements than by the NPO’s own standards. When I was at SF Goodwill, our funders were primarily concerned with job placements and wages as proof of our effectiveness. But our mission wasn’t one of getting people jobs; it was one of changing lives, so the metrics that we spent the most work gathering were only partially reflective of our success: more outputs than outcomes. Putting those up against the metrics of an org with different funding, different objectives, and different reporting tools and resources isn’t exactly apples to apples.

The vision of shared metrics that Steve and crew held up is a worthwhile dream, but, to get there, we’re going to have to do more than hold up a beacon saying “This is the way”. We’re going to have to build and pave the road, working through all of the territorial disputes and diverse data standards in our path. Funders and CEOs are going to have to get together and agree that, in order to benefit from shared reporting, we’ll have to overcome the fact that these metrics are used as fodder in the battles for limited funding. Nonprofits and the ecosystem around them are going to have to build the tools and support the data management skills required. These aren’t trivial challenges.

I walked into the session thinking that we’d be talking about cloud computing: the migration of our internal servers to the internet. Instead, I enjoyed an inspiring conversation that took place, as far as I’m concerned, in the clouds. We have a lot of work to do on the ground before we can get there.
