Tag Archives: shared outcomes

Why Geeks (like Me) Promote Transparency

This post was originally published on the Idealware Blog in November of 2009.
[Image: Mizukurage — public domain image by Takada]

Last week, I shared a lengthy piece that could be summed up as:

“in a world where everyone can broadcast anything, there is no privacy, so transparency is your best defense.”

(Mind you, we’d be dropping a number of nuanced points to do that!)

Transparency, it turns out, has been a bit of a meme in nonprofit blogging circles lately. I was particularly excited by this post by Marnie Webb, one of the many CEOs at the uber-resource provider and support organization Techsoup Global.

Marnie makes a series of points:

  • Meaningful shared data, like the Miles Per Gallon ratings on new car stickers or the calorie counts on food packaging, helps us make better choices;
  • But not all data is as easy to interpret;
  • Nonprofits have continually been challenged to quantify the conditions that their missions address;
  • Shared knowledge and metrics will facilitate far better dialog and solutions than our individual efforts have;
  • The web is a great vehicle for sharing, analyzing and reporting on data;
  • Therefore, the nonprofit sector should start defining and adopting common data formats that support shared analysis and reporting.
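A common data format of the kind Marnie proposes could start as something very simple: a shared record schema that any organization can emit and any peer can parse. As a purely hypothetical sketch (these field names are illustrative, not any adopted sector standard):

```python
from dataclasses import dataclass, asdict
import json

# Hypothetical shared record for one reported outcome metric.
# Field names are invented for illustration; no sector-wide
# standard is implied.
@dataclass
class OutcomeRecord:
    org_id: str     # reporting organization
    indicator: str  # what is measured, e.g. "job_placements"
    period: str     # reporting period, e.g. "2009-Q2"
    value: float    # the measured result
    unit: str       # "count", "percent", "usd", ...

record = OutcomeRecord("npo-123", "job_placements", "2009-Q2", 42.0, "count")

# A shared JSON representation that any peer organization's
# reporting tools could consume for cross-organization analysis.
print(json.dumps(asdict(record)))
```

The point isn't the particular fields; it's that once everyone agrees on even a minimal schema like this, aggregation and comparison become mechanical rather than a bespoke negotiation between every pair of organizations.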

I’ve made the case before for shared outcomes reporting, which is a big piece of this. Sharing and transparency aren’t traditional approaches to our work. Historically, we’ve siloed our efforts, even to the point where membership-based organizations are guarded about sharing with other members.

The reason that technologists like Marnie and me end up jumping on this bandwagon is that the tech industry has modeled the dysfunction of a siloed approach better than most. Early computing was an exercise in cognitive dissonance. If you regularly used Lotus 1-2-3, WordPerfect and dBase (three of the most popular business applications circa 1989) on your MS-DOS PC, then “/“, F7 and “.” were the keystrokes you needed to know to close those applications, respectively. For most of my career, I stuck with PCs for home use because I needed compatibility with work, and the Mac operating system, prior to OS X, just couldn’t easily provide that.

The tech industry has slowly and painfully progressed towards a model that competes on the sales and services level, but cooperates on the platform side. Applications, across manufacturers and computing platforms, function with similar menus and command sequences. Data formats are more commonly shared. Options are available for saving in popular, often competitive formats (as in Word’s “Save As” offering WordPerfect and Lotus formats). The underlying protocols that fuel modern operating systems and applications are far more standardized. Windows, Linux and MacOS all use the same technologies to manage users and directories, network their systems, and communicate with the world. Microsoft, Google, Apple and others in the software world are embracing open standards and interoperability. This makes me, the customer, much less of an innocent bystander who is constantly sniped by their competitive strategies.

So how does this translate to our social service, advocacy and educational organizations? Far too often, we frame cooperation as the antithesis of competition. That’s a common, but crippling, mistake. The two can and do coexist in almost every corner of our lives. We need to adopt a “rising tide” philosophy that values the work that we can all do together over the work that we do alone, and have some faith that the sustainable model is an open, collaborative one. We should look at each opportunity to collaborate from the perspective of how it will enhance our ability to accomplish our public-serving goals, and trust that this won’t result in the similarly-focused NGO down the street siphoning off our grants or constituents.

As Marnie is proposing, we need to start discussing and developing data standards that will enable us to interoperate on the level where we can articulate and quantify the needs that our mission-focused organizations address. By jointly assessing and learning from the wealth of information that we, as a community of practice, collect, we can be far more effective. We need to use that data to determine our key strategies and best practices. And we have to understand that, as long as we’re treating information as competitive data — as long as we’re keeping it close to our vests and looking at our peers as strictly competitors — the fallout of this cold war is landing on the people that we’re trying to serve. We owe it to them to be better stewards of the information that lifts them out of their disadvantaged conditions.

Paving the Road – a Shared Outcomes Success Story

This post was originally published on the Idealware blog in July of 2009.

I recently wrote about the potential for shared outcome reporting among nonprofits and the formidable challenges to getting there. This topic hits a chord for those of us who believe strongly that proper collection, sharing and analysis of the data that represents our work can significantly improve our performance and impact.

Shared outcome reporting allows organizations both to benchmark their effectiveness against peers and to learn from each other’s successful and failed strategies. If your most effective method of analyzing effectiveness is year-to-year comparison, you’re only measuring a portion of the elephant. You don’t practice your work in a vacuum; why analyze it in one?
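The difference between the two views can be made concrete. A year-to-year comparison only shows your own trend; a shared pool also shows where you sit among peers. A minimal sketch, with entirely made-up organizations and numbers:

```python
from statistics import median

# Hypothetical placement rates (percent) reported by peer organizations
# into a shared outcomes pool. All names and values are invented.
peer_rates = {"npo_a": 58.0, "npo_b": 71.0, "npo_c": 64.0, "npo_d": 49.0}
our_rate_last_year = 55.0
our_rate_this_year = 60.0

# Year-to-year view: we only see our own improvement.
print(f"Our change: {our_rate_this_year - our_rate_last_year:+.1f} points")

# Shared-outcomes view: we can also benchmark against the peer pool.
benchmark = median(peer_rates.values())
position = "above" if our_rate_this_year > benchmark else "below"
print(f"Peer median: {benchmark:.1f}; we are {position} it")
```

In this invented example the organization improved five points year over year — and would still learn, from the shared pool, that it sits below the peer median and that there are peers worth learning from.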

But, as I wrote, for many, the investment in sharing outcomes is a hard sell. Getting there requires committing scarce time, labor and resources to developing the metrics, collecting the data and entering it; trusting, and being competent with, the technology; and partnering with our peers, who, in many cases, are also our competitors. And in conditions where just keeping up with the outcome reporting required for grant compliance is one of our greater challenges, diving into broader data collection, management and integration projects looks very hard to justify.

So let’s take a broader look this time at the justifications, rather than the challenges.

Success Measures is a social enterprise in DC that provides tools and consulting to organizations that want to evaluate their programs and services and use the resulting data. From their website:

Success Measures®, a social enterprise at NeighborWorks® America is an innovative participatory outcome evaluation approach that engages community stakeholders in the evaluation process and equips them with the tools they need to document outcomes, measure impact and inform change.

To accomplish this, in 2000, they set up an online repository of surveying and evaluation tools that participants can customize to meet their needs. After determining what they want to measure, participants work with their constituencies to gather baseline data. Acting on that data, they can refine their programs and address needs, then, a year or two later, use the same set of tools to re-survey and learn from the comparative data. Success Measures supplements the tools with training, coaching and consulting to ensure that participants are fully capable of benefiting from their services. And, with permission, they provide cross-client metrics: the shared outcomes reporting that we’re talking about.
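That baseline-and-resurvey cycle is, at its core, a paired comparison over a fixed set of indicators. A rough sketch of the comparison step — the indicator names and scores below are invented, not Success Measures data:

```python
# Invented survey scores for the same indicator set, two cycles apart.
# Using a fixed indicator set is what makes the comparison meaningful.
baseline  = {"housing_stability": 3.1, "employment": 2.4, "community_ties": 3.8}
follow_up = {"housing_stability": 3.6, "employment": 3.0, "community_ties": 3.7}

# Compare each indicator against its baseline to see where the
# program moved the needle and where it did not.
for indicator in baseline:
    change = follow_up[indicator] - baseline[indicator]
    print(f"{indicator}: {change:+.1f}")
```

Because every participant measures against the same indicator sets, the same simple comparison works across organizations, which is what makes the cross-client metrics possible.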

The tools work on sets of indicators, and they provide pre-defined sets of indicators as well as allowing for custom items. The existing sets cover common areas: Affordable housing; community building; economic development; race, class and community. Sets currently under development include green building/sustainable communities; community stabilization; measuring outcomes of asset programs; and measuring value of intermediary services.

Note that this supports nonprofits on both sides of the equation — they not only provide the shared metrics and accompanying insight into effective strategies for organizations that do what you do; they also provide the tools. This addresses one of the primary challenges, which is that most nonprofits don’t have the skills and staff required simply to create the surveying tools.

Once I understood what Success Measures was offering, my big question was, “How did you get any clients?” They had good answers. They actually engage more with the funders than the nonprofits, selling the foundations on the value of the data, and then sending them to their grantees with the recommendation. This does two important things:

  • First, it provides a clear incentive to the nonprofits. The funders aren’t just saying “prove that you’re effective”; they’re saying “here’s a way that you can quantify your success. The funding will follow.”
  • Second, it provides a standardized reporting structure — with pre-developed tools and support — to the nonprofits. In my experience, having worked for an organization with multiple city, state and federal grants and funded programs, keeping up with the diverse requirements of each funding agency was an administrative nightmare.

So, if the value of comparative, cross-sector metrics isn’t reason enough to justify it, maybe the value of pre-built data collection tools is. Or, maybe the value of standardized reporting for multiple funding sources has a clear cost benefit attached. Or, maybe you’d appreciate a relationship with your funders that truly rewards you with grants based on your effectiveness. Success Measures has a model for all of the above.

The Road to Shared Outcomes

This post originally appeared on the Idealware Blog in May of 2009.

At the recent Nonprofit Technology Conference, I attended a somewhat misleadingly titled session called “Cloud Computing: More than just IT plumbing in the sky”. The cloud computing issues discussed were nothing like the things we blog about here (see Michelle’s and my recent “SaaS Smackdown” posts). Instead, this session was really a dive into the challenges and benefits of publishing aggregated nonprofit metrics. Steve Wright of the Salesforce Foundation led the panel, along with Lucy Bernholz and Lalitha Vaidyanathan. The session was video-recorded; you can watch it here.

Steve, Lucy and Lalitha painted a pretty visionary picture of what it would be like if all nonprofits standardized and aggregated their outcome reporting on the web. Lalitha had a case study that hit on the key levels of engagement: shared measurement systems, comparative performance measurement and a baked-in learning process. Steve made it clear that this is an iterative process that changes as it goes — we learn from each iteration and measure more effectively, or more appropriately for the climate, each time.

I’m blogging about this because I’m with them — this is an important topic, and one that gets lost amidst all of the social media and website metrics focus in our nptech community. We’re big on measuring donations, engagement, and the effectiveness of our outreach channels, and I think that’s largely because there are ample tools and extra-community engagement with these metrics — every retailer wants to measure the effectiveness of their advertising and their product campaigns as well. Google has a whole suite of analytics available, as do other vendors. But outcomes measurement is more particular to our sector, and the tools live primarily in the reporting functionality of our case and client management systems. They aren’t nearly as ubiquitous as the web/marketing analysis tools, and they aren’t, for the most part, very flexible or sophisticated.

Now, I wholly subscribe to the notion that you will never get anywhere if you can’t see where you’re going, so I appreciate how Steve and crew articulated that this vision of shared outcomes is more than just a way to report to our funders; it’s also a tool that will help us learn and improve our strategies. Instead of seeing how your organization has done, and striving to improve upon your prior year’s performance, shared metrics will offer a window into others’ tactics, allowing us all to learn from each other’s successes and mistakes.

But I have to admit to being a bit overwhelmed by the obstacles standing between us and these goals. They were touched upon in the talk, but not heavily addressed.

  • Outcome management is a nightmare for many nonprofits, particularly those who rely heavily on government and foundation funding. My brief forays into shared outcome reporting were always welcomed at first, then shot down completely the minute it became clear that joint reporting would require standardization of systems and compromise on the definitions. Our case management software was robust enough to output whatever we needed, but many of our partners were in Excel or worse. Even if they’d had good systems, they didn’t have in-house staff who knew how to program them.
  • Outcomes are seen by many nonprofit executives as competitive data. If we place ours in direct comparison with the similar NPO down the street, mightn’t we just be telling our funders that they’re backing the wrong horse?
  • The technical challenges are huge — among the NPOs that actually have systems that tally this stuff, data standards are all over the map, and the in-house skill, as well as the time and availability to produce metrics, is generally thin. You can’t share metrics if you don’t have the means to produce them.

A particular concern is that metrics can be fairly subjective, particularly when the metrics produced are determined more by the funding requirements than by the NPO’s own standards. When I was at SF Goodwill, our funders were primarily concerned with job placements and wages as proof of our effectiveness. But our mission wasn’t one of getting people jobs; it was one of changing lives, so the metrics that we worked hardest to gather were only partially reflective of our success — more outputs than outcomes. Putting those up against the metrics of an org with different funding, different objectives and different reporting tools and resources isn’t exactly apples to apples.

The vision of shared metrics that Steve and crew held up is a worthwhile dream, but, to get there, we’re going to have to do more than hold up a beacon saying “This is the way”. We’re going to have to build and pave the road, working through all of the territorial disputes and diverse data standards in our path. Funders and CEOs are going to have to get together and agree that, in order to benefit from shared reporting, we’ll have to overcome the fact that these metrics are used as fodder in the battles for limited funding. Nonprofits and the ecosystem around them are going to have to build tools and support the art of data management required. These aren’t trivial challenges.

I walked into the session thinking that we’d be talking about cloud computing; the migration of our internal servers to the internet. Instead, I enjoyed an inspiring conversation that took place, as far as I’m concerned, in the clouds. We have a lot of work to do on the ground before we can get there.