Tag Archives: standards

Get Ready For A Sea Change In Nonprofit Assessment Metrics

This post was originally published on the Idealware Blog in December of 2009.


Last week, GuideStar, Charity Navigator, and three other nonprofit assessment and reporting organizations made a huge announcement: the metrics that they track are about to change.  Instead of scoring organizations on an “overhead bad!” scale, they will scrap the traditional metrics and replace them with ones that measure an organization’s effectiveness.

The new metrics will assess:

  • Financial health and sustainability;
  • Accountability, governance and transparency; and
  • Outcomes.

This is very good news. That overhead metric has hamstrung serious efforts to do bold things and have higher impact. An assessment based solely on annualized budgetary efficiency precludes many options to make long-term investments in major strategies. For most nonprofits, taking a year to staff up and prepare for a major initiative would generate a poor Charity Navigator score, and that poor score is prominently displayed to potential donors.

Assuming that these new metrics will be more tolerant of varying operational approaches and philosophies, justified by the outcomes, this will give organizations a chance to be recognized for their work, as opposed to their cost-cutting talents.  But it puts a burden on those same organizations to effectively represent that work.  I’ve blogged before (and will blog again) on our need to improve our outcome reporting and benchmark with our peers.  Now, there’s a very real danger that neglecting to represent your success stories with proper data will threaten your ability to muster financial support.  You don’t want to be great at what you do, but have no way to show it.

More to the point, the metrics that measure an organization's social effectiveness need to be developed by a broad community, not a small group or segment of that community. The move by Charity Navigator and their peers is bold, but it's also complicated. Nonprofit effectiveness is a subjective thing. When I worked for a workforce development agency, we had big questions about whether our mission was served by placing a client in a job, or whether that was an output rather than an outcome, with the real metric tied to the individual's long-term sustainability and recovery from the conditions that had put them in poverty.

Certainly, a donor, a watchdog, a funder, a nonprofit executive and a nonprofit client are all going to value the work of a nonprofit differently. Whose interests will be represented in these valuations?

So here’s what’s clear to me:

– Developing standardized metrics, with broad input from the entire community, will benefit everyone.

– Determining what those metrics are and should be will require improvements in data management and reporting systems. It's a bit of a chicken-and-egg problem: collecting the data is a prerequisite to determining how to assess it, but standardizing the data will assist in developing the data systems.

– We have to share our outcomes and compare them in order to develop actual standards.  And there are real opportunities available to us if we do compare our methodologies and results.

This isn't easy. It will require that NPOs who have never had the wherewithal to invest in technology systems to assess performance do so. But, I maintain, if the world is going to start rating your effectiveness on more than the 990, that's a threat that you need to turn into an opportunity. You can't afford not to.

And I look to my nptech community, including Idealware, NTEN, Techsoup, Aspiration and many others — the associations, formal, informal, incorporated or not, who advocate for and support technology in the nonprofit sector — to lead this effort.  We have the data systems expertise and the aligned missions to lead the project of defining shared outcome metrics.  We’re looking into having initial sessions on this topic at the 2010 Nonprofit Technology Conference.

As the world starts holding nonprofits up to higher standards, we need a common language that describes those standards. It hasn't been written yet. Without it, we'll escape the limited Form 990 assessments only to land on something that might equally fail to reflect our best efforts and outcomes.

Why Geeks (like Me) Promote Transparency

This post was originally published on the Idealware Blog in November of 2009.
Mizukurage.jpg
Public Domain image by Takada

Last week, I shared a lengthy piece that could be summed up as:

“in a world where everyone can broadcast anything, there is no privacy, so transparency is your best defense.”

(Mind you, we’d be dropping a number of nuanced points to do that!)

Transparency, it turns out, has been a bit of a meme in nonprofit blogging circles lately. I was particularly excited by this post by Marnie Webb, one of the many CEOs at the uber-resource provider and support organization Techsoup Global.

Marnie makes a series of points:

  • Meaningful shared data, like the Miles Per Gallon ratings on new car stickers or the calorie counts on food packaging, help us make better choices;
  • But not all data is as easy to interpret;
  • Nonprofits have continually been challenged to quantify the conditions that their missions address;
  • Shared knowledge and metrics will facilitate far better dialog and solutions than our individual efforts have;
  • The web is a great vehicle for sharing, analyzing and reporting on data;
  • Therefore, the nonprofit sector should start defining and adopting common data formats that support shared analysis and reporting.

I’ve made the case before for shared outcomes reporting, which is a big piece of this. Sharing and transparency aren’t traditional approaches to our work. Historically, we’ve siloed our efforts, even to the point where membership-based organizations are guarded about sharing with other members.

The reason that technologists like Marnie and I end up jumping on this bandwagon is that the tech industry has modeled the dysfunction of a siloed approach better than most. Early computing was an exercise in cognitive dissonance. If you regularly used Lotus 1-2-3, WordPerfect and dBase (three of the most popular business applications circa 1989) on your MS-DOS PC, then hitting "/", F7 or "." were the things you needed to know in order to close those applications, respectively. For most of my career, I stuck with PCs for home use because I needed compatibility with work, and the Mac operating system, prior to OS X, just couldn't easily provide that.

The tech industry has slowly and painfully progressed towards a model that competes on the sales and services level, but cooperates on the platform side. Applications, across manufacturers and computing platforms, function with similar menus and command sequences. Data formats are more commonly shared. Options are available for saving in popular, often competitive formats (as in Word’s “Save As” offering Wordperfect and Lotus formats). The underlying protocols that fuel modern operating systems and applications are far more standardized. Windows, Linux and MacOS all use the same technologies to manage users and directories, network systems and communicate with the world. Microsoft, Google, Apple and others in the software world are embracing open standards and interoperability. This makes me, the customer, much less of an innocent bystander who is constantly sniped by their competitive strategies.

So how does this translate to our social service, advocacy and educational organizations? Far too often, we frame cooperation as the antithesis of competition. That's a common, but crippling, mistake. The two can and do coexist in almost every corner of our lives. We need to adopt a "rising tide" philosophy that values the work that we can all do together over the work that we do alone, and have some faith that the sustainable model is an open, collaborative one. That means looking at each opportunity to collaborate from the perspective of how it will enhance our ability to accomplish our public-serving goals, and trusting that this won't result in the similarly-focused NGO down the street siphoning off our grants or constituents.

As Marnie is proposing, we need to start discussing and developing data standards that will enable us to interoperate on the level where we can articulate and quantify the needs that our mission-focused organizations address. By jointly assessing and learning from the wealth of information that we, as a community of practice, collect, we can be far more effective. We need to use that data to determine our key strategies and best practices. And we have to understand that, as long as we're treating information as competitive data, keeping it close to our vests and looking at our peers as strictly competitors, the fallout of this cold war is landing on the people that we're trying to serve. We owe it to them to be better stewards of the information that lifts them out of their disadvantaged conditions.
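To make the idea of a common data format concrete, here is a minimal sketch of what a shared outcome record might look like. Every field name and value below is invented for illustration; no such standard exists yet, and defining one is exactly the community project being proposed.

```python
import json

# A hypothetical shared-outcome record. All field names are
# illustrative, not an existing nonprofit data standard.
outcome_record = {
    "organization": "Example Workforce Agency",  # hypothetical org
    "program": "Job Readiness Training",
    "reporting_period": "2009",
    # Outputs: what the program did.
    "outputs": {
        "clients_served": 340,
        "job_placements": 210,
    },
    # Outcomes: what actually changed for clients -- the
    # output/outcome distinction discussed above.
    "outcomes": {
        "employed_after_12_months": 150,
        "median_wage_change_pct": 18.5,
    },
}

# A common, machine-readable format is what would let watchdogs,
# funders, and peer organizations aggregate and compare records.
print(json.dumps(outcome_record, indent=2))
```

The specific container (JSON here) matters less than the agreement on shared field definitions; that agreement is what makes cross-organization comparison possible.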

Paving the Road – a Shared Outcomes Success Story

This post was originally published on the Idealware blog in July of 2009.

I recently wrote about the potential for shared outcome reporting among nonprofits and the formidable challenges to getting there. This topic hits a chord for those of us who believe strongly that proper collection, sharing and analysis of the data that represents our work can significantly improve our performance and impact.

Shared outcome reporting allows organizations both to benchmark their effectiveness against peers and to learn from each other's successful and failed strategies. If your most effective method of analyzing effectiveness is year-to-year comparison, you're only measuring a portion of the elephant. You don't practice your work in a vacuum; why analyze it in one?

But, as I wrote, for many, the investment in sharing outcomes is a hard sell. Getting there requires committing scarce time, labor and resources to the development of the metrics, collection of data, and input; trust and competence in the technology; and partnering with our peers, who, in many cases, are also our competitors. And, in conditions where just keeping up with the established outcome reporting required for grant compliance is one of our greater challenges, envisioning diving into broader data collection, management and integration projects looks very hard to justify.

So let’s take a broader look this time at the justifications, rather than the challenges.

Success Measures is a social enterprise in DC that provides tools and consulting to organizations that want to evaluate their programs and services and use the resulting data. From their website:

Success Measures®, a social enterprise at NeighborWorks® America is an innovative participatory outcome evaluation approach that engages community stakeholders in the evaluation process and equips them with the tools they need to document outcomes, measure impact and inform change.

To accomplish this, in 2000, they set up an online repository of surveying and evaluation tools that can be customized by the participant to meet their needs. After determining what it is that they want to measure, participants work with their constituencies to gather baseline data. Acting on that data, they can refine their programs and address needs, then, a year or two later, use the same set of tools to re-survey and learn from the comparative data. Success Measures supplements the tools collection with training, coaching, and consulting to ensure that their participants are fully capable of benefiting from their services. And, with permission, they provide cross-client metrics; the shared outcomes reporting that we're talking about.

The tools work on sets of indicators, and they provide pre-defined sets of indicators as well as allowing for custom items. The existing sets cover common areas: Affordable housing; community building; economic development; race, class and community. Sets currently under development include green building/sustainable communities; community stabilization; measuring outcomes of asset programs; and measuring value of intermediary services.
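As a rough sketch of the baseline/re-survey cycle described above: measure a set of indicators, re-survey later with the same instrument, and compare. The indicator names and numbers are invented for the example; Success Measures' actual tools are survey instruments, not code.

```python
# Hypothetical baseline and follow-up values for a few indicators
# drawn from a shared set (names and numbers are invented).
baseline = {
    "residents_reporting_safety": 42,
    "homes_rehabbed": 12,
    "vacancy_rate_pct": 9.5,
}
followup = {
    "residents_reporting_safety": 58,
    "homes_rehabbed": 31,
    "vacancy_rate_pct": 7.1,
}

def compare(before, after):
    """Return the change for each indicator measured in both rounds.

    Using the same indicator set in both surveys is what makes this
    comparison -- and cross-client benchmarking -- possible.
    """
    return {k: round(after[k] - before[k], 2) for k in before if k in after}

changes = compare(baseline, followup)
print(changes)
```

The same comparison, run across many participating organizations, is what turns a single agency's evaluation into the shared outcomes reporting the post describes.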

Note that this supports nonprofits on both sides of the equation: they not only provide the shared metrics and accompanying insight into effective strategies for organizations that do what you do; they also provide the tools. This addresses one of the primary challenges, which is that most nonprofits don't have the skills and staff required simply to create the surveying tools.

Once I understood what Success Measures was offering, my big question was, “how did you get any clients?” They had good answers. They actually engage more with the funders than the nonprofits, selling the foundations on the value of the data, and then sending them to their grantees with the recommendation. This does two important things:

  • First, it provides a clear incentive to the nonprofits. The funders aren't just saying "prove that you're effective"; they're saying "here's a way that you can quantify your success. The funding will follow."
  • Second, it provides a standardized reporting structure — with pre-developed tools and support — to the nonprofits. In my experience, having worked for an organization with multiple city, state and federal grants and funded programs, keeping up with the diverse requirements of each funding agency was an administrative nightmare.

So, if the value of comparative, cross-sector metrics isn’t reason enough to justify it, maybe the value of pre-built data collection tools is. Or, maybe the value of standardized reporting for multiple funding sources has a clear cost benefit attached. Or, maybe you’d appreciate a relationship with your funders that truly rewards you with grants based on your effectiveness. Success Measures has a model for all of the above.

Biting the Hand Part 2

This article was originally posted on the Idealware Blog in October of 2008.

This is part two of a three-part rumination on Microsoft. Today I'm discussing their programming environment, as opposed to the open source alternatives that most nonprofits would be likely to adopt instead. Part one, on Windows, is here: http://www.idealware.org/blog/2008/10/biting-hand-that-bites-me-as-it-feeds.html

Imposing Standards

In the early days of personal computing, there were a number of platforms – IBM PC, Apple Macintosh, Amiga, Commodore, Leading Edge… but the first two were the primary ones getting any attention from businesses. The PC was geeky, with its limited command line interface; the Macintosh was cool with its graphics. But two things put the PC on top. One was the spreadsheet – Dan Bricklin's VisiCalc, which debuted on the Apple II, and its PC-era successors like Lotus 1-2-3. A computer is a novelty if you have no use for it, and the spreadsheet, as its modern equivalents are today, was extremely useful. But the bigger reason why the PC beat out the Mac so thoroughly was that the Mac had a strict set of rules that had to be followed in order to program for it, whereas anyone could do pretty much anything on a PC. If you knew Assembler, the programming language that spoke to the machine, you could start there and create whatever you wanted, with no one at IBM telling you which languages and libraries to use, or what you were allowed or not allowed to do. As Windows has matured and gained the bulk of the desktop operating system market, Microsoft has started emulating Apple, raising the standards and requirements for Windows programming in ways that make it far less appealing to developers.

Unlike the early days, when no one had much market share, Windows is now the standard business platform, so there are a lot more reasons to play by whatever rules Microsoft might impose. So, today, being a Windows programmer is a lot like being a Mac programmer. If you're going to have the compatibility and certification that is required, you're going to follow guidelines, use the shared libraries, and probably program in the same tools as every other Windows programmer. The benefit is standardization and uniformity, things that business computer users really appreciate.

Accordingly, the Microsoft platform, which used to run on pretty much all PCs, now faces competition from Linux and other Unix variants, and for much the same reasons that IBM beat out Apple in those early days. What appeals to Java, PHP, Rails and other open source developers is very much the same thing that brought developers to the PC in the first place, and Microsoft’s arguments for sticking to their platform are much like Apple’s – “it’s safer, it’s well-supported, it’s standardized, so a lot of the work is done for you”.  I would argue with each of these claims.

Is it Safer?

The formal programming environment is supposedly more secure, with compiled code and stricter encoding/encryption of data in their web services model. But it seems that the open source model, with, for the major apps, a multitude of eyes on the code, is quicker to find and fix security glitches. Microsoft defenders will argue that, because Microsoft lives in a commercial ecosystem, with paid training and support, that support is more widely available and will continue to be available, whereas open source support and training is primarily community-based and uncompensated. But my experience has been that finding forums, how-tos and code samples for PHP, Python and Rails has always been far easier than finding the equivalent for ASP and C#. In the open source world, all code is always available; in the MS world, you either buy it or you pay someone to teach you.

Is it Easier?

The bar for programming on Microsoft's platform is high. To create a basic web application on the Microsoft platform, or to extend an existing application that supports their web programming standards, you, at a minimum, need to know XML; a scripting language such as Visual Basic or C#; and Active Server Pages (ASP). Modern scripting languages and frameworks like PHP and Ruby on Rails are high level and relatively easy to pick up; Rails, in particular, supports a rapid application development model that can have a functional application built in minutes. These languages support REST, a simple (albeit less secure) way of transmitting data in web applications. Microsoft depends on SOAP, a more formal and complex method. A good piece on REST versus SOAP links here.
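To illustrate the difference in complexity: a REST request is typically just a URL, while SOAP wraps the same request in an XML envelope defined by a formal contract. The endpoint and parameter names below are hypothetical, and the SOAP envelope is simplified for brevity.

```python
from urllib.parse import urlencode

# A hypothetical query: look up a charity's record for a given year.
params = {"ein": "94-1234567", "year": 2009}

# REST: the entire request fits in one line -- a resource URL
# plus query parameters. (Endpoint is invented for illustration.)
rest_request = "http://api.example.org/charities?" + urlencode(params)

# SOAP: the same request as an XML envelope, which a client library
# must build and the server must parse against a WSDL contract.
soap_request = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Body>
    <GetCharity>
      <ein>94-1234567</ein>
      <year>2009</year>
    </GetCharity>
  </soap:Body>
</soap:Envelope>"""

print(rest_request)
```

The SOAP approach buys formality (typed contracts, built-in extensibility) at the cost of the tooling and learning curve discussed above.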

Is it Standardized?

Well, this is where I really have the problem. MS controls their programming languages and environments. If you develop MS software, you do it in MS Visual Studio, and you likely code in a C variant or Visual Basic, using MS compilers. Your database is MS SQL Server. Visual Studio and SQL Server are powerful, mature products – I wouldn't knock them. But Microsoft has blazed through a succession of programming standards over the years at a pace a cheetah couldn't keep up with, revamping their languages rapidly, changing the recommended methods of connecting to data, and generally making the job of keeping up with their expectations unattainable. So while their platform uses standardized code and libraries in order for developers to produce applications with standardized interfaces, the development tools themselves are going through constant, dramatic revisions, far more disruptive ones than the more steady, well-roadmapped enhancements of the open source competition.

Mixed Motives

The drivers for this rapid change are completely mixed up with their marketing goals. For example, MS jumped on the web bandwagon years ago with a product called FrontPage. FrontPage was a somewhat simplistic GUI tool for creating web pages, and it was somewhat infamous for generating web sites that were remarkably uniform in nature. It was eclipsed completely by what is now Adobe's Dreamweaver. If you try to buy FrontPage today, you'll have a hard time finding it, but it didn't go away; it was simply revised and rebranded. FrontPage is now called "SharePoint Designer". It's a product that Microsoft recommends that SharePoint administrators and developers use to modify the SharePoint interface. Mind you, most of your basic SharePoint modifications can be made from within SharePoint, and anything advanced can and should be done in Visual Studio. There's no reason to use this product, as most of what it does is easier to do elsewhere.

So it comes down to time, money and risk. The MS route is more complex and more expensive than the open source alternatives. The support and training is certified and industrialized. All of this costs money – the tools, the support, the code samples, and the developers, who generally make $80-150k a year. The platform development is driven by the market, which leads to a question about its sustainability in volatile times for the company. As concluded in part one, Microsoft knows that the bulk of their products will be obsolesced by a move to Software as a Service. The move from OS-based application development to web development has been rocky, and it's not close to finished.

Look for part 3 sometime next week, where I’ll tie this all up.