
Won’t You Let me Take You On A Sea Change?

This post was originally published on the Idealware Blog in December of 2009.


Last week, I reported that nonprofit assessors like Charity Navigator and GuideStar will be moving to a model of judging effectiveness (as opposed to thriftiness). The title of my post drew some criticism. People far more knowledgeable than I am on these topics questioned my description of this as a “sea change”, and I certainly get their point.  Sure, the intention to do a fair job of judging nonprofits is sincere; but the task is daunting.  As with many such efforts, we might well wind up with something that isn’t a sea change at all, but, rather, a modified version of what we have today: one that includes some info about mission effectiveness, but still boils down to a financial assessment.

Why would this happen? Simple. Because metrics are numbers: ratios, averages, totals. It’s easy to make metrics from financial data.  It’s very difficult to make them out of less quantifiable things, such as measuring how successfully one organization changed the world; protected the planet; or stopped the spread of a deadly disease.
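To make the point concrete, here is a minimal sketch (all dollar figures invented for illustration) of why financial metrics are so easy to produce: the classic overhead ratio is a single division over line items that every Form 990 already reports.

```python
# Minimal sketch: computing an "overhead ratio" from Form 990-style
# line items. All dollar figures below are invented for illustration.

def overhead_ratio(program_expenses, admin_expenses, fundraising_expenses):
    """Fraction of total spending that goes to overhead (admin + fundraising)."""
    total = program_expenses + admin_expenses + fundraising_expenses
    return (admin_expenses + fundraising_expenses) / total

ratio = overhead_ratio(program_expenses=850000,
                       admin_expenses=100000,
                       fundraising_expenses=50000)
print("Overhead: {:.0%}".format(ratio))  # prints "Overhead: 15%"
```

Compare that one-liner to quantifying “poverty reduced”: there is no line item to divide.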

I used to work for an org whose mission was to end poverty in the San Francisco Bay Area. And, sure enough, at the time, poverty was becoming far less prevalent in San Francisco. So could we be judged as successful?  Could we grab the 2005 versus 2000 poverty statistics and claim the advances as our outcomes? Of course not. The reduction in poverty had far more to do with gentrification during the dotcom and real estate booms than our efforts.  Poverty wasn’t reduced at all; it was just displaced. And our mission wasn’t to move all of the urban poor to the suburbs; it was to bring them out of poverty.

So the announcement that our ratings will now factor in mission effectiveness and outcomes could herald something worse than we have today. The dangerous scenario goes like this:

  • Charity Navigator, GuideStar, et al, determine what additional info they need to request from nonprofits in order to measure outcomes.
  • They make that a requirement; nonprofits now have to jump through those hoops.
  • The data they collect is far too generalized and subjective to mean much; they draw conclusions anyway, based more on how easy it is to call something a metric than how accurate or valuable that metric is.
  • NPOs now have more reporting requirements and no better representation.

So, my amended title: “We Need A Sea Change In The Way That Our Organizations Are Assessed”.

I’m harping on this topic because I consider it a call to action; a chance to make sure that this self-assessment by the assessors is an opportunity for us, not a threat. We have to get the right people at the table to develop standardized outcome measurements that the assessing organizations can use.  They can’t develop these by themselves. And we need to use our influence in the nonprofit software development community to make sure that NPOs have software that can generate these reports.

The good news? Holly Ross of NTEN got right back to me with some ideas on how to get both of these actions going.  That’s a powerful start. We’ll need the whole community in on this.

Get Ready For A Sea Change In Nonprofit Assessment Metrics

This post was originally published on the Idealware Blog in December of 2009.


Last week, GuideStar, Charity Navigator, and three other nonprofit assessment and reporting organizations made a huge announcement: the metrics that they track are about to change.  Instead of scoring organizations on an “overhead bad!” scale, they will scrap the traditional metrics and replace them with ones that measure an organization’s effectiveness.

The new metrics will assess:

  • Financial health and sustainability;
  • Accountability, governance and transparency; and
  • Outcomes.

This is very good news. That overhead metric has hamstrung serious efforts to do bold things and have higher impact. An assessment that is based solely on annualized budgetary efficiency precludes many options to make long-term investments in major strategies.  For most nonprofits, taking a year to staff up and prepare for a major initiative would generate a poor Charity Navigator score, one that is prominently displayed to potential donors.

Assuming that these new metrics will be more tolerant of varying operational approaches and philosophies, justified by the outcomes, this will give organizations a chance to be recognized for their work, as opposed to their cost-cutting talents.  But it puts a burden on those same organizations to effectively represent that work.  I’ve blogged before (and will blog again) on our need to improve our outcome reporting and benchmark with our peers.  Now, there’s a very real danger that neglecting to represent your success stories with proper data will threaten your ability to muster financial support.  You don’t want to be great at what you do, but have no way to show it.

More to the point, the metrics that value social organizational effectiveness need to be developed by a broad community, not a small group or segment of that community. The move by Charity Navigator and their peers is bold, but it’s also complicated.  Nonprofit effectiveness is a subjective thing. When I worked for a workforce development agency, we had big questions about whether our mission was served by placing a client in a job, or whether that was less an outcome than an output, with the real metric tied to the individual’s long-term sustainability and recovery from the conditions that had put them in poverty.

Certainly, a donor, a watchdog, a funder, a nonprofit executive and a nonprofit client are all going to value the work of a nonprofit differently. Whose interests will be represented in these valuations?

So here’s what’s clear to me:

– Developing standardized metrics, with broad input from the entire community, will benefit everyone.

– Determining what those metrics are and should be will require improvements in data management and reporting systems. It’s a bit of a chicken and egg problem, as collecting the data is a precedent to determining how to assess it, but standardizing the data will assist in developing the data systems.

– We have to share our outcomes and compare them in order to develop actual standards.  And there are real opportunities available to us if we do compare our methodologies and results.
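To picture what “standardizing the data” might mean in practice, here is a purely hypothetical sketch of a shared outcome record; every field name below is invented for illustration and does not reflect any actual standard.

```python
# Hypothetical sketch of a standardized outcome record that peer
# organizations could exchange and benchmark against. Field names
# are invented for illustration only.

from dataclasses import dataclass, asdict

@dataclass
class OutcomeRecord:
    org_ein: str    # IRS employer ID, so records can be matched to a Form 990
    program: str    # the program that produced the outcome
    metric: str     # an agreed-upon, community-defined metric name
    value: float    # the measured value for the reporting period
    period: str     # reporting period, e.g. "2009-Q4"

record = OutcomeRecord(
    org_ein="00-0000000",
    program="Workforce Development",
    metric="clients_employed_12_months",
    value=42,
    period="2009-Q4",
)
print(asdict(record))
```

With even this much structure agreed upon, two agencies could compare like with like; without it, each outcome report is a one-off narrative.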

This isn’t easy. This will require that NPOs who have never had the wherewithal to invest in technology systems to assess performance do so.  But, I maintain, if the world is going to start rating your effectiveness on more than the 990, that’s a threat that you need to turn into an opportunity.  You can’t afford not to.

And I look to my nptech community, including Idealware, NTEN, Techsoup, Aspiration and many others — the associations, formal, informal, incorporated or not, who advocate for and support technology in the nonprofit sector — to lead this effort.  We have the data systems expertise and the aligned missions to lead the project of defining shared outcome metrics.  We’re looking into having initial sessions on this topic at the 2010 Nonprofit Technology Conference.

As the world starts holding nonprofits up to higher standards, we need a common language that describes those standards.  It hasn’t been written yet.  Without it, we’ll escape the limited Form 990 assessments only to land on something that might equally fail to reflect our best efforts and outcomes.

The Cults That Get Things Done

This post was originally published on the Idealware Blog in December of 2009.

Here at Idealware, an organization that’s all about nonprofit-focused software, we understand that the success or failure of a software project often has far more to do with the implementation than the application. So, in addition to discussing software, we talk a lot about project management. To many of us, it seems like the only thing worse than devoting our scant resources to the task of building and maintaining a complex project plan is living with the result of a project that wasn’t planned. While I’m as big a fan as the next guy of PMP-certified, MS Project Ninja masters, and will argue that you need one if your project is to build a new campus or a bridge, I think there are alternate methodologies that can cover us as we roll out our CRMs and web sites, even though I know that these are projects that will fail expensively without proper oversight.

The traditional project planning method starts with a Project Manager, who plays a role that fluctuates between implementation guru, data entry clerk and your nagging Mom when you’re late for school.  The PM, as we’ll call her or him, gathers all of the projected dates, people, budget, and materials, then builds the house of cards that we call the plan.  The plan will detail how the HR Director will spend 15% of her time on a series of scheduled tasks that, if they slip, will impact the Marketing Coordinator and the Database Manager’s tasks and timelines.  So the PM has to be able to quickly, intelligently, rewrite the plan when the HR Director is pulled away for a personnel matter, skewering those assumptions.

My take is that this methodology doesn’t work in environments like ours, where reduced overhead, high turnover and unanticipated priorities are the norm.  We need a less granular methodology; one that will bend easily with our flexible work conditions.  Mind you, when you give up the detailed plan, you give up the certainty that every “i” will be dotted, every “t” crossed, and every outcome accomplished on schedule.  But it’s possible to still keep sight of the important things while sacrificing some of the structural integrity.

First, keep what is critical: clear goals, communication, engagement and feedback.  The biggest risk in any project no matter how well planned, is that you’ll end up with something that has little relation to what you were trying to get.  You need clearly understood goals, shared by all internal and external parties. Each step taken must factor in those goals and be made in light of them.  All parties who have a stake in the project should have a role and a voice in the plan, from the CEO to the data entry clerk.  And everyone’s opinion matters.

Read up on agile project management, a collaborative approach that is more focused on the outcomes than the steps and timeline to get there.  Offload the project management by focusing on expectation management.  The clearer the participants are about their roles and accountability for their contributions, the less they need to be managed.  Take a look at the Cult of Done (their manifesto is at the top of this article).  Sound insane? Maybe.  More insane than spending thousands of dollars and hours on an over-planned project that never yields results? For some perspective, read The Mythical Man Month (or, at least, this Wikipedia article on it), a book that clearly illustrates how the best laid plans can go horribly wrong.

Finally, my advocacy for less stringent forms of project management should not be read as permission to do it haphazardly.  Engagement in and attention to the project can’t be minimized.  I’m suggesting that we can take a more creative, less traditional approach in environments where the traditional approach might be a bad fit, and for projects that don’t require it.  There are a lot of judgment calls involved, and the real challenge, as always, is keeping your eye on the goals and the team accountable for delivering them.

Wave Impressions

This post originally appeared on the Idealware Blog in November of 2009.

A few months ago, I blogged a bit about Google Wave, and how it might live up to the hype of being the successor to email.  Now that I’ve had a month or so to play with it, I wanted to share my initial reactions.  Short story: Google Wave is an odd duck that takes getting used to. As it is today, it is not that revolutionary — in fact, it’s kind of redundant. The jury is still out.

Awkwardness

To put Wave in perspective, I clearly remember my first exposure to email.  I bought my first computer in 1987: a Compaq “portable”. The thing weighed about 60 pounds, sported a tiny green on black screen, and had two 5 and 1/4 inch floppy drives for applications and storage.  Along with the PC, I got a 1200 bps modem, which allowed me to dial up local bulletin boards.  And, as I poked around, I discovered the 1987 version of email: the line editor.

On those early BBSes, emails were sent by typing one line (80 characters, max) of text and hitting “enter”.  Once “enter” was pressed, that line was sent to the BBS.  No correcting typos, no rewriting the sentence.  It was a lot like early typewriters, before they added the ability to strike out previously submitted text.

But, regardless of the primitive editing capabilities, email was a revelation.  It was a new medium; a form of communication that, while far more awkward than telephone communications, was much more immediate than postal mail.  And it wasn’t long before more sophisticated interfaces and editors made their way to the bulletin boards.

Google Wave is also, at this point, awkward. To use it, you have to be somewhat self-confident right from the start, as others are potentially watching every letter that you type.  And while it’s clear that the ability to co-edit and converse about a document in the same place is powerful, it’s messy.  Even if you get over the sprawling nature of the conversations, which are only minimally better than what you would get with ten to twenty-five people all conversing in one Word document, the lack of navigational tools within each wave is a real weakness.

Redundant?

I’m particularly aware of these faults because I just installed and began using Confluence, a sophisticated, enterprise Wiki (free for nonprofits) at my organization. While we’ve been told that Wave is the successor to email, Google Docs and, possibly, Sharepoint, I have to say that Confluence does pretty much all of those things and is far more capable.  All wikis, at their heart, offer collaborative editing, but the good ones also allow for conversations, plug-ins and automation, just as Google Wave promises.  But with a wiki, the canvas is large enough and the tools are there to organize and manage the work and conversation.  With Wave, it’s awfully cramped, and somewhat primitive in comparison.

Too early to tell?

Of course, we’re looking at a preview.  The two things that possibly differentiate Wave from a solid wiki are the “inbox” metaphor and the automation capabilities. Waves can come to you, like email, and anyone who has tried to move a group from an email list to a web forum knows how powerful that can be. And Wave’s real potential is in how the “bots”, server-side components that can interact with the people communicating and collaborating, will integrate the development and conversation with existing data sources.  It’s still hard to see all of that in this nascent stage.  Until then, it’s a bit chicken and egg.

Wave starting points

There are lots of good Wave resources popping up, but the best, hands down, is Gina Trapani’s Complete Guide, available online for free and in book form soon. Gina’s blog is a must-read for people who find the types of things I write about interesting.

Once you’re on Wave, you’ll want to find waves to join, and exactly how you do that is anything but obvious.  The trick is to search for a term such as “nonprofit” or “fundraising” and add the phrase “with:public”. A good nonprofit wave to start with is titled, appropriately, “The Nonprofit Technology Wave”.

If you haven’t gotten a Wave invite and want to, now is the time to query your Twitter and Facebook friends, because invites are being offered and we’ve passed the initial “gimme” stage.  In fact, I have ten or more to share (I’m peterscampbell on most social networks and at Google’s email service).

The Idealware Research Fund

Fans of this blog are likely fans of the other site I blog at, Idealware.  So you already know that Idealware offers a rich, valuable service to the nonprofit community with its reports, webinars, trainings and programs that help nonprofits make smart decisions about software.  One of the big challenges that Idealware faces is to maintain a high level of independence for their reporting.  If your goal is to be the Consumer Reports of nonprofit software, and you need funding in order to do that, you also need to be very careful about how you receive that funding, in order to make sure that no bias creeps through to your reporting. Laura Quinn, Idealware’s founder and primary force, has come up with a few clever models for eliminating such bias, but today she unleashed a more sustainable approach to funding that will greatly simplify the process.

The Idealware Research Fund will provide basic, pooled funding for the great work that Idealware does, keeping it independent, unbiased, and resourced to provide the critical insight that smooths the stormy waters when we embark on big and small technology projects. The fund was kicked off today with a goal of raising $15,000 by December 31st.  Please let people know about Idealware’s work and this opportunity to support them, and consider supporting them yourself, if you can afford to.

Note that my self-interest is minimal here.  I’m an unpaid, volunteer blogger at Idealware and will remain such.  I have been paid (via Techsoup) for a couple of articles I’ve written.  But my support and pitch here is based solely on my belief that Idealware does great, effective work and needs our support.

Twitiquette

This post first appeared on the Idealware Blog in November of 2009.

Social networks provide nonprofits with great opportunities to raise awareness, just as they offer individuals more opportunities to be diagnosed with information overload syndrome. To my mind, the value of tools like Twitter and Facebook is not only that they increase my ability to communicate with people, but also that they replace communication models that are less efficient. Prior to social networks, we had email, phones, fax and Instant Messaging (IM). Each of these was ideal for one-to-one communication, and suitable for group messaging, but poor at broadcasting. With Twitter and Facebook, we have broader recipient bases for our messaging. Accordingly, there’s also an assumption that we are casual listeners. With so much information hitting those streams, it would be unrealistic to expect anyone to listen 24/7.

Geek and Poke cartoon by Oliver Widder


Twitter offers, in addition to the casual stream, a person-to-person option called direct messaging. This is handy when you want to share information with a Twitter friend that you might not want to broadcast, such as your email address, or a link to a map to your house. You can only direct message someone who is following you — otherwise, it would be far too easy to abuse. Direct messages have more in common with old-fashioned IM and email than Twitter posts. You can’t direct message multiple recipients, and most of us receive direct messages in our email inboxes and/or via SMS, to ensure that we don’t miss them.

So I took note when a friend on a popular forum posted that his organization was launching a big campaign, and he was looking for a tool that would let him send a direct message to every one of his followers. This, to me, seems like a bad idea. While I follow a lot of people and organizations on Twitter, I subscribe by email to far fewer mailing lists, limiting that personal contact to the ones that I am most interested in and/or able to support. I follow about 250 organizations on Twitter; I have no desire to receive all of their campaign emails. But I trust that, if they are doing something exciting or significant, I’ll hear about it. My friends will post a link on Facebook. They’ll also retweet it. The power of social media is — or, at least, should be — that the interesting and important information gets voted up, and highlighted, based on how it’s valued by the recipients, not the sender.

Social networks differ from email and fax primarily in that they are socially driven messaging. The priority of any particular message is set by the community that each person tunes into. My friend thinks his campaign is the most important thing coming down the pike, and that he should be able to transcend the casual nature of Twitter conversation in order to let me know about it. And, of course, I think that every campaign that my org trumpets is more important than his. But I think that proper campaign etiquette and strategy is to blast information on the media that support that, where your constituents sign up to be individually alerted. If you want to spread the word on Twitter or Facebook, focus on the message, not the media, and let the community carry it for you, if they agree that it’s worthy.