Tag Archives: metrics

It’s Time To Revamp The NTEN Staffing Survey

NTEN's annual Nonprofit IT Staffing survey is out; you can go here to download it. It's free! As with prior years, the report structures its findings around the self-reported technology adoption level of the participants, as follows:

  • Struggling orgs have failing technology and no money to invest in getting it stabilized. They have little or no IT staff.
  • Functioning orgs have a network in place and running, but use tech simply as infrastructure, with little or no strategic input.
  • Operating nonprofits have tech and policies for its use in place, and they gather input from tech staff and consultants before making technology purchasing and planning decisions.
  • Leading NPOs integrate technology planning with general strategic planning and are innovative in their use of tech.

The key metrics discussed in the report are the ratio of IT staff to general staff and IT budget as a percentage of total budget. The IT-to-general-staff ratio comes in at one to thirty, which matches the best information I have on this metric at nonprofits, pulled from CIO4Good and NetHope surveys.

On budgets, an average of 3% of budget going to IT is also normal for NPOs. What's disturbing in the report is that the ratio was higher for smaller orgs and lower for larger ones, which averaged 1.6% or 1.7%. In small orgs, that says that computers, as infrastructure, take up a high percentage of a slim budget. But it also says that larger orgs are under-funding tech. Per Gartner, the cross-industry average is 3.3% of budget. For professional services, healthcare and education — industries that are somewhat analogous to nonprofits — it's over 4%. The reasons why we under-spend are well-known and better ranted about by Dan Pallotta than by me, but it's obvious that, in 2014, we are undermining our efforts if we are spending less than half of what a for-profit would on technology.
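If you want to sanity-check your own shop against these figures, the arithmetic is simple enough to script. Here is a minimal sketch in Python; the benchmark constants are the figures cited above, and the example org is hypothetical, not drawn from the NTEN report.

```python
# Benchmarks cited above: ~1 IT staffer per 30 employees, and 3.3% of
# budget to IT as the Gartner cross-industry average. The org below is
# a hypothetical example, not a survey data point.

STAFF_PER_IT = 30          # reported nonprofit average staffing ratio
CROSS_INDUSTRY_PCT = 3.3   # Gartner cross-industry IT budget average

def it_staffing_gap(total_staff, it_staff):
    """Positions short of (positive) or beyond (negative) the 1:30 ratio."""
    return total_staff / STAFF_PER_IT - it_staff

def it_budget_gap(total_budget, it_budget, target_pct=CROSS_INDUSTRY_PCT):
    """Dollars below (negative) or above (positive) the target IT spend."""
    return it_budget - total_budget * target_pct / 100

# A hypothetical 650-person org with a $60M budget spending 1.7% on IT:
print(it_staffing_gap(650, 12))              # ~9.7 positions short of 1:30
print(it_budget_gap(60_000_000, 1_020_000))  # -960000.0, roughly $1M under
```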

What excites me most about this year's report is what is not in it: a salary chart. All of the prior reports have averaged out the reported IT salary info and presented it in a chart, usually by region. But the survey doesn't collect sufficiently detailed or substantial salary info, so the charts have traditionally suffered from under-reporting and averaging that results in misleading numbers. I was spitting mad last year when the report listed a Northeastern sysadmin salary at $50k. Market is $80k, and the odds that a nonprofit will get somebody talented and committed for 63% of market are slim. Here's my full take on the cost of dramatically underpaying nonprofit staff. NTEN shouldn't be publishing salary info that technophobic CEOs will use as evidence of market rates unless the data is truly representative.

I would love it if NTEN would take this survey a little deeper and use it to highlight the ramifications of our IT staffing and budgeting choices. Using the stumble, crawl, walk, run scale that they've established, we could glean some real insight by checking other statistics against those buckets. Here are some metrics I'd like to see:

  • Average days each year that key IT staff positions are vacant. This would speak to one of the key dangers in underpaying IT staff.
  • Percentage of IT budget for consulting. Do leading orgs spend more or less than trailing? How much bang do we get for that buck?
  • In-house IT staff vs. outsourced IT management. It would be interesting to see where NPOs that outsource IT fall on the struggling-to-leading scale.
  • Percentage of credentialed vs “accidental” techs. I want some data to back up my claim that accidental techies are often better for NPOs than people with lots of IT experience.
  • Who does the lead IT Person report to? How many leading orgs have IT reporting to Finance versus the CEO?

What type of IT staffing metrics would help you make good decisions about how to run your nonprofit? What would help you make a good case for salaries, staffing or external resources to your boss? I want a report from NTEN that does more than just tell me the state of nonprofit IT — that's old, sad news. I want one that gives me data that I can use to improve it.

 

Why I Hate Help Desk Metrics

Photo: birgerking

Tech support, as many of you know, can be a grueling job. There is a huge variety of problems, from frozen screens to document formatting issues to malware infestations to video display madness. There are days when you are swamped with tickets. And there are customers who run the gamut from tech-averse to think-they-know-it-all. I've done tech support and I've managed tech support for most of my career, and providing good support isn't the biggest challenge. Rather, it's keeping the tech support staff from going over the edge.

In our nptech circles, it would be natural to assume that having good metrics on everything help desk would assist me in solving these problems. Good metrics might inform me regarding the proper staffing levels, the types of expertise needed, the gaps in our application suites, all that good stuff that can support my budgeting and strategy. But once I start collecting them, I open myself up to the imminent threat that someone else in management (my boss, the board, or whomever) might want to see the metrics, too. What they want to see are metrics like:

  • Average tickets and calls per day
  • Number of open tickets
  • Average time to resolve a ticket

Their idea is that these numbers will tell them how productive the tech support staff are, how efficient, and how successful they are at resolving problems.
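None of these numbers is hard to produce, which is part of why they get asked for; a few lines against a ticket log will do it. Here's a minimal sketch, with hypothetical ticket fields and sample data rather than anything from a particular help desk product:

```python
from datetime import datetime

# Hypothetical ticket log; "closed" is None for tickets still open.
tickets = [
    {"opened": datetime(2014, 3, 3, 9),  "closed": datetime(2014, 3, 3, 11)},
    {"opened": datetime(2014, 3, 3, 14), "closed": datetime(2014, 3, 6, 10)},
    {"opened": datetime(2014, 3, 4, 8),  "closed": None},
]

def avg_daily_tickets(tickets, days):
    """Average tickets logged per day over the reporting period."""
    return len(tickets) / days

def open_ticket_count(tickets):
    """Number of tickets still waiting on a resolution."""
    return sum(1 for t in tickets if t["closed"] is None)

def avg_resolution_hours(tickets):
    """Average hours from open to close, over closed tickets only."""
    closed = [t for t in tickets if t["closed"] is not None]
    if not closed:
        return 0.0
    total = sum((t["closed"] - t["opened"]).total_seconds() for t in closed)
    return total / len(closed) / 3600

print(avg_daily_tickets(tickets, days=2))  # 1.5 tickets per day
print(open_ticket_count(tickets))          # 1 open ticket
print(avg_resolution_hours(tickets))       # 35.0 hours to resolve, on average
```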

Every one of these is an unreliable metric. Alone or together, they don't tell a meaningful story. Let's take them one by one:

Average daily tickets: This is a number that is allegedly meaningful as it rises and falls.  If we have 30 tickets a day in January, and 50 a day in February, it means something.  But what?  Does it mean that IT is being more productive?  Does it imply that there are more issues popping up? Is it because more people are feeling comfortable about calling the help desk?  If we drop to 15 in February, what does that mean?  That IT has stabilized a lot of problems, or that the users have figured out that others in the org are more helpful than IT?

Number of open tickets: The standard assumption is that fewer is better.  And while that is generally true, it can be deceiving, because the nature of tickets varies dramatically.  Some require budget approvals and other time-consuming delays.  An assumption that tickets are open because the technician hasn’t gotten around to resolving them is often wrong.

Average time to resolve a ticket: This one is deadly, because it is commonly used as a performance metric, based on the assumption that the quicker all tickets are closed, the better the service IT is providing. The common scenario I've encountered when this metric is shared with management is that the tech support staff grow so pressured to close tickets that they regularly close them before the issue is truly resolved. That creates tension with staff: the real power of a help desk ticketing system is in the communication that it enables, and that communication is exactly what gets cut off when techs are pushed to close tickets rather than resolve issues.

Worse, it takes away the technician’s ability to prioritize.  Every ticket must be closed quickly in order to look efficient, so every ticket is a priority.  But, in fact, many tickets aren’t high priority at all.  People often want to report computer problems that they aren’t in a hurry to get resolved.  When every ticket is treated like a fire to be put out, staff, naturally, start getting resistant to shouting “fire”, and stop reporting that annoying pop-up error that they get every time they log in.  They start living with all of the little things that they have inconvenient but bearable workarounds for, and as these pile up, they grow more and more annoyed with their computers — and tech support.

So what would useful metrics for assessing the effectiveness of tech support look like? Here's what I look for:

  • Evidence that the techs are prioritizing tickets correctly. They're jumping when work-stoppage issues are reported and taking their time on very low-priority matters (a rough check of this is sketched just after this list).
  • Tickets in the system are well-documented. We’re capturing complex solutions and noting issues that could be reduced with training, fine-tuning or a software upgrade.
  • Shirts are tucked in, hair isn’t mussed, nobody is on the verge of tears. High stress on support techs is usually plain to see.
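For that first item, one rough way to pull evidence out of the ticket data itself is to look at resolution time grouped by priority rather than in aggregate: a healthy queue shows urgent tickets closing much faster than low-priority ones, while a flat profile suggests everything is being treated as a fire. This is a sketch only, with hypothetical field names:

```python
from collections import defaultdict
from statistics import median

def resolution_hours_by_priority(tickets):
    """Median hours-to-close for each priority level. Good prioritization
    should show 'urgent' well below 'low'; a flat profile is a warning sign."""
    buckets = defaultdict(list)
    for t in tickets:
        if t.get("closed") is not None:
            hours = (t["closed"] - t["opened"]).total_seconds() / 3600
            buckets[t["priority"]].append(hours)
    return {priority: median(hours) for priority, hours in buckets.items()}
```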

The type of person who gravitates to a tech support job is a person who likes to help. There are egos involved, and an accompanying love of solving puzzles, but the job satisfaction comes from solving problems, and that's exactly what we want our support staff to do. Creating an environment where the pressure to close tickets is higher than the pressure to resolve them is a lose-lose scenario for everyone.

The Ethnic Check

Yesterday I received a letter from the State of California alerting me that my Census form is due next week and that I should be sure to fill it out and return it, as is decidedly my intention. That form will include the page that drives many Americans crazy — the one that offers a list of ethnic backgrounds that you can identify yourself by. As my spouse of African-Cherokee-Jamaican-German and who-knows-what-else descent says, this is not a multiple choice question for many of us. Personally, I always check the "white" box, which is not lying, although I always have a nagging doubt that the Semitic parts of my genetic makeup aren't fairly represented by that choice.

Today, skimming through my news feed, I starred this article by Michelle Malkin, passed on by Google Reader’s “Cool” feed, and I just found time to read it. The gist of the article is that Census filler-outers should refrain from allowing the government to peg us by ethnicity, instead choosing “Other” and filling in the comment squares with “American”. Take that, Gubmint statisticians!

Now, this is interesting, because while Ms. Malkin proudly describes herself as a Fox News Commentator, I don’t think this question lands on a liberal/conservative scale. Discomfort with being pegged by race straddles all ideological outposts, as it should. But data is data, and the ethnic makeup of our country by geographic area is a powerful set of data. If we don’t know that a neighborhood is primarily Asian, White, Black or Hispanic, we don’t know if the schools are largely segregated. We don’t know if the auto insurance rates are being assessed with a racial bias. We don’t know if elected officials are representative of the districts they serve. And these are all very important things to know.

It might seem that, by eschewing all data about race, we can consider ourselves above racism. But we can board our windows and doors and dream that the world outside is made of candy, too. It won’t make the world any sweeter. If we don’t have any facts about the ethnic makeup and the conditions of people in this country, then we can’t discuss racial justice and equality in any meaningful fashion. We might hate to take something as personal as the genetic, geographic path that brought us to this country and made us the unique individuals that we are and dissect it, analyze it, generalize about it and draw broad conclusions. It is uncomfortable and, in a way, demeaning. But it’s not as uncomfortable and demeaning as being broadly discriminated against. And without evidence of abuse, and of progress, we can’t end discrimination. We can only board up the windows that display it.

So, I'm not going to take Ms. Malkin's advice on this one, and I'm going to urge my multi-racial wife and kid to be as honest as they can with the choices provided to them. Because we want the government to make decisions based on facts and data, not idealizations, even if it means being a little blasé about who we really are.

Won't You Let Me Take You On A Sea Change?

This post was originally published on the Idealware Blog in December of 2009.


Last week, I reported that nonprofit assessors like Charity Navigator and Guidestar will be moving to a model of judging effectiveness (as opposed to thriftiness). The title of my post drew some criticism. People far more knowledgeable than I am on these topics questioned my description of this as a "sea change", and I certainly get their point. Sure, the intention to do a fair job of judging nonprofits is sincere, but the task is daunting. As with many such efforts, we might well wind up with something that isn't a sea change at all but, rather, a modified version of what we have today that includes some info about mission effectiveness yet still boils down to a financial assessment.

Why would this happen? Simple. Because metrics are numbers: ratios, averages, totals. It's easy to make metrics from financial data. It's very difficult to make them out of less quantifiable things, such as measuring how successfully one organization changed the world, protected the planet, or stopped the spread of a deadly disease.

I used to work for an org whose mission was to end poverty in the San Francisco Bay Area. And, sure enough, at the time, poverty was becoming far less prevalent in San Francisco. So could we be judged as successful?  Could we grab the 2005 versus 2000 poverty statistics and claim the advances as our outcomes? Of course not. The reduction in poverty had far more to do with gentrification during the dotcom and real estate booms than our efforts.  Poverty wasn’t reduced at all; it was just displaced. And our mission wasn’t to move all of the urban poor to the suburbs; it was to bring them out of poverty.

So the announcement that our ratings will now factor in mission effectiveness and outcomes could herald something worse than we have today. The dangerous scenario goes like this:

  • Charity Navigator, Guidestar, et al, determine what additional info they need to request from nonprofits in order to measure outcomes.
  • They make that a requirement; nonprofits now have to jump through those hoops.
  • The data they collect is far too generalized and subjective to mean much; they draw conclusions anyway, based more on how easy it is to call something a metric than how accurate or valuable that metric is.
  • NPOs now have more reporting requirements and no better representation.

So, my amended title: “We Need A Sea Change In The Way That Our Organizations Are Assessed”.

I’m harping on this topic because I consider it a call to action; a chance to make sure that this self-assessment by the assessors is an opportunity for us, not a threat. We have to get the right people at the table to develop standardized outcome measurements that the assessing organizations can use.  They can’t develop these by themselves. And we need to use our influence in the nonprofit software development community to make sure that NPOs have software that can generate these reports.

The good news? Holly Ross of NTEN got right back to me with some ideas on how to get both of these actions going.  That’s a powerful start. We’ll need the whole community in on this.

Get Ready For A Sea Change In Nonprofit Assessment Metrics

This post was originally published on the Idealware Blog in December of 2009.


Last week, GuideStar, Charity Navigator, and three other nonprofit assessment and reporting organizations made a huge announcement: the metrics that they track are about to change.  Instead of scoring organizations on an “overhead bad!” scale, they will scrap the traditional metrics and replace them with ones that measure an organization’s effectiveness.

The new metrics will assess:
  • Financial health and sustainability;
  • Accountability, governance and transparency; and
  • Outcomes.

This is very good news. That overhead metric has hamstrung serious efforts to do bold things and have higher impact. An assessment based solely on annualized budgetary efficiency precludes many options to make long-term investments in major strategies. For most nonprofits, taking a year to staff up and prepare for a major initiative would generate a poor Charity Navigator score, and that score is prominently displayed to potential donors.

Assuming that these new metrics will be more tolerant of varying operational approaches and philosophies, justified by the outcomes, this will give organizations a chance to be recognized for their work, as opposed to their cost-cutting talents.  But it puts a burden on those same organizations to effectively represent that work.  I’ve blogged before (and will blog again) on our need to improve our outcome reporting and benchmark with our peers.  Now, there’s a very real danger that neglecting to represent your success stories with proper data will threaten your ability to muster financial support.  You don’t want to be great at what you do, but have no way to show it.

More to the point, the metrics that value social organizational effectiveness need to be developed by a broad community, not a small group or segment of that community. The move by Charity Navigator and their peers is bold, but it’s also complicated.  Nonprofit effectiveness is a subjective thing. When I worked for a workforce development agency, we had big questions about whether our mission was served by placing a client in a job, or if that wasn’t an outcome as much as an output, and the real metric was tied to the individual’s long-term sustainability and recovery from the conditions that had put them in poverty.

Certainly, a donor, a watchdog, a funder, a nonprofit executive and a nonprofit client are all going to value the work of a nonprofit differently. Whose interests will be represented in these valuations?

So here’s what’s clear to me:

– Developing standardized metrics, with broad input from the entire community, will benefit everyone.

– Determining what those metrics are and should be will require improvements in data management and reporting systems. It's a bit of a chicken-and-egg problem, as collecting the data is a prerequisite to determining how to assess it, but standardizing the data will assist in developing the data systems.

– We have to share our outcomes and compare them in order to develop actual standards.  And there are real opportunities available to us if we do compare our methodologies and results.

This isn't easy. It will require that NPOs who have never had the wherewithal to invest in technology systems to assess performance do so. But, I maintain, if the world is going to start rating your effectiveness on more than the 990, that's a threat that you need to turn into an opportunity. You can't afford not to.

And I look to my nptech community, including Idealware, NTEN, Techsoup, Aspiration and many others — the associations, formal, informal, incorporated or not, who advocate for and support technology in the nonprofit sector — to lead this effort.  We have the data systems expertise and the aligned missions to lead the project of defining shared outcome metrics.  We’re looking into having initial sessions on this topic at the 2010 Nonprofit Technology Conference.

As the world starts holding nonprofits up to higher standards, we need a common language that describes those standards. It hasn't been written yet. Without it, we'll escape the limited Form 990 assessments only to land in something that might equally fail to reflect our best efforts and outcomes.

Paving the Road – a Shared Outcomes Success Story

This post was originally published on the Idealware blog in July of 2009.

I recently wrote about the potential for shared outcome reporting among nonprofits and the formidable challenges to getting there. This topic strikes a chord for those of us who believe strongly that proper collection, sharing and analysis of the data that represents our work can significantly improve our performance and impact.

Shared outcome reporting allows an organization both to benchmark its effectiveness against peers and to learn from others' successful and failed strategies. If your most effective method of analyzing effectiveness is year-to-year comparison, you're only measuring a portion of the elephant. You don't practice your work in a vacuum; why analyze it in one?

But, as I wrote, for many, the investment in sharing outcomes is a hard sell. Getting there requires committing scarce time, labor and resources to the development of the metrics, collection of data, and input; trust and competence in the technology; and partnering with our peers, who, in many cases, are also our competitors. And, in conditions where just keeping up with the established outcome reporting required for grant compliance is one of our greater challenges, envisioning diving into broader data collection, management and integration projects looks very hard to justify.

So let’s take a broader look this time at the justifications, rather than the challenges.

Success Measures is a social enterprise in DC that provides tools and consulting to organizations that want to evaluate their programs and services and use the resulting data. From their website:

Success Measures®, a social enterprise at NeighborWorks® America is an innovative participatory outcome evaluation approach that engages community stakeholders in the evaluation process and equips them with the tools they need to document outcomes, measure impact and inform change.

To accomplish this, in 2000 they set up an online repository of surveying and evaluation tools that can be customized by each participant to meet their needs. After determining what it is that they want to measure, participants work with their constituencies to gather baseline data. Acting on that data, they can refine their programs and address needs, then, a year or two later, use the same set of tools to re-survey and learn from the comparative data. Success Measures supplements the tools collection with training, coaching, and consulting to ensure that their participants are fully capable of benefiting from their services. And, with permission, they provide cross-client metrics: the shared outcomes reporting that we're talking about.

The tools work on sets of indicators, and they provide pre-defined sets of indicators as well as allowing for custom items. The existing sets cover common areas: Affordable housing; community building; economic development; race, class and community. Sets currently under development include green building/sustainable communities; community stabilization; measuring outcomes of asset programs; and measuring value of intermediary services.

Note that this supports nonprofits on both sides of the equation — they not only provide the shared metrics and accompanying insight into effective strategies for organizations that do what you do; they also provide the tools. This addresses one of the primary challenges, which is that most nonprofits don't have the skills and staff required simply to create the surveying tools.

Once I understood what Success Measures was offering, my big question was, “how did you get any clients?” They had good answers. They actually engage more with the funders than the nonprofits, selling the foundations on the value of the data, and then sending them to their grantees with the recommendation. This does two important things:

  • First, it provides a clear incentive to the nonprofits. The funders aren't just saying "prove that you're effective"; they're saying "here's a way that you can quantify your success; the funding will follow."
  • Second, it provides a standardized reporting structure — with pre-developed tools and support — to the nonprofits. In my experience, having worked for an organization with multiple city, state and federal grants and funded programs, keeping up with the diverse requirements of each funding agency was an administrative nightmare.

So, if the value of comparative, cross-sector metrics isn’t reason enough to justify it, maybe the value of pre-built data collection tools is. Or, maybe the value of standardized reporting for multiple funding sources has a clear cost benefit attached. Or, maybe you’d appreciate a relationship with your funders that truly rewards you with grants based on your effectiveness. Success Measures has a model for all of the above.

The Road to Shared Outcomes

This post originally appeared on the Idealware Blog in May of 2009.

At the recent Nonprofit Technology Conference, I attended a somewhat misleadingly titled session called "Cloud Computing: More than just IT plumbing in the sky". The cloud computing issues discussed were nothing like the things we blog about here (see Michelle's and my recent "SaaS Smackdown" posts). Instead, this session was really a dive into the challenges and benefits of publishing aggregated nonprofit metrics. Steve Wright of the Salesforce Foundation led the panel, along with Lucy Bernholz and Lalitha Vaidyanathan. The session was video-recorded; you can watch it here.

Steve, Lucy and Lalitha painted a pretty visionary picture of what it would be like if all nonprofits standardized and aggregated their outcome reporting on the web. Lalitha had a case study that hit on the key levels of engagement: shared measurement systems, comparative performance measurement and a baked-in learning process. Steve made it clear that this is an iterative process that changes as it goes — we learn from each iteration and measure more effectively, or more appropriately for the climate, each time.

I'm blogging about this because I'm with them — this is an important topic, and one that gets lost amidst all of the social media and web site metrics focus in our nptech community. We're big on measuring donations, engagement, and the effectiveness of our outreach channels, and I think that's largely because there are ample tools and extra-community engagement with those metrics — every retailer wants to measure the effectiveness of their advertising and product campaigns as well. Google has a whole suite of analytics available, as do other vendors. But outcomes measurement is more particular to our sector, and the tools live primarily in the reporting functionality of our case and client management systems. They aren't nearly as ubiquitous as the web/marketing analysis tools, and they aren't, for the most part, very flexible or sophisticated.

Now, I wholly subscribe to the notion that you will never get anywhere if you can't see where you're going, so I appreciate how Steve and crew articulated that this vision of shared outcomes is more than just a way to report to our funders; it's also a tool that will help us learn and improve our strategies. Instead of just seeing how your organization has done and striving to improve upon your prior year's performance, shared metrics will offer a window into others' tactics, allowing us all to learn from each other's successes and mistakes.

But I have to admit to being a bit overwhelmed by the obstacles standing between us and these goals. They were touched upon in the talk, but not heavily addressed.

  • Outcome management is a nightmare for many nonprofits, particularly those who rely heavily on government and foundation funding. My brief forays into shared outcome reporting were always welcomed at first, then shot down completely, the minute it became clear that joint reporting would require standardization of systems and compromise on the definitions. Our case management software was robust enough to output whatever we needed, but many of our partners were in Excel or worse. Even if they’d had good systems, they didn’t have in-house staff that knew how to program them.
  • Outcomes are seen by many nonprofit executives as competitive data. If we place ours in direct comparison with the similar NPO down the street, mightn’t we just be telling our funders that they’re backing the wrong horse?
  • The technical challenges are huge — of the NPOs that actually have systems that tally this stuff, the data standards are all over the map, and the in-house skill, as well as time and availability to produce them, is generally thin. You can’t share metrics if you don’t have the means to produce them.

A particular concern is that metrics can be fairly subjective, especially when the metrics produced are determined more by the funding requirements than by the NPO's own standards. When I was at SF Goodwill, our funders were primarily concerned with job placements and wages as proof of our effectiveness. But our mission wasn't one of getting people jobs; it was one of changing lives, so the metrics that we spent the most work on gathering were only partially reflective of our success – more outputs than outcomes. Putting those up against the metrics of an org with different funding, different objectives and different reporting tools and resources isn't exactly apples to apples.

The vision of shared metrics that Steve and crew held up is a worthwhile dream, but, to get there, we're going to have to do more than hold up a beacon saying "This is the way". We're going to have to build and pave the road, working through all of the territorial disputes and diverse data standards in our path. Funders and CEOs are going to have to get together and agree that, in order to benefit from shared reporting, we'll have to overcome the fact that these metrics are used as fodder in the battles for limited funding. Nonprofits and the ecosystem around them are going to have to build the tools and support the art of data management required. These aren't trivial challenges.

I walked into the session thinking that we’d be talking about cloud computing; the migration of our internal servers to the internet. Instead, I enjoyed an inspiring conversation that took place, as far as I’m concerned, in the clouds. We have a lot of work to do on the ground before we can get there.

Fair Pay

A sad, but all too common problem was presented on NTEN‘s main discussion forum yesterday:

An IT Director in New York City, working for a large nonprofit (650 people, multiple locations, full IT platform), got approval from his boss to hire a Systems Administrator (punchline here) at $40,000 annually. Understand, systems administrators rarely make less than $75k a year at similarly sized for-profits. The boss pulled that number out of a salary survey, but, given the quality of it, I'd say he might as well have pulled it out of a hat.

Determining what's fair — or, as we call it, "market" — pay is an art in itself, and good salary surveys, like the one NTEN produces, offer far more than suggested wages: they provide context, like location and industry standards; they discuss trends; and the best ones frame the survey results in terms of what the numbers should mean to us.

So, when I read the NTEN survey, and saw what were still ridiculously low salaries in comparison to the for-profit pay scales, I didn’t read it as “these are good numbers”. I read it as “our industry doesn’t value technology.” Literally. If our salaries are at 50-75% of the rest of the world’s, how are we going to attract long-term, talented people? And if we have a revolving door of mediocre (or, more accurately, some stellar, some miserable) sysadmins running our critical systems, how much money, productivity, and plain competence at our important work are we going to sacrifice? What’s the cost of maintaining instability in order to save bucks on payroll?

So my pitch is that we have to stop thinking that there's a metric called nonprofit wages. There are market rates for positions, and there is a value in serving a mission. So a nonprofit salary is a market salary (what a for-profit would pay), less the monetary value of being able to serve the mission.
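A quick back-of-the-envelope illustration of that formula, with dollar figures that are my own assumptions rather than survey data:

```python
def fair_nonprofit_salary(market_rate, mission_value):
    """Market salary minus the dollar value a candidate places on serving
    the mission -- the formula described above."""
    return market_rate - mission_value

# Assume a $75k market rate for a sysadmin and, generously, that the
# mission is worth a $10k pay cut to the right candidate:
print(fair_nonprofit_salary(75_000, 10_000))  # 65000 -- nowhere near $40k
```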

Nonprofits can't keep thinking that they exist in some world within a world. They compete with all businesses for talent, and, in the IT realm, for-profits not only offer better compensation, they offer more toys, bigger staffs (which translates to more techies to pal around with, something a lot of my staff have missed in nonprofits), and, often, newer technology to learn and deploy. In our field, it's all about current skills.

So I feel for my compatriot in NYC, and hope that he can muster a case for his boss, for both his and his boss's sake. If NTEN is reading, a great accompanying metric for the salary survey would be IT turnover tracking, along with the lengths of the interim periods when key positions (CIO, sysadmin) sit unfilled, and info on how that impacted business objectives. We need to do more than just report on the pay – we have to document the impacts.