Tag Archives: assessment

The Future Of Technology

The Future of Technology is the name of the track that I am co-facilitating at NTEN’s Leading Change Summit. I’m a late addition, there to support Tracy Kronzak and Tanya Tarr. Unlike the popular Nonprofit Technology Conference, LCS (not to be confused with LSC, as the company I work for is commonly called, or LSC, my wife’s initials) is a smaller, more focused affair with three tracks: Impact Leadership, Digital Strategy, and The Future of Technology. The expectation is that attendees will pick a track and stick with it. Nine hours of interactive sessions on each topic will be followed by a day spent at the Idea Accelerator, a workshop designed to jump-start each attendee’s work in their area. I’m flattered that they asked me to help out, and excited about what we can do to help resource and energize emerging nptech leaders at this event.

The future of technology is also something that I think about often (hey, I’m paid to!), both in terms of what’s coming and in terms of how we (LSC and the nonprofit sector) are going to adapt to it. Here are some of the ideas that I’m bringing to LCS this fall:

  • At a tactical level, no surprise, the future is in the cloud; it’s mobile; it’s software as a service and apps, not server rooms and applications.
  • The current gap between enterprise and personal software is going to go away, and “bring your own app” is going to be the computing norm.
  • Software evaluation will look more at interoperability, mobile, and user interface than advanced functionality.  In a world where staff are more independent in their software use, with less standardization, usability will trump sophistication.  We’ll expect less of our software, but we’ll expect to use it without any training.
  • We’ll expect the same access to information and ability to work with it from every location and every device. There will still be desktop computers, and they’ll have more sophisticated software, but fewer people will be using them.
  • A big step will come within a year or two, when mobile manufacturers solve the input problem. Today, it’s difficult to do serious content creation on mobile devices, due primarily to clumsy keyboards and small screens. They will come up with something creative to address this.
  • IT staffing requirements will change.  And they’ll change dramatically.  But here’s what won’t happen: the percentage of technology labor won’t be reduced.  The type of work will change, and the distribution of tech responsibility will be spread out, but there will still be a high demand for technology expertise.
  • The lines between individual networks will fade. We’ll do business on shared platforms like Salesforce, Box, and {insert your favorite social media platform here}.  Sharing content with external partners and constituents will be far simpler. One network, pervasive computing, no more firewalls (well, not literally — security is still a huge thing that needs to be managed).

This all sounds good! Less IT controlling what you can and can’t do. Consumerization demystifying technology and making it more usable.  No more need to toss around acronyms like “VPN.”

Of course, long after this future arrives, many nonprofits will still be doing things the old-fashioned ways.  Adapting to and adopting these new technologies will require some changes in our organizational cultures.  If technology is going to become less of a specialty and more of a commodity, then technical competency and comfort using new tools need to be common attributes of every employee. Here are the stereotypes that must go away today:

  1. The technophobic executive. You can no longer claim to be qualified to lead an organization or a department if you aren’t comfortable thinking about how technology supports your work. That discomfort disqualifies you.
  2. The control freak techie.  They will fight the adoption of consumer technology with tooth and claw, and use the potential security risks to justify their approach. Well, yes, security is a real concern.  But the risk of data breaches has to be balanced against the lost business opportunities we face when we restrict all technology innovation. I blogged about that here.
  3. The paper-pushing staffer. All staff should have basic data management skills; enough to use a spreadsheet to analyze information and understand when the spreadsheet won’t work as well as a database would.
  4. Silos, big and small. The key benefit of our tech future is the ability to collaborate, both inside our company walls and out. So data needs to be public by default, secured only when necessary. Policy and planning have to cross department lines.
  5. The “technology as savior” trope. Technology can’t solve your problems.  You can solve your problems, and technology can facilitate your solution. It needs to be understood that big technology implementations have to be preceded by business process analysis.  Otherwise, you’re simply automating bad or outdated processes.

I’m looking forward to the future, and I can’t wait to dive into these ideas and more about how we use tech to enhance our operations, collaborate with our community and constituents, and change the world for the better.   Does this all sound right to you? What have I got wrong, and what have I missed?

Get Your IT In Order — I Can Help

While I look for that new job (see below), I’m available for IT consulting gigs.

Not every NPO has a full-time IT Director, and outsourced services can provide some guidance, but many of them aren’t focused on the particular needs of nonprofits.  I’ve had considerable experience running IT Departments, consulting and advising NPOs, and developing strategies for maximizing the impact of technology in resource-constrained environments. This gives me a unique skill set for providing mission-focused guidance on these types of questions:

  • What should IT look like in my organization? In-house or outsourced, or a mix? Where should IT report in? How much staff and budget are required to get the desired outcomes?
  • What type of technology do we need? In-house or cloud-based? How well does what we have serve our mission, and how would we replace it?
  • We’re embarking on a new systems or database project (fundraising/CRM, HR/finance, e-commerce, outcomes measurement/client tracking, virtualization, VOIP phones – you name it). How can we ensure that the project will be technologically sound and sustainable, while meeting our strategic needs?

The services and deliverables that I can offer include:

  • Assessments
  • Strategic plans
  • Staffing plans
  • Immediate consulting and/or project management on current projects
  • Acting CIO/Director status to help put things in order

If you want some tactical guidance in these areas, please get in touch.


What’s Up With The TechSoup Global/GuideStar International Merger?

This article was first published on the Idealware Blog in April of 2010.

TechSoup/GuideStar Int'l Logos

TechSoup Global (TSG) merged with GuideStar International (GSI) last week. Idealware readers are likely well familiar with TechSoup, formerly CompuMentor, a nonprofit that supports other nonprofits, most notably through their TechSoup Stock software (and hardware) donation program, but also via countless projects and initiatives over the last 24 years. GuideStar International is an organization based in London that also works to support nonprofits by reporting on their efforts and promoting their missions to potential donors and supporters.

I spoke with Rebecca Masisak and Marnie Webb, two of the three CEOs of TechSoup Global (Daniel Ben-Horin is the founder and third CEO), in hopes of making this merger easier for all of us to understand. What I walked away with was not only context for the merger, but also a greater understanding of TechSoup’s expanded mission.

Which GuideStar was that?

One of the confusing things about the merger is that, if you digested the news quickly, you might be under the impression that TechSoup is merging with the GuideStar that we in the U.S. are well acquainted with. That isn’t the case. GuideStar International is a completely separate entity from GuideStar US, but with some mutual characteristics:

  • Both organizations were originally founded by Buzz Schmidt, the current President of GuideStar International;
  • They share a name and some agreements as to branding;
  • They both report on the efforts of charitable organizations, commonly referred to as nonprofits (NPOs) in the U.S.; Civil Society Organizations (CSOs) in the U.K.; or Non-Governmental Organizations (NGOs) across the world.

Will this merger change the mission of TechSoup?

TechSoup Global’s mission is working toward a time when every nonprofit and NGO on the planet has the technology resources and knowledge they need to operate at their full potential.

GuideStar International seeks to illuminate the work of every civil society organisation (CSO) in the world.

Per Rebecca, TechSoup’s mission has been evolving internally for some time. The recent name change from TechSoup to TechSoup Global is a clear indicator of their ambition to expand their effectiveness beyond the U.S. borders, and efforts like NGOSource, which helps U.S. Foundations identify worthy organizations across the globe to fund, show a broadening of their traditional model of coordinating corporate donors with nonprofits.

Unlikely Alliances

TechSoup opened their Fundacja TechSoup office in Warsaw, Poland two years ago, in order to better support their European partners and the NGOs there. They currently work with 32 partners outside of the United States. The incorporation of GSI’s London headquarters strengthens their European base of operations, as well as their ties to CSOs, as both TechSoup and GSI have many established relationships. GSI maintains an extensive database, and TechSoup sees great potential in merging their strength as builders of relationships between entities both inside and outside of the nonprofit community with a comprehensive database of organizations and missions.

This will allow them, as Rebecca puts it, to leverage an “unlikely alliance” of partners from the nonprofit/non-governmental groups, corporate world, funders and donors, and collaborative partners (such as Idealware) to educate and provide resources to worthwhile organizations.

Repeatable Practices

After Rebecca provided this context of TSG’s mission and GSI’s suitability as an integrated partner, Marnie unleashed the real potential payload. The goal, right in line with TSG’s mission, is to assist CSOs across the globe in the task of mastering technology in service to their missions. But it’s also to take the practices that work and recreate them. With a knowledge base of organizations and technology strategies, TechSoup is looking to grow external support for the organizations they serve by increasing and reporting on their effectiveness. Identify the organizations, get them resources, and expose what works.

All in all, I’m inspired by TSG’s expanded and ambitious goals, and look forward to seeing the great things that are likely to come out of this merger.

NPO Evaluation, IE6, Still Waters for Wave

This post was first published on the Idealware blog in January of 2010.

Here are a few updates on topics I’ve posted about in the last few months:

Nonprofit Assessment

The announcement that GuideStar, Charity Navigator and others would be moving away from the 990 form as their primary source for assessing nonprofit performance raised a lot of interesting questions, such as “How will assessments of outcomes be standardized in a way that is not too subjective?” and “What will be required of nonprofits in order to make those assessments?” We’ll have a chance to get some preliminary answers to those questions on February 4th, when NTEN will sponsor a phone-in panel discussion with representatives of GuideStar and Charity Navigator, as well as members of the nonprofit community. The panel will be hosted by Sean Stannard-Stockton of Tactical Philanthropy, and will include Bob Ottenhoff of GuideStar, Lucy Bernholtz of Blueprint R & D, Christine Egger of Social Actions, and David Geilhufe of NetSuite.

I’ll be participating as well. You can learn more and register for the free event with NTEN.

The Half-Life of Internet Explorer 6

It’s been quite a few weeks as far as headlines go, with a humanitarian crisis in Haiti; a dramatic election in Massachusetts; a trial to determine whether California’s gay marriage-banning proposition is, in fact, discriminatory; high-profile shakeups in late night television; and word of the Snuggie, version 2, all competing for our attention. An additional, fascinating story is unfolding with Google’s announcement that they might pull their business out of China in light of a massive cybercrime against critics of the Chinese regime that, from all appearances, was either performed or sanctioned by the Chinese government. There’s been a lot of speculation about Google’s motives for such a dramatic move, and I fall in the camp that says, whatever their motives, it’s refreshing to see a gigantic U.S. corporation factor ethics into a business decision.

As my colleague Steve Backman fully explains here, there’s been some fallout from this story for Microsoft. First, like Google and Yahoo!, Microsoft operates a search engine in China and submits to the Chinese government’s censorship filters. They’ve kept mum on their feelings about the cyber-attack. Google’s analysis of that attack reveals that Gmail accounts were hacked and other breaches occurred via security holes in Internet Explorer, versions six and up, that allow a hacker to upload programs and take control of a user’s PC. As this information came to light, France and Germany both issued advisories to their citizens that switching to a browser other than Internet Explorer would be prudent. In response, Microsoft has issued a statement recommending that everyone upgrade from Internet Explorer version 6 to version 8, the current release. What Microsoft doesn’t mention is that the security flaw exists in versions seven and eight as well as six, so upgrading won’t protect you from the threat, although they just released a patch that hopefully will.

So, while their reasoning is suspect, it’s nice to see that Microsoft has finally joined the campaign to retire this old, insecure browser that is incompatible with web standards.

Google Wave: Still Waters

I have kept Google Wave open in a tab in my browser since the day my account was opened, subscribed to about 15 waves, some of them quite well populated. I haven’t seen an update to any of these waves since January 12th, and only one wave has gotten any updates at all in the past month. I can’t give away the invites I have to offer. The conclusion I’m drawing is that, if Google doesn’t do something to make the Wave experience more compelling, it’s going to go the way of a Simply Red B-side and fade from memory. As I’ve said, there is real potential here for something that puts telecommunication, document creation and data mining on a converged platform, and that would be new. But, in its current state, it’s a difficult-to-use substitute for a sophisticated wiki. And, while Google was hyping this, Confluence released a new version of their excellent (free for nonprofits) enterprise wiki that can incorporate (like Wave) Google gadgets. That makes me want to pack up my surfboard.

NPTech Lineup Details

Details have come in for two exciting events in February:

On Thursday, February 4th, at 11:00 am Pacific/2:00 pm Eastern, don’t miss The Overhead Question: The Future of Nonprofit Assessment and Reporting. This panel discussion with representatives from Charity Navigator and GuideStar will cover all of the questions I’ve been blogging about here. Join me with moderator Sean Stannard-Stockton of Tactical Philanthropy, Bob Ottenhoff of GuideStar, Lucy Bernholtz of Blueprint R & D, Christine Egger of Social Actions, David Geilhufe of NetSuite, and host Holly Ross of NTEN. Free registration is here.

And on Wednesday, February 10th, from 10:00 to 2:00 Pacific (1:00 to 5:00 Eastern), NTEN and the Green IT Consortium are putting on the first Greening Your IT Virtual Conference. The day features a plenary by Joseph Khunaysir of Jolera Inc. and six tactical sessions explaining how your org can benefit yourselves and the earth, including the one I’m co-presenting with Matt Eshleman of CITIDC on Server Virtualization. Registration is $120, and it looks well worth it.

Get Ready For A Sea Change In Nonprofit Assessment Metrics

This post was originally published on the Idealware Blog in December of 2009.


Last week, GuideStar, Charity Navigator, and three other nonprofit assessment and reporting organizations made a huge announcement: the metrics that they track are about to change.  Instead of scoring organizations on an “overhead bad!” scale, they will scrap the traditional metrics and replace them with ones that measure an organization’s effectiveness.

The new metrics will assess:

  • Financial health and sustainability;
  • Accountability, governance and transparency; and
  • Outcomes.

This is very good news. That overhead metric has hamstrung serious efforts to do bold things and have higher impact. An assessment that is based solely on annualized budgetary efficiency precludes many options to make long-term investments in major strategies. For most nonprofits, taking a year to staff up and prepare for a major initiative would generate a poor Charity Navigator score, a score that is prominently displayed to potential donors.

Assuming that these new metrics will be more tolerant of varying operational approaches and philosophies, justified by the outcomes, this will give organizations a chance to be recognized for their work, as opposed to their cost-cutting talents.  But it puts a burden on those same organizations to effectively represent that work.  I’ve blogged before (and will blog again) on our need to improve our outcome reporting and benchmark with our peers.  Now, there’s a very real danger that neglecting to represent your success stories with proper data will threaten your ability to muster financial support.  You don’t want to be great at what you do, but have no way to show it.

More to the point, the metrics that measure an organization’s social effectiveness need to be developed by a broad community, not a small group or segment of that community. The move by Charity Navigator and their peers is bold, but it’s also complicated. Nonprofit effectiveness is a subjective thing. When I worked for a workforce development agency, we had big questions about whether our mission was served by placing a client in a job, or if that was an output rather than an outcome, and the real metric was tied to the individual’s long-term sustainability and recovery from the conditions that had put them in poverty.

Certainly, a donor, a watchdog, a funder, a nonprofit executive and a nonprofit client are all going to value the work of a nonprofit differently. Whose interests will be represented in these valuations?

So here’s what’s clear to me:

– Developing standardized metrics, with broad input from the entire community, will benefit everyone.

– Determining what those metrics are and should be will require improvements in data management and reporting systems. It’s a bit of a chicken-and-egg problem, as collecting the data is a precedent to determining how to assess it, but standardizing the data will assist in developing the data systems.

– We have to share our outcomes and compare them in order to develop actual standards.  And there are real opportunities available to us if we do compare our methodologies and results.

This isn’t easy. It will require that NPOs who have never had the wherewithal to invest in technology systems to assess performance do so. But, I maintain, if the world is going to start rating your effectiveness on more than the 990, that’s a threat that you need to turn into an opportunity. You can’t afford not to.

And I look to my nptech community, including Idealware, NTEN, TechSoup, Aspiration and many others — the associations, formal, informal, incorporated or not, who advocate for and support technology in the nonprofit sector — to lead this effort. We have the data systems expertise and the aligned missions to lead the project of defining shared outcome metrics. We’re looking into having initial sessions on this topic at the 2010 Nonprofit Technology Conference.

As the world starts holding nonprofits up to higher standards, we need a common language that describes those standards. It hasn’t been written yet. Without it, we’ll simply trade the limited Form 990 assessments for something that might equally fail to reflect our best efforts and outcomes.

Paving the Road – a Shared Outcomes Success Story

This post was originally published on the Idealware blog in July of 2009.

I recently wrote about the potential for shared outcome reporting among nonprofits and the formidable challenges to getting there. This topic hits a chord for those of us who believe strongly that proper collection, sharing and analysis of the data that represents our work can significantly improve our performance and impact.

Shared outcome reporting allows an organization to both benchmark their effectiveness with peers and learn from each other’s successful and failed strategies. If your most effective method of analyzing effectiveness is year-to-year comparison, you’re only measuring a portion of the elephant. You don’t practice your work in a vacuum; why analyze it in one?

But, as I wrote, for many, the investment in sharing outcomes is a hard sell. Getting there requires committing scarce time, labor and resources to the development of metrics and the collection and input of data; trust and competence in the technology; and partnering with our peers, who, in many cases, are also our competitors. And, in conditions where just keeping up with the established outcome reporting required for grant compliance is one of our greater challenges, diving into broader data collection, management and integration projects looks very hard to justify.

So let’s take a broader look this time at the justifications, rather than the challenges.

Success Measures is a social enterprise in DC that provides tools and consulting to organizations that want to evaluate their programs and services and use the resulting data. From their website:

Success Measures®, a social enterprise at NeighborWorks® America is an innovative participatory outcome evaluation approach that engages community stakeholders in the evaluation process and equips them with the tools they need to document outcomes, measure impact and inform change.

To accomplish this, in 2000, they set up an online repository of surveying and evaluation tools that can be customized by the participant to meet their needs. After determining what it is that they want to measure, participants work with their constituencies to gather baseline data. Acting on that data, they can refine their programs and address needs, then, a year or two later, use the same set of tools to re-survey and learn from the comparative data. Success Measures supplements the tools collection with training, coaching, and consulting to ensure that their participants are fully capable of benefiting from their services. And, with permission, they provide cross-client metrics: the shared outcomes reporting that we’re talking about.

The tools work on sets of indicators, and they provide pre-defined sets of indicators as well as allowing for custom items. The existing sets cover common areas: Affordable housing; community building; economic development; race, class and community. Sets currently under development include green building/sustainable communities; community stabilization; measuring outcomes of asset programs; and measuring value of intermediary services.

Note that this resources nonprofits on both sides of the equation — they not only provide the shared metrics and accompanying insight into effective strategies for organizations that do what you do; they also provide the tools. This addresses one of the primary challenges: most nonprofits don’t have the skills and staff required simply to create the surveying tools.

Once I understood what Success Measures was offering, my big question was, “how did you get any clients?” They had good answers. They actually engage more with the funders than the nonprofits, selling the foundations on the value of the data, and then sending them to their grantees with the recommendation. This does two important things:

  • First, it provides a clear incentive to the nonprofits. The funders aren’t just saying “prove that you’re effective”; they’re saying “here’s a way that you can quantify your success. The funding will follow.”
  • Second, it provides a standardized reporting structure — with pre-developed tools and support — to the nonprofits. In my experience, having worked for an organization with multiple city, state and federal grants and funded programs, keeping up with the diverse requirements of each funding agency was an administrative nightmare.

So, if the value of comparative, cross-sector metrics isn’t reason enough to justify it, maybe the value of pre-built data collection tools is. Or, maybe the value of standardized reporting for multiple funding sources has a clear cost benefit attached. Or, maybe you’d appreciate a relationship with your funders that truly rewards you with grants based on your effectiveness. Success Measures has a model for all of the above.

The Perfect Fit: A Guide To Evaluating And Purchasing Major Software Systems

This article was originally published at Idealware in September of 2008.

A major software package shouldn’t be chosen lightly. In this detailed guide, Peter Campbell walks through how to find software options, evaluate them, make a good decision, and then purchase the system in a way that protects you.

A smart shopper evaluates the item they want to purchase before putting money down. You wouldn’t shop for shoes without checking the size and taking a stroll up and down the aisle to make sure they fit, would you? So what’s the equivalent process for trying on a software package for size? How can you make sure your substantial software purchase won’t leave you sore and blistered after the cash has been exchanged?

That’s the goal of this article—to provide some guidance for properly evaluating major software investments. We’ll walk through how to find potential software options, gather the detailed information you need to evaluate them, make a solid decision and purchase a package in a way that protects you if it doesn’t do what you hoped it would for you.

Is It a Major Software System?

The evaluation process described here is detailed, so it’s probably not cost effective to apply it to every software tool and utility you purchase. How do you know if the package you’re considering is major enough to qualify? Major systems have a dramatic impact on your ability to operate and achieve your mission—they aren’t measured by budget, they’re measured by impact.

To help identify a major purchase, ask yourself:

  • Will the application be used by a significant percentage of your staff?
  • Will multiple departments or organizational units be using it?
  • Will this software integrate with other data systems?
  • If this software becomes unstable or unusable once deployed, will it have significant impact on your nonprofit’s ability to operate?

Giving significant attention to these types of major purchases is likely to save your organization time in the long run.


Taking Preliminary Measurements

Prior to even looking at available software options, make sure you thoroughly define your needs and what the application you select should be able to do for you. Nonprofits are process-driven. They receive, acknowledge, deposit and track donations; they identify, serve and record transactions with clients; and they recruit, hire and manage employees. Technology facilitates the way your organization manages these processes. A successful software installation will make this work easier, more streamlined and more effective. But a new system that doesn’t take your processes and needs into account will only make running your organization more difficult.

So it’s critical that, before you begin looking for that donor database or client-tracking system, you clearly understand the processes that need to be supported and the software features critical to support that work.

This is an important and complex area that could easily be an article—or a book—in its own right. We could also write numerous articles that delve into project management, getting company buy-in and change management—all critical factors in organizational readiness. However, for the purposes of this article, we’re focusing on the process of evaluating and purchasing software once you’ve already identified your needs and prepped the organization for the project.

Finding the Available Options

Once you know what you need and why you need it, the next step is to identify the pool of applications that might fit. An expert consultant can be a huge help. A consultant who knows the market and is familiar with how the systems are working for other nonprofits can save you research time, and can direct you to systems more likely to meet your true needs. While a consultant can be more expensive than going it alone, money spent up front on the selection and planning phases is almost always recouped through lower costs and greater efficiency down the road.

If a consultant isn’t warranted, take advantage of the resources available to the nonprofit community, such as Idealware, Social Source Commons, TechSoup’s forums or NTEN’s surveys. Ask your peers what they’re using, how they like it and why. Ideally, you want to identify no fewer than three, and probably no more than eight, suitable products to evaluate.

Considering an RFP

With your list of possible software candidates in hand, the next step is to find out more about how those packages meet your needs. This is traditionally done through a Request for Proposal (RFP), a document that describes your environment and asks for the information you need to know about the products you’re evaluating.

Well-written RFPs can be extremely valuable for understanding the objective aspects of large software purchases. For example, if you are looking for a Web site content management system (CMS), questions such as “Does the blogging feature support trackbacks?” or “Can the CMS display individualized content based on cookie or user authentication?” are good ones for an RFP.

What you want from the RFP is information you can track with checkboxes. For example, “It can/can’t do this,” “It can/can’t export to these formats: XML, SQL, CSV, PDF,” or “They can program in PHP and Ruby, but not Java or Cold Fusion.” Questions that encourage vendors to answer unambiguously, with answers that can be compared in a simple matrix, will be useful for assessing and documenting the system capabilities.

An RFP can’t address all the concerns you’re likely to have. Subjective questions like “How user-friendly is your system?” or “Please describe your support” are unlikely to be answered meaningfully through an RFP process.

Certainly, you can arrange for demonstrations, and use that opportunity to ask your questions without going through an RFP process. But while the formality of an RFP might seem unnecessary, there are some key reasons for getting your critical questions answered in writing:

  • You can objectively assess the responses and only pursue the applications that aren’t clearly ruled out, saving some time later in the process.
  • A more casual phone or demo approach might result in different questions asked and answered by different vendors. An RFP process puts all of the applications and vendors on a level field for assessing.
  • The RFP responses of the vendor you select are routinely attached to the signed contract. An all-too-common scenario is that the vendor answers all of your questions with “yes, yes, yes,” but the answers change once you start to implement the software. If you don’t have the assurances that the software will do what you require in writing, you won’t have solid legal footing to void a contract.

Structuring Your RFP

RFPs work well as a four-section document. Below, we walk through each of those sections.

Introduction

The introduction provides a summary of your organization, your mission and the purpose of the RFP.

Background

The background section provides context the vendor will need to understand your situation. Consider including a description of your organization—for instance, number of locations, number of staff and organizational structure, the processes the system should support, and such technology infrastructure as network operating system(s) and other core software packages. Include any upcoming projects that might be relevant.

Questionnaire

The questionnaire is the critical piece of the document—you want to be sure you ask all of the questions that you need answered. In preparing these questions, it’s best to envision what the vendor responses might look like. What will have to be in those responses for you to properly assess them? Consider asking about:

  • Functionality. In order to get answers you’ll be able to compare, ask your questions at a granular level. Does a CRM support householding? Does a donor database have a method for storing soft credits? Can multiple users maintain and view records of donor interactions? Can alerts or notifications be programmed in response to particular events? Use the results of your business requirements work to focus in on the functions that are critical to you and your more unusual needs.
  • Technology specifics. Make sure the software will integrate properly with other applications, that the reporting is robust and customizable by end users, and that the platform is well-supported. Ask which formats data can be exported to and imported from, how many tables can be queried simultaneously and what type of support is available—both from the vendor and third parties. Ask for a data dictionary, which a technical staffer or consultant can review, because a poorly designed database will complicate reporting and integration. And ask for a product roadmap. If the next version is going to be a complete rewrite of the application, you might want to rule out the current version for consideration.
  • Company information. Think through what you’ll want to know about the company itself. How big is it? Do they have an office near you? How long have they been in business? Are they public or private? Can they provide some documentation of financial viability? Who are the staff members that would be assigned to your project? References from similar clients with similar-scope projects can also be very useful. For more information on this area, see Idealware’s article Vendors as Allies: How to Evaluate Viability, Service, and Commitment.
  • Pricing and availability. What are their hourly rates, broken down by role, if applicable? What are their payment terms? What is their total estimate for the project as described? How do they handle changes in project scope that might arise during implementation? What are their incidental rates and policies (travel, meals)? Do they discount their services or software costs for 501(c)(3)s? How long do they estimate this project will take? When are they available to start?


While it’s important to be thorough, don’t ask a lot of questions you don’t plan to actually use to evaluate the systems. Asking questions “just in case” increases the amount of information you’ll need to sift through later, and increases the possibility that vendors might decide your RFP isn’t worth the time to respond to.

Instructions

Close with a deadline and details about how to submit replies. For a sizeable RFP, allow a minimum of four to six weeks for a response. Remember that this isn’t a confrontational process—a good vendor will appreciate and want to work with a client that has thought things out this well, and the questionnaire is also an opportunity for them to understand the project up front and determine their suitability for it. Respect their schedules and give them ample time to provide a detailed response.

Include an indication as to how additional questions will be handled. In general, if one vendor asks for clarification or details, your answers should be shared with all of the RFP participants. You want to keep things on a level playing field, and not give one vendor an advantage over the rest. You might do this via a group Q&A, with all the vendors invited to participate in a meeting or conference call after the RFP has been sent to them but well before they are due to respond. With all vendors asking their questions in the same room, you keep them all equally informed. Alternatively, you can specify a deadline by which written questions must be submitted. All participants would then receive the questions and answers.

Evaluating the Answers

Once you receive RFP responses, you’ll need to winnow down your list to determine which packages you’d like to demo.

If you asked straightforward, granular questions, you’ll now reap the benefit: you can set up a comparative matrix. Create a table or spreadsheet with columns for each vendor and rows for each question, summarizing the responses as much as possible in order to have a readable chart. You might add columns that weight the responses, both on the suitability of the vendor’s response (e.g. 1, unacceptable; 2, fair; 3, excellent) and/or on the importance of the question (for instance, some features are going to be much more important to you than others).
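To make the weighting concrete, here is a minimal sketch in Python of how such a scoring matrix might be tallied. The vendors, questions, weights and scores are hypothetical placeholders, not recommendations:

    # Hypothetical RFP scoring matrix: rows are questions, columns are vendors.
    # Each response is scored 1 (unacceptable), 2 (fair) or 3 (excellent),
    # and each question carries a weight reflecting its importance to you.

    weights = {
        "Supports soft credits": 3,
        "Exports to CSV and XML": 2,
        "End-user customizable reporting": 3,
        "Third-party support available": 1,
    }

    scores = {
        "Vendor A": {"Supports soft credits": 3, "Exports to CSV and XML": 2,
                     "End-user customizable reporting": 1, "Third-party support available": 2},
        "Vendor B": {"Supports soft credits": 2, "Exports to CSV and XML": 3,
                     "End-user customizable reporting": 3, "Third-party support available": 1},
    }

    best_possible = 3 * sum(weights.values())  # every question scored "excellent"
    for vendor, answers in scores.items():
        total = sum(weights[q] * answers[q] for q in weights)
        print(f"{vendor}: {total} of {best_possible}")

Run as-is, this prints a weighted total for each vendor, giving you a documented, defensible basis for the shortlist; the spreadsheet equivalent is a SUMPRODUCT of the weight column against each vendor’s score column.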

Going through the features and technology sections, you’ll see the strong and weak points of the applications. In determining which fit your needs, there will likely be some trade-offs—perhaps one application has a stronger model for handling soft credits, but another has more flexible reporting. It’s unlikely that any will jump out as the perfect application, but you’ll be able to determine which are generally suitable, and which aren’t.

For example, if you’re looking for software to manage your e-commerce activities, inventory management might be a critical function for you. If a submitted software package lacks that feature, then you’ll need to eliminate it.  As long as you understand your own critical needs, the RFP responses will identify unsuitable candidates.

You might rule out a vendor or two based on what the RFP response tells you about their availability or company stability. Take care, though, in eliminating vendors based on their RFP pricing information. RFP responses can be very subjective. Before determining that a vendor is too pricy based on their project estimate, dig deeper—other vendors might be underestimating the actual cost. If you feel you have a solid grasp on the project timeline, use the hourly rates as a more significant measurement.

The RFP responses will tell you a lot about the vendors. You’re asking questions that are important to your ability to operate. Their ability to read, comprehend and reasonably reply to those questions will offer a strong indication as to how important your business is to them, and whether they’ll consider your needs as the software is implemented and into the future. If they respond (as many will) to your critical questions with incomplete answers, or with stacks of pre-printed literature saying, in effect, “the answers are in here,” then they’re telling you they won’t take a lot of time to address your concerns.

Keep in mind, though, that a weak sales representative might not mean a weak vendor, particularly if they’re representing a product that comes recommended or looks particularly suitable on all other fronts. It’s acceptable to reject the response and ask the vendor to resubmit if you really feel they have done you, and themselves, a disservice—but temper this with the knowledge that they blew it the first time.

Trying It All On for Size

At this point the process will hopefully have narrowed the field of potential applications down to three to five options. The next step is to schedule software demos. A well-written RFP will offer important, factual and comprehensive details about the application that might otherwise be missed, either by too narrow a demo or by one the vendor orchestrates to highlight product strengths and gloss over weaknesses. But the demos serve many additional purposes:

  • Evaluating look and feel. As good as the specs might look, you’ll know quickly in a demo if an application is really unusable. For instance, an application might technically have that great Zip code lookup feature you asked about in the RFP, but it may be implemented in a way that makes it a pain to use. Prior to the demo, try to present the vendors with a script of the functions you want to see. It can also be useful to provide them with sample data, if they are willing—evaluating a program with data similar to your own data will be less distracting. Be careful not to provide them with actual data that might compromise your—or your constituents’—privacy and security. The goal is to provide a level and familiar experience that unifies the demos and puts you in the driver’s seat, not the vendor.
  • Cross training. The demo is another opportunity for the vendor to educate you regarding the operating assumptions of the software, and for you to provide them with more insight into your needs. A generic donor management system is likely to make very good assumptions about how you track individuals, offer powerful tools for segmentation and include good canned reports, because the donor-courting processes are very similar. But in less standardized areas—or if you have more unusual needs—the model used by the software application can differ dramatically from your internal process, making it difficult for your organization to use. Use the demo to learn how the software will address your own process and less conventional needs.
  • Internal training. Even more valuable is the opportunity to use the demos to show internal staff what they’ll be able to do with the software. Demos are such a good opportunity to get staff thinking about the application of technology that you should pack the room with as many people as you can. Get a good mix of key decision-makers and application end-users—the people who design and perform the business processes the software facilitates. The people who will actually use the software are the ones who can really tell if the package will work for them.

Making the Decision

With luck, your vendor selection process will now be complete, with one package clearly identified as the best option. If key constituents are torn between two options or unimpressed with the lot, senior decision-makers might have to make the call. Be careful, however, not to alienate a group of people whose commitment and enthusiasm for the project might be needed.

If none of the applications you evaluated completely meets your needs, but one comes close, you might consider customizations or software modifications to address the missing areas. Note that any alterations of the basic software package will likely be costly, will not be covered in the packaged documentation and help files, and might break if and when you upgrade the software. Be very sure there isn’t an alternate, built-in way to accomplish your goal. If the modification is justified, make sure it’s done in such a way that it won’t be too difficult to support as the software is developed.

Before making a final decision, you should always check vendor references, but take them with a healthy grain of salt. An organization’s satisfaction with software depends not only on how well it meets their needs, but how familiar they are with their options—there are a lot of people who are happy using difficult, labor-heavy, limited applications simply because they don’t know there are better alternatives.

If you still have a tie after RFPs, demos and reference checks, the best next step is to conduct on-site visits with an existing customer for each software package. As with demos, bring a representative group of management, technical staff and users. Assuming the reference can afford the time to speak with you, the visit will highlight how the software meets their needs, and will give you a good, real world look at its strengths and weaknesses. You’ll also likely walk away with new ideas as to how you might use it.

Signing on the Dotted Line

You’ve selected an application. Congratulations! You might be tired, but you aren’t finished yet. You still need to work with the vendor to define the scope of the engagement and negotiate an agreement that will cover you in case of problems. A good contract clearly articulates and codifies everything that has been discussed to date into a legally binding agreement. If, down the road, the vendor isn’t living up to their promises, or the software can’t do what you were told it would do, then this is your recourse for getting out of an expensive project.

Contract negotiations can take time. It’s far more dangerous to sign a bad contract in the interest of expediency, though, than it is to delay a project while you ensure that both parties—you and the vendor—completely understand each other’s requirements. Don’t start planning the project until the papers have been signed.

A software contract should include a number of parts, including the actual agreement, the license, the scope of work and the RFP.

The Agreement

This is the legal document itself, with all of the mumbo jumbo about force majeure and indemnity. The key things to look for here are:

  • Equal terms and penalties. Are terms and penalties equally assessed? Vendors will write all sorts of terms into contracts that outline what you will do or pay if you don’t live up to your end of the agreement. But they’ll often leave out any equivalent controls on their behavior. You should find every “if this happens, customer will do this” clause and make sure the conditions are acceptable, and that there are complementary terms specified for the vendor’s actions.
  • Reasonable cancellation penalties. If there are penalties defined for canceling a consulting or integration contract, these should not be exorbitant. It’s reasonable for the vendor to impose a limited penalty to cover expenses incurred in anticipation of scheduled work, such as airfare purchased or materials procured. But unless this is a fixed cost agreement, which is highly unusual, don’t let them impose penalties for work they don’t have to do—for example, for a large percentage of the estimated project cost.
  • Agreement under the laws of a sensible state. If the vendor is in California, and you’re in California, then the agreement should be covered by California laws rather than some random other state. In particular, Virginia’s laws highly favor software companies and vendors. In most cases, you want the jurisdiction to be where you live, or at least where the vendor’s headquarters actually are.

The Software License

The license specifies the allowed uses of the software you’re purchasing. This, too, can contain some unacceptable conditions.

  • Use of your data. A software license should not restrict your rights to access or work with your data in any way you see fit. The license agreement will likely contain conditions under which the software warranty would be voided. It’s perfectly acceptable for a commercial software vendor to bar reverse engineering of their product, but it’s not acceptable for them to void the warranty if you are only modifying the data contained within the system. So conditions that bar the exporting, importing, archiving or mass updating of data should be challenged. If the system is hosted, the vendor should provide full access to your data, and the license should include language providing that the client shall have reasonable access for using, copying and backing up all customer information in the database. There should be no language in the contract implying that the vendor owns your data, or that they can use it for any additional purposes.
  • Responsibility avoidance. Software warranties should not include blanket “software provider is not responsible if nothing works” statements. This shouldn’t need to be said, but, sadly, there are often warranty sections in license agreements that say just that.
  • Back doors. The license should not allow for any post-sale reversals of licensing, such as language stating that the contract will be void if the customer uses the software in perfectly reasonable ways they don’t anticipate. For instance, if you want to use the CRM functions of your donor database to track contacts that aren’t potential donors, you shouldn’t sign a contract limiting use of the software to “fundraising purposes”. Also, there should not be any “back doors” programmed into the application that the vendor can maintain for purposes of disabling the software.

The Scope of Work

The Scope of Work (SOW) describes exactly what the project will consist of. It’s an agreement between the vendor and the customer as to what will happen, when, and how long it will take. Good scopes include estimates of hours and costs by task and/or stage of the project. The scope should be attached as a governing exhibit to the contract. Usually, this is negotiated prior to receiving the actual contract. By having it attached to the contract, the vendor is now legally obligated to, basically, do what they said they would do.

The RFP

Like the Scope of Work, the RFP should also be attached as a governing document that assures that the software does what the vendor claimed it would.

In Conclusion

For big ticket purchases, it’s well worth having an attorney review or assist in negotiations. Keep in mind that the goal is to end up with a contract that equally defends the rights of both parties. True success, of course, is a solid contract that is never revisited after signing. Litigation doesn’t serve anyone’s interest.

Bringing It Home

There’s a lot of talk and plenty of examples of technology jumpstarting an organization’s effectiveness. But if someone were to do the tally, there would probably be more stories of the reverse. All too often, organizations make decisions about their software based on uninformed recommendations or quick evaluations of the prospective solutions. Decisions are often based more on expediency than educated selection.

Rushing a major investment can be a critical error. Learn about the available options, thoroughly assess their suitability to your needs and prepare your staff to make the most of them. Then, sign a contract that protects you if, after all else is done, the application and/or vendor fails to live up to the promises. Finding the right application and setting it up to support, not inhibit, your workflow is a matter of finding something that really fits. You can’t do that with your eyes closed.


For More Information

Vendors as Allies: How to Evaluate Viability, Service, and Commitment
An Idealware article on how to think specifically about the less-concrete aspects of software selection.

How To Find Data-Exchange-Friendly Software
An overview of how to ensure you’re going to be able to get data in and out of a software package. (For much more detailed considerations, see our Framework to Evaluate Data Exchange Features.)


Peter Campbell is the director of Information Technology at Earthjustice, a nonprofit law firm dedicated to defending the earth, and blogs about NPTech tools and strategies at Techcafeteria.com. Prior to joining Earthjustice, Peter spent seven years serving as IT Director at Goodwill Industries of San Francisco, San Mateo, and Marin Counties, and has been managing technology for non-profits and law firms for over 20 years.

Robert Weiner and Steve Heye also contributed to this article.