Tag Archives: data management

Why I’m Intrigued By Google’s Inbox

Here we go again! Another communication/info management Google product that is likely doomed to extinction (much like the recent social networks I’ve been blogging about), and I can’t help but find it significant and important, just as I did Google Wave, Google Buzz, and the much-loved Google Reader. I snagged an early invite to Google’s new “Inbox” front-end to Gmail, and I’ve been agonizing over it for a few weeks now.  This app really appeals to me, but I’m totally on the fence about actually using it, for a few reasons:

  • This is either a product that will disappear in six months, or it’s what Gmail’s standard interface will evolve into.  It is absolutely an evolved version of recent trends, notably the auto-sorting tabs they added about a year ago.
  • The proposition is simple: if you let Google sort your mail for you, you will no longer have to organize your mail.

I’ve blogged before about how expensive an application email is to maintain, time-wise. We get tons of email (I average over a hundred messages a day between work and home), and every message needs to be managed (deleted, archived, labeled, dragged to a folder, etc.), unlike texts and social media, which you can glance at and either reply to or ignore. The average email inbox is flooded with a wide assortment of information, some useless and offensive (“Meet Beautiful Russian Women”), some downright urgent (“Your Aunt is in the Hospital!”), and a range of stuff in-between. If you get 21 messages while you’re at an hour-long meeting, and the first of the 21 is time-sensitive and critical, it’s not likely to be the first one you read, because it has scrolled below the visible part of your screen. The handful of needles in the crowded haystack can easily be lost forever.

Here’s how Inbox tries to make your digital life easier and less accident-prone:

  • Inbox assumes (incorrectly) that every email has three basic responses: you want to deal with it soon (keep it in the inbox); you want to deal with it later (“snooze” it, with a defined time to return to the inbox); or you want to archive it. They left out “delete it,” which is currently buried under a pop-up menu. That annoys me, because permanently deleting the 25% of my email that can be glanced at (or not even opened) and then discarded is a cornerstone of my inbox management strategy. But, that nit aside, I really agree with this premise.
  • Messages fall into categories, and you can keep a lot of the incoming mail a click away from view, leaving the prime inbox real estate to the important messages. Inbox accomplishes this with “Bundles”, which are the equivalent of the presorted tabs in classic Gmail.  Your “Promotions”, “Updates” and “Social” bundles (among other pre-defined ones) group messages, as opposed to putting each incoming message on its own inbox line. I find the in-list behavior more intuitive than the tabs. You can create your own bundles and teach them to auto-sort — I immediately created one for Family, and added in the primary email addresses for my immediate loved ones.  We’ll see what it learns.
  • Mail doesn’t need to be labeled (you can still label messages, but it’s not nearly as simple a task as it is in classic Gmail). This is the thing I’m wrestling with most — I use my labels.  I have tons of filters defined that pre-label messages as they come in, and my mailbox cleanup process labels what’s missed. I go to the labels often to narrow searches. I totally get that this might not be necessary — Google’s search might be good enough that my labeling efforts are actually more work than just searching the entire inbox each time. But I’m heavily invested in my process.
  • “Highlights” act a bit like Google Now, popping up useful info like flight details and package tracking.

One important note: Inbox does nothing to alter or replace your Gmail application.  It’s an alternative interface. When you archive, delete or label a message in Inbox, it gets archived, deleted or labeled in Gmail as well, but Gmail knows nothing about bundles and, therefore, doesn’t reflect them, and not one iota of Gmail functionality changes when you start using Inbox.  You do start getting double notifications, and Inbox offered to turn off Gmail notifications for me if I wanted to fix that. I turned Inbox down and I’m waiting for Gmail to make a similar offer.  😉

So what Inbox boils down to is a streamlined, Getting Things Done (GTD) frontend for Gmail that removes email clutter, eases email management, and highlights the things that Google thinks are important. If you think Google can do that for you reasonably well, then it might make your email communication experience much saner. You might want to switch to it.  The worst that can happen is that it goes away, in which case Gmail will still be there.

I have invites.  Leave a comment or ping me directly if you’d like one.

If you’re using Inbox already, tell me, has it largely replaced GMail’s frontend for you?  If so, why? If not, why not?


How Easy Is It For You To Manage, Analyze And Present Data?

I ask because my articles are up, including my big piece from NTEN’s Collected Voices: Data-Informed Nonprofits on Architecting Healthy Data Management Systems. I’m happy to have this one available in a standalone, web-searchable format, because I think it’s a bit of a signature work.  I consider data systems architecture to be my main talent, and the most significant work that I’ve done in my career.

  • I integrated eleven databases at the law firm of Lillick & Charles in the late ’90s, using Outlook as a portal to Intranet, CRM, documents and voicemail. We had single-entry of all client and matter data that then, through SQL Server triggers, was pushed to the other databases that shared the data.  This is what I call the “holy grail” of data: entered once by the person who cares most about it, distributed to the systems that use it, and then easily accessible by staff. No misspelled names or redundant data entry chores.
  • In the early 2000’s, at Goodwill, I developed a retail data management system on open source (MySQL and PHP, primarily) that put drill-down reporting in a web browser, updated by 6:00 am every morning with the latest sales and production data.  We were able to use this data in ways that were revolutionary for a budget-challenged Goodwill, and we saw impressive financial results.

The article lays out the approach I’m taking at Legal Services Corporation to integrate all of our grantee data into a “data portal”, built on Salesforce and Box. It’s written with the challenges that nonprofits face front and center: how to do this on a budget, and how to do it without a team of developers on staff.

At a time when, more and more, our funding depends on our ability to demonstrate our effectiveness, we need the data to be reliable, available and presentable.  This is my primer on how you get there from the IT viewpoint.

I also put up four articles from Idealware.  These are all older (2007 to 2009), but they’re all still pretty relevant, although some of you might debate me on the RSS article.

This leaves only one significant piece of my nptech writing missing on the blog, and that’s my chapter in NTEN’s “Managing Technology To Meet Your Mission” book about Strategic Planning. Sorry, you gotta buy that one. However, a PowerPoint that I based on my chapter is here.

Architecting Healthy Data Management Systems

This article was originally published in the NTEN eBook “Collected Voices: Data-Informed Nonprofits” in January of 2014.

Introduction

The reasons why we want to make data-driven decisions are clear.  The challenge, in our cash-strapped, resource-shy environments, is to install, configure and manage the systems that will allow us to easily and efficiently analyze, report on and visualize the data.  This article will offer some insight into how that can be done, while being ever mindful that the money and time to invest are hard to come by.  But we’ll also point out where those investments can pay off in more ways than just the critical one: the ability to justify our mission-effectiveness.

Right off the bat, acknowledge that it might be a long-term project to get there.  But acknowledge as well that you are already collecting all sorts of data, and that there is a lot more data available that can put your work in context.  The challenge is to implement new systems without wasting earlier investments, and to funnel data to a central repository for reporting, as opposed to re-entering it all into a redundant system.  Done correctly, this project should result in greater efficiency once it’s completed.

Consider these goals:

  • An integrated data management and reporting system that can easily output metrics in the formats that constituents and funders desire;
  • A streamlined process for managing data that increases the validity of the data entered while reducing the amount of data entry; and
  • A broader, shared understanding of the effectiveness of our strategic plans.

Here are the steps you can take to accomplish these goals.

Taking Inventory

The first step in building the system involves ferreting out all of the systems that you store data in today.  These will likely be applications, like case or client management systems, finance databases, human resources systems and constituent relationship management (CRM) systems.  It will also include Access databases, Excel spreadsheets, Word documents, email, and, of course, paper.  In most organizations (and this isn’t limited to nonprofits), data isn’t centrally managed.  It’s stored by application and/or department, and by individuals.

The challenge is to identify the data that you need to report on, wherever it might be hidden, and catalogue it. Write down what it is, where it is, what format it is in, and who maintains it.  Catalogue your information security: what content is subject to limited availability within the company (e.g., HR data and HIPAA-related information)? What can be seen organization-wide? What can be seen by the public?

Traditionally, companies have defaulted to securing data by department. While this offers a high-level of security, it can stifle collaboration and result in data sprawl, as copies of secured documents are printed and emailed to those who need to see the information, but don’t have access. Consider a data strategy that keeps most things public (within the organization), and only secures documents when there is clear reason to do so.

You’ll likely find a fair amount of redundant data.  This, in particular, should be catalogued.  For example, say that you work at a social services organization.  When a new client comes on, they’re entered into the case management system, the CRM, a learning management system, and a security system database, because you’ve given them some kind of access card. Key to our data management strategy is to identify redundant data entry and remove it.  We should be able to enter this client information once and have it automatically replicated in the other systems.
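To make the catalogue concrete, here is a minimal sketch of what a couple of inventory entries might look like as structured records, written in Python. The field names and example systems are hypothetical, not prescriptions; the point is that each entry captures what the data is, where it lives, who maintains it, who may see it, and where redundant copies hide:

# A minimal, hypothetical data inventory: one record per data store.
inventory = [
    {
        "data": "Client contact records",
        "location": "Case management system",
        "format": "SQL database",
        "maintainer": "Program staff",
        "visibility": "organization-wide",
        "redundant_copies": ["CRM", "learning management system"],
    },
    {
        "data": "Employee salary history",
        "location": "HR spreadsheet on the file server",
        "format": "Excel",
        "maintainer": "HR director",
        "visibility": "HR only",  # limited availability (HR/HIPAA data)
        "redundant_copies": [],
    },
]

# Flag every store that holds redundant copies -- these are the
# candidates for single-entry, automatic replication.
for entry in inventory:
    if entry["redundant_copies"]:
        print(entry["data"], "is also entered in:", ", ".join(entry["redundant_copies"]))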

Systems Integration

Chances are, of course, that all of your data is not in one system, and the systems that you do have (finance, CRM, etc.) don’t easily integrate with each other.  The first question to ask is, how are we going to get all of our systems to share with each other? One approach, of course, is to replace all of your separate databases with one database.  Fortune 500 companies use products from Oracle and SAP to do this, systems that incorporate finance, HR, CRM and inventory management.  Chances are that these will not work at your nonprofit; the software is expensive and the developers who know how to customize it are, as well.  More affordable options exist from companies like Microsoft, Salesforce, NetSuite and IBM, at special pricing for 501(c)(3)s.

Data Platforms

A data platform is one of these systems that stores your data in a single database, but offers multiple ways of working with the data.  For example, a NetSuite platform can handle your finance, HR, CRM/Donor Management and e-commerce without maintaining separate data stores, allowing you to report on combined metrics on things like fundraiser effectiveness (Donor Management and HR) and mail vs. online donations (E-commerce and Donor Management).  Microsoft’s solution will incorporate separate products, such as SharePoint, Dynamics CRM, and the Dynamics ERP applications (HR, Finance).  Solutions like Salesforce and NetSuite are cloud-only, whereas Microsoft and IBM can be installed locally or run from the cloud.

Getting from here to there

Of course, replacing all of your key systems overnight is neither a likely option nor an advisable one.  Change like this has to be implemented over a period of time, possibly spanning years (for larger organizations where the system changes will be costly and complex). As part of the earlier system evaluation, you’ll want to factor in the state of each system.  Are some approaching obsolescence?  Are some not meeting your needs? Prioritize based on the natural life of the existing systems and the particular business requirements. Replacing major data systems can be difficult and complex — the point isn’t to gloss over this.  You need to have a strong plan that factors in budget, resources, and change management.  Replacing too many systems too quickly can overwhelm both the staff implementing the change and the users of the systems being changed.  If you don’t have executive-level IT staff on board, working with consultants to accomplish this is highly recommended.

Business Process Mapping

[Example of a simple business process map]

The success of the conversion is less dependent on the platform you choose than it is on the way you configure it.  Systems optimize and streamline data management; they don’t manage the data for you.  In order to ensure that this investment is realized, a prerequisite investment is one in understanding how you currently work with data and optimizing those processes for the new platform.

To do this, take a look at the key reports and types of information in the list that you compiled and draw the process that produces each piece, whether it’s a report, a chart, a list of addresses or a board report.  Drawing processes, aka business process mapping, is best done with a flowcharting tool, such as Microsoft Visio.  A simple process map will look like the example shown above.

In particular, look at the processes that are being done on paper, in Word, or in Excel that would benefit from being in a database.  Aggregating information from individual documents is laborious; the goal is to store data in the data platform and make it available for combined reporting.  If today’s process involves cataloguing data in a word processing table or a spreadsheet, then you will want to identify a data platform table that will store that information in the future.

Design Considerations

Once you have catalogued your data stores and the processes in place to interact with the data, and you’ve identified the key relationships between sets of data and improved processes that reduce redundancy, improve data integrity and automate repetitive tasks, you can begin designing the data platform.  This is likely best done with consulting help from vendors who have both expertise in the platform and knowledge of your business objectives and practices.

As much as possible, try to use the built-in functionality of the platform, as opposed to custom programming.  A solid CRM like Salesforce or MS CRM will let you create custom objects that map to your data and then allow you to input, manage, and report on the data that is stored in them without resorting to actual programming in Java or .NET languages.  Once you start developing new interfaces and adding functionality that isn’t native to the platform, things become more difficult to support.  Custom training is required; developers have to be able to fully document what they’ve done, or swear that they’ll never quit, be laid off, or get hit by a bus. And you have to be sure that the data platform vendor won’t release updates that break the home-grown components.

Conclusion

The end game is to have one place where all staff working with your information can sign on and work with the data, without worrying about which version is current or where everything might have been stored.  Ideally, it will be a cloud platform that allows secure access from any internet-accessible location, with mobile apps as well as browser-based access.  Further considerations might include restricted access for key constituents and integration with document management systems and business intelligence tools. But key to the effort is a systematic approach that includes a deep investment in taking stock of your needs and understanding what the system will do for you before the first keypress or mouse click occurs, and patience, so that you get it all and get it right.  It’s not an impossible dream.


Using RSS Tools to Feed Your Information Needs

This article was originally published at Idealware in March of 2009.

The Internet gives you access to a virtual smorgasbord of information. From the consequential to the trivial, the astonishing to the mundane, it’s all within your reach. This means you can keep up with the headlines, policies, trends, and tools that interest your nonprofit, and keep informed about what people are saying about your organization online. But the sheer volume of information can pose challenges, too: namely, how do you separate the useful data from all the rest? One way is to use RSS, which brings the information you want to you.

Many of the Web sites that interest you are syndicated. With RSS, or Really Simple Syndication, you subscribe to them, and when they’re updated, the content is delivered to you — much like a daily newspaper, except you choose the content. On the Web, you can not only get most of what the newspapers offer, but also additional, vital information that informs your organizational and mission-related strategies. You subscribe only to the articles and features that you want to read. It’s absolutely free, and the only difficult part is deciding what to do with all the time you used to spend surfing.

Since TechSoup first published RSS for Nonprofits, there has been an explosion of tools that support RSS use. There are now almost as many ways to view RSS data as there are types of information to manage. Effective use of RSS means determining how you want your information served. What kind of consumer are you? What type of tool will help you manage your information most efficiently, day in and day out? Read on to learn more.

What’s on the Menu?

You probably already check a set of information sources regularly. The first step in considering your RSS needs is to take stock of what you are already reading, and what additional sources you’d like to follow. Some of that information may already be in your browser’s lists of Bookmarks or Favorites, but consider seeking out recommendations from trusted industry sources, friends, and co-workers as well. As you review the Web sites that you’ve identified as important, check them to make sure you can subscribe to them using RSS. You can find this out by looking for “subscribe” options on the Web page itself, or for an orange or blue feed icon resembling a radio signal in the right side of your Web browser’s address bar.

Consider the whole range of information that people are providing in this format. Some examples are:

  • News feeds, from traditional news sources or other nonprofits.
  • Blogs, particularly those that might mention or inform your mission.
  • Updates from social networking sites like Facebook or MySpace (for instance, through FriendFeed).
  • Podcasts and videos.
  • Updates from your own software applications, such as notifications of edits on documents from a document management system, or interactions with a donor from your CRM. (Newer applications support this.)
  • Information from technical support forums and discussion boards.
  • All sorts of regularly updated data, such as U.S. Census information, job listings, classified ads, or even TV listings and comic strips.


You can get a good idea of what’s out there and what’s popular by browsing the recommendations at Yahoo! Directory or iGoogle, while a tool like PostRank can help you analyze feeds and determine which are valuable.

RSS also shines as a tool for monitoring your organization and your cause on the Web. For instance, Google Alerts lets you subscribe, for free, to RSS updates that notify you when a particular word or phrase is used on the Web. (To learn more about “listening” to what others are saying about your organization online, see We Are Media’s wiki article on online listening.)

How Hungry Are You?

Dining options abound: you can order take-out, or go out to eat; you can snack on the go, or take all your meals at home; you can pick at your food, or savor each bite. Your options for RSS reading are equally diverse, and you’ll want to think carefully about your own priorities. Before choosing the tool or tools that suit you, ask some questions about the information you plan to track.

  • How much information is it? Do you follow a few blogs that are updated weekly? Or news feeds, like the New York Times or Huffington Post, which are updated 50 to 200 times a day?
  • How intently do you need to monitor this information? Do you generally want to pore over every word of this information, or just scan for the tidbits that are relevant to you? Is it a problem if you miss some items?
  • Are you generally Web-enabled? Can you use a tool over the Internet, as opposed to one installed on your desktop?
  • Do you jump from one computer to another? Do your feeds need to be synchronized so you can access them from multiple locations?
  • Is this information disposable, or will it need to be archived? Do you read articles, perhaps email the link to a colleague, and then forget about it? Or do you want to archive items of particular interest so you can find them in the future?
  • Will you refer a lot of this information to co-workers or constituents? Would you like to be able to forward items via email, or publish favorites to a Web page?
  • Do you need mobile access to the information? Will you want to be able to see all your feeds from a smartphone, on the run?

Enjoying the Meal

Once you have a solid understanding of your information needs, it’s time to consider the type of tool that you want to use to gather your information. First, let’s look at the terminology:

  • An Article (or Item) is a bit of information, such as a news story, blog entry, job listing or podcast.
  • A Feed is a collection of articles from a single source (such as a blog or Web site).
  • An Aggregated Feed is a collection of articles from numerous feeds displayed together in one folder.

So, what RSS options are available?

Tickers

Like the “crawl” at the bottom of CNN or MSNBC television broadcasts, RSS tickers show an automatically scrolling display of the titles of articles from your RSS feeds. Tickers can be a useful way to casually view news and updates. They’re a poor choice for items that you don’t want to miss, though, as key updates might move past when you’re not paying attention.

Snackr. For a very TV-news-like experience, try Snackr, an Adobe Air application. You can load up a variety of feeds which scroll in an aggregated stream across your desktop while you work.

Gmail users can use the email browser’s Web Clips feature to create a rotating display of RSS headlines above their inbox and messages. Because Gmail is Web-based, your headlines will be available from any computer.

Web Browsers

Your current Web browser — such as Internet Explorer (IE) or Firefox — can likely act as a simple RSS reader, with varying functionality depending on the browser and browser version. Browsers can either display feeds using their built-in viewers, or associate Web pages in RSS format with an installed RSS Feed Reader (much as files ending in “.doc” are associated with Microsoft Word). Even without an installed feed reader, clicking on the link to an RSS feed will typically display the articles in a readable fashion, formatting the items attractively and adding links and search options that assist in article navigation. This works in most modern browsers (IE7 and up, Firefox 2 and up, Safari and Opera). If your browser doesn’t understand feeds, then they will display as hard-to-read, XML-formatted code.

Firefox also supports plug-ins like Wizz RSS News Reader and Sage, which integrate with the browser’s bookmarks so that you can read feeds one at a time by browsing recent entries from the bookmark menu.

Portals

Portals, like iGoogle, My Yahoo!, and Netvibes, are Web sites that provide quick access to search, email, calendars, stocks, RSS feeds, and more. The information is usually presented in a set of boxes on the page, with one box per piece of information. While each RSS feed is typically displayed in a separate box, you can show as many feeds as you like on a single page. This is a step up from a ticker or standard Web browser interface, where you can only see one feed at a time.

Email Browsers

As many of us spend a lot of time dealing with email, your email browser can be a convenient place to read your RSS feeds. Depending on what email browser you use, RSS feeds can often be integrated as additional folders. Each RSS feed that you subscribe to appears as a separate email folder, and each article as a message. You can’t, of course, reply to RSS articles — but you can forward and quote them, or arrange them in subfolders by topic.

If you use Microsoft Outlook or Outlook Express, the very latest versions (Vista’s Windows Mail and Outlook 2007) have built-in feed reading features. (Earlier versions of Outlook can support this through powerful, free add-ons, such as RSS Popper and Attensa.)

Mozilla’s Thunderbird email application and Yahoo! Mail also allow you to subscribe to RSS feeds. Gmail doesn’t, however, as Google assumes that you’ll use the powerful Google Reader application (discussed below) to manage your feeds.

RSS Feed Readers

A major advantage of the full-featured feed readers is that you can tag and archive important information for quick retrieval. The best ones let you easily filter out items you have already read, mark the articles that are important to you so that you can easily return to them later (kind of like TiVo for the Web), and easily change your view between individual feeds and collections of feeds.

In practice, feed readers make it very effective to quickly scan many different sources of information to filter out items that are worth reading. This is a much more efficient way to process new information on the Web than visiting sites individually, or even subscribing to them with a tool that doesn’t support aggregation, like a Web browser or portal.
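Under the hood, all of these readers are doing the same simple thing: fetching each feed’s XML and splitting it into articles. As a rough illustration (not any particular product’s code), here’s a minimal sketch using the third-party Python library feedparser:

# A minimal sketch of what a feed reader does with each subscribed feed,
# using the third-party "feedparser" library (pip install feedparser).
import feedparser

feed = feedparser.parse("http://techcafeteria.com/feed/")  # any RSS/Atom feed URL

print(feed.feed.title)       # the feed's name
for entry in feed.entries:   # one entry per article
    print("-", entry.title, "->", entry.link)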

Feed Readers come in two primary flavors, offline and online. Offline feed readers are Windows, Mac, or Linux applications that collect articles from your feeds when you’re online, store them on your computer, and allow you to read them at any time. Online feed readers are Web sites that store articles on the Internet, along with your history and preferences. The primary difference between an online and an offline reader is the state of synchronization. An online reader will keep track of what you’ve read, no matter what computer or device that you access it from, whereas an offline reader will only update your status on the machine that it’s installed on.

Offline feed readers, such as FeedDemon (for PCs) and Vienna (for Macs), allow you to subscribe to as many feeds as you like and keep them updated, organized and manageable. During installation, they will register as the default application for RSS links in your browser, so that subscribing to new sites is as easy as clicking on an RSS icon on a Web page and confirming that you want to subscribe to it.

Online feed readers, such as Google Reader or NewsGator, offer most of the same benefits as desktop readers. While offline readers collect mail at regular intervals and copy it to your PC, online readers store all of the feeds at their Web site, and you access them with any Web browser. This means that feeds are often updated more frequently, and you can access your account — with all your RSS feeds, markings, and settings intact — from any computer. You could be home, at the office, on a smartphone, or in an Internet cafe. The products mentioned even emulate offline use. NewsGator can be synchronized with its companion offline browser FeedDemon, and Google Reader has an offline mode supported by Google Gears.

Online Readers also provide a social aspect to feed reading. Both Google Reader and NewsGator allow you to mark and republish items that you want to share with others. NewsGator does this by letting you create your own feeds to share, while Google Reader lets you subscribe to other Google Reader users’ shared items. Google Reader also lets you tag Web pages that you find outside of Google Reader and save them to your personal and shared lists. If your team members don’t do RSS, Google has that covered as well — your shared items can also be published to a standalone Web page that others can visit. You can, of course, email articles from an offline reader, but any more sophisticated sharing will require an online reader.

For many of us, mining data on the Web isn’t a personal pursuit — we’re looking to share our research with co-workers and colleagues. This ability to not only do your own research, but share valuable content with others, ultimately results in a more refined RSS experience, as members of a given community stake their own areas of expertise and share highlights with each other.

Online readers are less intuitive than offline ones, however, for subscribing to new feeds. While an offline reader can automatically add a feed when you click on it, online readers will require you to take another step or two (for instance, clicking an “Add” button in your browser’s toolbar). You’re also likely to have a more difficult time connecting to a secure feed, like a list of incoming donations from your donor database, with an online reader than you would with an offline one.

The online feed readers are moving beyond the question of “How do I manage all of my information?” to “How do I share items of value with my network?”, allowing us to not only get a handle on important news, views, and information, but to act as conduits for the valuable stuff. This adds a dimension we could call “information crowd-sourcing,” where discerning what’s important and relevant to us within the daily buffet of online information becomes a community activity.


In Summary

RSS isn’t just another Internet trend — it’s a way to conquer overload without sacrificing the information. It’s an answer to the problem that the Web created: If there’s so much information out there, how do you separate the wheat from the chaff? RSS is a straightforward solution: Pick your format, sit back, and let the information feast come to you.


Thanks to TechSoup for their financial support of this article. Marshall Kirkpatrick of ReadWriteWeb, Laura Quinn of Idealware, Thomas Taylor of the Greater Philadelphia Cultural Alliance, and Marnie Webb of TechSoup Global also contributed to this article.


Peter Campbell is the director of Information Technology at Earthjustice, a nonprofit law firm dedicated to defending the earth, and blogs about NPTech tools and strategies at Techcafeteria.com. Prior to joining Earthjustice, Peter spent seven years serving as IT Director at Goodwill Industries of San Francisco, San Mateo, and Marin Counties, and has been managing technology for non-profits and law firms for over 20 years.

XML, API, CSV, SOAP! Understanding The Alphabet Soup Of Data Exchange

This article was originally published at Idealware in October of 2007.

Let’s say you have two different software packages, and you’d like them to be able to share data. What would be involved? Can you link them so they exchange data automatically? And what do all those acronyms mean? Peter Campbell explains.

There has been a lot of talk lately about data integration, Application Programming Interfaces (APIs), and how important these are to non-profits. Much of this talk has focused on the major non-profit software packages from companies like Blackbaud, Salesforce.com, Convio, and Kintera. But what is it really about, and what does it mean to the typical org that has a donor database, a web site, and standard business applications for Finance, Human Resources and payroll? In this article, we’ll bypass all of the acronyms for a while and then put the most important ones into perspective.

The Situation

Nonprofits have technology systems, and they live and die by their ability to manage the data in those systems to effectively serve their missions. Unfortunately, however, nonprofits have a history of adopting technology without a plan for how different applications will share data. This isn’t unique to the nonprofit sector: throughout the business world, data integration is often underappreciated.
Here’s a simple example: Your mid-sized NPO has five fundraising staff people who together bring in $3,000,000 in donations every year. How much more would you bring in with six fundraising people? How much less with four? If you could tie your staffing cost data to hours worked and donations received, you would have a payroll-to-revenue metric that could inform critical staffing decisions. But if the payroll data is in an entirely different database from the revenue data, they can’t be easily compared.
Similarly, donations are often tracked in both a donor database and a financial system. If you’ve ever had to explain to the board why the two systems show different dollar amounts (perhaps because finance operates on a cash basis while fund development works on accrual), you can see the value in having systems that can reconcile these differences.

How can you solve these data integration challenges? Short of buying a system that tracks every piece of data you may ever need, data exchange is the only option. This process of communicating data from one system to another could be done by a straightforward manual method, like asking a staff member to export data from one system and import it into another. Alternatively, automatic data transfers can save on staff time and prevent trouble down the road – and they don’t have to be as complex as you might think.
What does it take to make a data exchange work? What is possible with your software applications? This article explains what you’ll need to consider.


Components of Data Exchange

Let’s get down to the nitty-gritty. You have two applications, and you’d like to integrate them to share data in some way: to pull data from one into another, or to exchange data in both directions. What has to happen? You’ll need four key components:

  • An Initiating Action. Things don’t happen without a reason, particularly in the world of programming. Some kind of triggering action is needed to start the data interchange process. For an automatic data exchange, this is likely to be either a timed process such as a scheduler kicking off a program at 2AM every night, or a user action – for instance, a visitor clicking the Submit button on your website form.
  • A Data Format. The data to be transferred needs to be stored and transferred in some kind of logical data format – for instance, a comma-delimited text file – that both systems can understand.
  • A Data Transfer Mechanism. If both applications reside on your own network, then a transfer is likely to be straightforward – perhaps you can just write a file to a location where another application can read it. But if one or both applications live offsite, you might need to develop a process that transfers the data over the internet.
  • A Transformation and Validation Process. Data often needs to be checked, cleaned, and reformatted before it can be safely loaded into the destination system.

Let’s look at each of these components in more detail.


Initiating Action

An initiating action is what starts things rolling in the data exchange process. In most cases, it would take one of three forms:

  • Human Kickoff. If you’re manually exporting and importing files, or need to run a process on a schedule that’s hard to determine in advance, regular old human intervention can start the process. An administrator might download a file, run a command line program, or click a button in an admin interface.
  • Scheduler. Many data exchanges rely on a schedule – checking for new information every day, every hour, every five minutes, or some other period. These kinds of exchanges are initiated by a scheduler application. More complex applications might have a scheduling application built in, or might integrate with Windows Scheduler or Unix/Linux cron jobs (see the sketch after this list).
  • End User Action. If you want two applications to be constantly in synch, you’ll need to try to catch updates as they happen. Typically, this is done by initiating a data exchange based on some end user action, such as a visitor clicking the Submit button on an online donation form.
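To make the Scheduler case concrete, here’s a minimal sketch of the kind of script a scheduler might run each night. The database, file paths, and cron entry are invented for illustration:

# nightly_export.py -- a sketch of a scheduler-initiated exchange.
# A cron entry like the following would run it at 2 AM every night:
#   0 2 * * * /usr/bin/python3 /opt/scripts/nightly_export.py
# (On Windows, a Scheduled Task would play the same role.)
import csv
import sqlite3  # stand-in for whatever database your application uses

conn = sqlite3.connect("/data/donations.db")  # hypothetical source database
rows = conn.execute(
    "SELECT donor_name, amount, donated_on FROM donations "
    "WHERE donated_on >= date('now', '-1 day')"
)

# Drop yesterday's donations where the receiving system can pick them up.
with open("/transfer/donations_daily.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["donor_name", "amount", "donated_on"])
    writer.writerows(rows)

conn.close()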


Data Formats

In order to transfer data from one system to another, the systems need to have a common understanding of how the data will be formatted. In the old days, things were pretty simple: you could store data in fixed format text files, or as bits of information with standard delimiting characters, commonly called CSV for “Comma Separated Values”. Today, we have a more dynamic format called XML (eXtensible Markup Language).
An example fixed format file could be made up of three lines, each 24 characters long:

Name (20)           Gender (1)  Age (3)
Susan               f           25
Mark                m           37


A program receiving this data would have to be told the lengths and data types of each field, and programmed to receive data in that exact format.

The same data in CSV format would look like this:

"Susan","f",25
"Mark","m",37

CSV is easier to work with than fixed formats, because the receiving system doesn’t have to be as explicitly informed about the incoming data. CSV is almost universally supported by applications, but it poses challenges as well. What if your data has quotes and commas in it already? And as with fixed formats, the receiving system will still need to be programmed (or “mapped”) to know what type of data it’s receiving.
CSV is the de facto data format standard for one-time exports and data migration projects. However, automating CSV transfers requires additional programming – batch files or scripts that will work with a scheduling function. Newer standards, like XML, are web-based and work in browsers, allowing for a more dynamic relationship with the data sets and less external programming.
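Most programming languages will do the parsing for you. As a quick sketch, here’s how the two-person example above could be read in Python; note that the program still has to supply the field names, which is exactly the mapping burden described above:

# Reading the two-record CSV example with Python's standard library.
import csv
import io

data = '"Susan","f",25\n"Mark","m",37\n'

for name, gender, age in csv.reader(io.StringIO(data)):
    # CSV carries no field names or types; the receiving program
    # has to be told (or "mapped") what each column means.
    print(name, gender, age)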
The XML format is known as a “self-describing” format, which makes it a bit harder to look at but far easier to work with. The information about the data, such as field names and types, is encoded with the data, so a receiving system that “speaks” XML can dynamically receive it. A simple XML file looks like this:

<PEOPLE>
  <PERSON>
    <NAME>Susan</NAME>
    <GENDER>f</GENDER>
    <AGE>25</AGE>
  </PERSON>
  <PERSON>
    <NAME>Mark</NAME>
    <GENDER>m</GENDER>
    <AGE>37</AGE>
  </PERSON>
</PEOPLE>

An XML-friendly system can use the information in the file itself to dynamically map the data to its own database, making the process of getting a data set from one application to another far less laborious than with a CSV or fixed-width file. XML is the de facto standard for transferring data over the internet.
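As a quick sketch of that dynamic quality, here’s how the XML example above could be read in Python. Unlike the CSV version, the field names come out of the file itself:

# Parsing the self-describing XML example with Python's standard library.
import xml.etree.ElementTree as ET

xml_data = """<PEOPLE>
  <PERSON><NAME>Susan</NAME><GENDER>f</GENDER><AGE>25</AGE></PERSON>
  <PERSON><NAME>Mark</NAME><GENDER>m</GENDER><AGE>37</AGE></PERSON>
</PEOPLE>"""

for person in ET.fromstring(xml_data):
    # The tags themselves name the fields -- no external mapping needed.
    fields = {child.tag: child.text for child in person}
    print(fields)  # e.g. {'NAME': 'Susan', 'GENDER': 'f', 'AGE': '25'}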

Data Transfer Mechanisms

As we’ve talked about, an initiating action can spur an application to create a formatted data set. However, getting that data set from one application to another requires some additional work.
If both of your applications are sitting on the same network, then this work is likely pretty minimal. One application’s export file can easily be seen and uploaded by another, or you might even be able to establish a database connection directly from one application to another. However, what if the applications are in different locations? Or if one or both are hosted by outside vendors? This is where things get interesting.
There are multiple ways to exchange data over the web. Many of them are specific to the type of web server (Apache vs. Microsoft’s IIS) or operating system (Unix vs. Linux vs. Microsoft) you’re using. However, two standards – called “web services” – have emerged as by far the most common methods for simple transfers: SOAP (Simple Object Access Protocol) and REST (Representational State Transfer).
Both SOAP and REST transfer data via the standard transfer protocol mechanism of the web: HTTP. To explain the difference between REST and SOAP, we’ll take a brief detour and look at HTTP itself.
HTTP is a very simple-minded thing. It allows you to send data from one place to another and, optionally, receive data back. Most of it is done via the familiar Uniform Resource Identifier (URI) that is typed into the address bar of a web browser, or encoded in a link on a web page, with a format similar to:

http://www.somewhere.com?parameter1=something&parameter2=somethingelse

There are two methods built into HTTP for exchanging data: GET and POST.

  • GET exchanges data strictly through the parameters to the URL, which are always in “this equals that” pairs. It is a one-way communication method – once the information is sent to the receiving page, the web server doesn’t retain the parameter data or do anything else with it.
  • POST stores the transferred information in a packet that is sent along with the URI – you don’t see the information attached to the URI in the address bar. Post values can be altered by the receiving page and returned. In almost any situation where you’re creating an account on a web page or doing a shopping transaction, POST is used.

The advantage to GET is that it’s very simple and easy to share. The advantages to POST are that it is more flexible and more secure. You can put a GET URI in any link, on or offline, while a POST transfer has to be initiated via an HTML Form.
REST, in essence, uses these native HTTP methods directly: a REST request is little more than a GET (or POST) to a URI, with the results typically coming back as XML. SOAP, on the other hand, wraps both the request and the response in a standardized XML envelope, adding structure but also complexity. Add to the mix that Microsoft was one of the principal developers of the SOAP specification, and most Microsoft applications require that you use SOAP to transfer data. REST might be more appealing if you only need to do a simple data exchange, but if you’re working with Microsoft servers or applications, it is likely not an option.
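As an illustration of how thin a REST exchange can be, here’s a minimal sketch in Python using only the standard library; the host and parameters echo the hypothetical URI above:

# A minimal REST-style GET request, standard library only.
# The host and parameters are hypothetical.
from urllib.parse import urlencode
from urllib.request import urlopen

params = urlencode({"parameter1": "something", "parameter2": "somethingelse"})
with urlopen("http://www.somewhere.com/api?" + params) as response:
    data = response.read().decode("utf-8")  # often an XML document

print(data)

A SOAP request, by contrast, would wrap that same query in an XML envelope and POST it, usually with the help of a client library.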

Transformation and Validation Processes

While this article is focused on the mechanics of extracting and moving data, it’s important not to lose sight of the fact that data often needs a lot of work before it should be loaded into another system. Automated data exchange processes need to be designed with extreme care, as it’s quite possible to trash an entire application by corrupting data, introducing errors, or flooding the system with duplicates.
In order to get the data ready for upload, use transformation and validation processes. These processes could be kicked off either before or after the data transfer, or multiple processes could even take place at different points in time. An automated process could be written in almost any programming language, depending on the requirements of your target applications and your technical environment. Typical steps include:


  • Converting file formats. Often, one application will export a data file with a particular layout of columns and field names, while the destination application will demand another.
  • Preventing duplicates. Before loading in a new record, it’s important to ensure that it doesn’t already exist in the destination application.
  • Backup and logging. It’s likely a good idea to kick off a backup of your destination database before importing the data, or at least to log what you’ve changed.
  • User interface. For complex processes, it can be very useful to provide an administrative interface that allows someone to review what data will change and resolve errors prior to the import.
  • Additional Data Mining. If you’re writing a process that analyzes data, adding routines that flag unusual occurrences for review can be very useful. Or if you’re uploading donation data that also has to go to Finance, why not concurrently save that into a CSV file that Finance can import into their system? There are plenty of organizational efficiencies that can be folded into this process.

As described in the API section below, a sophisticated application may provide considerable functionality that will help in these processes.
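To make the duplicate-prevention step concrete, here’s a minimal sketch. Matching on email address alone is an assumption for illustration; real processes usually match more carefully:

# A minimal duplicate check before import: skip any incoming record
# whose email already exists in the destination system.
def filter_new_records(incoming, existing_emails):
    accepted, rejected = [], []
    for record in incoming:
        key = record["email"].strip().lower()
        if key in existing_emails:
            rejected.append(record)    # log these for human review
        else:
            existing_emails.add(key)   # also catches duplicates within the batch
            accepted.append(record)
    return accepted, rejected

incoming = [
    {"name": "Susan", "email": "susan@example.org"},
    {"name": "Susan R.", "email": "SUSAN@example.org"},  # a disguised duplicate
]
accepted, rejected = filter_new_records(incoming, {"mark@example.org"})
print(len(accepted), "records to import;", len(rejected), "flagged as duplicates")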

Application Programming Interfaces (APIs)

What about APIs? How do they fit in? We’re hundreds of words into this article without even a mention of them – how can that be? Well, APIs are a fuzzy concept that might encompass all the aspects of data exchange we just discussed, or some of them, or none of them at all. Clear as mud, right?
An API is exactly what it says – an interface, or set of instructions, for interacting with an application via a programming language.
Originally, APIs were built so that third party developers could create integrating functions more easily. For instance, a phone system vendor might write specific functions into their operating system so that a programmer for a voice mail company could easily import, extract, and otherwise work with the phone system data. This would usually be written in the same programming logic as the operating system, and the assumption was that the third party programmer knew that language. Operating systems like Unix and Windows have long had APIs, allowing third parties to develop hardware drivers and business applications that use OS functions, such as Windows’ file/open dialog boxes.


APIs are written to support one or more programming languages – such as PHP or Java – and require a programmer skilled in one of these languages. An API is also likely to be geared around specific data format and transfer standards – for instance, it may only accept data in a particular XML format, and only via a SOAP interface. In most cases, you’ll be limited to working with the supported standards for that API.


Choose Your Own Data Exchange Adventure

The type of data exchange that makes sense, and how complex it will be, varies widely. A number of factors come into play: the applications you would like to integrate, the available tools, the location of the data, and the platform (i.e., Windows, Linux, or web) you’re using. For instance:

  • Stripped all the way down to the basics, manual data exchange is always an option. In this case, an administrator (a Human Kickoff initiating action) might export a file to CSV, save it to the network, perform some manual transformations to put it into the appropriate file format, and upload it into a different system.
  • For two applications on the same network, the process might not be too much more complex. In this case, a Scheduler initiating action might prompt one application to export a set of data as a CSV file and save it to a network drive. A transformation program might then manipulate the file and tell the destination application to upload the new data.
  • Many web-based tools offer simple ways to extract data. For instance, to get your blog’s statistics from the popular tracking service FeedBurner, you could use a scheduled initiating action to simply request a FeedBurner page via HTTP, which would then provide you the statistics on an XML page. Your program could then parse and transform the data in order to load it into your own reporting application or show it on your own website. Many public applications, such as Google Maps, offer similarly easy functionality to allow you to interact with them, leading to the popularity of Mashups: applications that pull data (generally via APIs) from two or more websites.
  • If you are using a website Content Management System that is separate from your main constituent management system, you may find yourself with two silos containing constituent data – members who enrolled on your web site and donors tracked in a donor database. In this circumstance, you might set up a process that kicks off whenever someone submits the Become a Member form. This process could write the data for the new member into an XML file, transfer that file to your server, and there kick off a new process that imports the new members while checking for duplicates.

Finding Data-Exchange-Friendly Software

As is likely clear by now, the methods you can use to exchange data depend enormously on the software packages that you choose. The average inclination when evaluating software is to look for the features that you require. That’s an important step in the process, but it’s only half of the evaluation. It’s also critical to determine how you can – or if you can – access the data. Buying into systems that overcomplicate or restrict this access will limit your ability to manage your business.
Repeat this mantra: I will not pay a vendor to lock me out of my own data. Sadly, this is what a lot of data management systems do, either by maintaining poor reporting and exporting interfaces, or by including license clauses that void the contract if you try to interact with your data in unapproved ways (including leaving the vendor).
To avoid lock-in and ensure the greatest amount of flexibility when looking to buy any new application – particularly the ones that store your data off-site and give you web-based access to it – ask the following questions:

  • Can I do mass imports and updates on my data? If the vendor doesn’t allow you to add or update the system in bulk with data from other systems, or their warranty prohibits mass updates, then you will have difficulty smoothly integrating data into this system.
  • Can I take a report or export file, make a simple change to it, and save my changes? The majority of customized formats are small variations on the standard formats that come with a system. But it’s shocking how many web-based platforms don’t allow you to save your modifications.
  • Can I create the complex data views that are useful to me? Most modern donor, client/case management and other databases are relational. They store data in separate tables. That’s good – it allows these systems to be powerful and dynamic. But it complicates the process of extracting data and creating customized reports. A donor’s name, address, and amount that they have donated might be stored in three different, but related tables. If that’s the case, and your reporting or export interface doesn’t allow you to report on multiple tables in one report, then you won’t be able to do a report that extracts names and addresses of all donors who contributed a certain amount or more. You don’t want to come up with a need for information and find that, although you’ve input all the data, you can’t get it out of the system in a useful fashion.
  • Does the vendor provide a data dictionary? A data dictionary is a chart identifying exactly how the database is laid out. If you don’t have this, and you don’t have ways of mapping the database, you will again be very limited in reporting on and extracting data from the application.
  • What data formats can I export data to? As discussed, there are a number of formats that data can be stored in. You want a variety of options for industry standard formats.
  • Can I connect to the database itself? Particularly if the application is installed on your own local network, you might be able to access the database directly. The ability to establish an ODBC connection to the data, for instance, can provide a comparatively easy way to extract or update data (see the sketch after this list). Consider, however, what will happen to your interface if the vendor upgrades the database structure.
  • Can I initiate data exports without human intervention? Check to see if there are ways to schedule exports, using built-in scheduling features or by saving queries that can be run by the Windows Scheduler (or something similar). If you want to integrate data in real time, determine what user actions you can use to kick off a process. Don’t allow a vendor to lock you out of the database administrator functions for a system installed on your own network.
  • Is there an API? APIs can save a lot of time if you’re building a complex data exchange. For some systems, it may be the only way to get data in or out without human intervention. Don’t assume any API is a good API, however – make sure it has the functions that will be useful to you.
  • Is there a data exchange ecosystem? Are there consultants who have experience working with the software? Does the software support third party packages that specialize in extracting data from one system, transforming it, and loading it into another? Is there an active community developing add-ons and extensions to the application that might serve some of your needs?
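On the database-connection question above, here’s a minimal sketch of what direct ODBC access often looks like, using the third-party pyodbc library. The DSN, credentials, and table are hypothetical:

# Querying an application's database directly over ODBC, using the
# third-party "pyodbc" library (pip install pyodbc).
import pyodbc

conn = pyodbc.connect("DSN=DonorDB;UID=report_user;PWD=secret")  # hypothetical DSN
cursor = conn.cursor()
cursor.execute(
    "SELECT donor_name, SUM(amount) AS total "
    "FROM donations GROUP BY donor_name"
)
for donor_name, total in cursor.fetchall():
    print(donor_name, total)
conn.close()

As the article notes, a vendor upgrade that restructures the tables can break a query like this, so treat direct connections as a convenience, not a contract.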

Back to Reality

So, again, what does all of this really mean to a nonprofit organization? From a historical perspective, it means that despite the preponderance of acronyms and the lingering frustrations of some companies limiting their options, integration has gotten easier and better. If you picked up this article thinking that integrating and migrating data between applications and web sites is extremely complex, well, it isn’t, necessarily – it’s sometimes as simple as typing a line in your browser’s address bar. But it all depends on the complexity of the data that you’re working with, and the tools that your software application gives you to manage that data.


For More Information

An Introduction to Integrating Constituent Data: Three Basic Approaches
A higher level, less technical look at data integration options

The SOAP/XML-RPC/REST Saga
A blog article articulating the differences – from a more technical perspective – between REST and SOAP.

Mashup Tools for Consumers
New York Times article on the Mashup phenomenon

W3 Hardcore Data Standard Definition
W3, the standards body for the internet. The hardcore references for HTTP, XML, SOAP, REST and other things mentioned here.

Web API List
Techmagazine’s recent article linking to literally hundreds of applications that have popular Web APIs

Peter Campbell is currently the Director of Information Technology at Earthjustice, a non-profit law firm dedicated to defending the earth. Prior to joining Earthjustice, Peter spent seven years serving as IT Director at Goodwill Industries of San Francisco, San Mateo & Marin Counties, Inc. Peter has been managing technology for non-profits and law firms for over 20 years, and has a broad knowledge of systems, email and the web. In 2003, he won a “Top Technology Innovator” award from InfoWorld for developing a retail reporting system for Goodwill thrift. Peter’s focus is on advancing communication, collaboration and efficiency through creative use of the web and other technology platforms. In addition to his work at SF Goodwill, Peter maintains a number of personal and non-profit web sites; blogs on NPTech tools and strategies at http://techcafeteria.com; is active in the non-profit community as a member of NTEN; and spends as much quality time as possible with his wife, Linda, and eight-year-old son, Ethan.

Steve Anderson of ONE/Northwest, Steven Backman of Design Database Associates, Paul Hagen of Hagen20/20, Brett Meyer of NTEN, and Laura Quinn of Idealware also contributed to this article.

Better Organization Through Document Management Systems

This article was originally published at Idealware in January of 2007.

Is your organization drowning in a virtual sea of documents? Document management systems can provide invaluable document searching, versioning, comparison, and collaboration features. Peter Campbell explains.

For many of us, logging on to a network or the Internet can be like charting the ocean with a rowboat. There may be a sea of information at our fingertips, but if we lack the proper vessel to navigate it, finding what we need — even within our own organization’s information system — can be a significant challenge.

Organizations today are floating in a virtual sea of documents. Once upon a time, this ocean was limited to computer files and printed documents, but these days we must also keep track of the information we email, broadcast, publish online, collaborate on, compare, and present — as well as the related content that others send us. Regulatory measures like the Sarbanes-Oxley Act and the Health Insurance Portability and Accountability Act (HIPAA) have created a further burden on organizations to produce more documents and track them more methodically.

Taken as a whole, this flood of created and related content acts as our nonprofit’s knowledge base. Yet when we simply create and collect documents, we miss the opportunity to take advantage of this knowledge. Not only do these documents contain information we can reuse, we can also study them to understand past organizational decisions and parse them to produce metrics on organizational goals and efficiencies.

Just as effective document management has become an increasing priority for large companies, it has also become more important — and viable — at smaller nonprofits. And while free tools like Google Desktop or Windows Desktop Search can help increase your document-management efficiency, more sophisticated and secure document-management tools — called Document Management Systems (DMSs) — are likely within your reach. Document management systems offer integrated features to support Google-esque searching, document versioning, comparison, and collaboration. What’s more, when you save a document to a DMS, you record summary information about your document to a database. That database can then be used to analyze your work in order to improve your organization’s efficiency and effectiveness.

Basic Document Management

One way to increase the overall efficiency of your document management is simply to use your existing file-system tools in an agreed-upon, standardized fashion. For instance, naming a document "Jones Fax 05-13-08.doc" instead of "Jones.doc" is a rudimentary form of document management. By including the document type (or other descriptive data), your document will be easier to locate when you're looking for the fax that you sent to Jones on May 13, as opposed to all of the other "Jones" correspondence. Arranging documents on a computer or file server in standard subfolders, with meaningful names and topics, can also be useful when managing documents.
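
For illustration only, here is a minimal sketch of that convention as a script might enforce it; the folder layout, contact name, and document type are assumptions, not features of any particular product.

    # A sketch of the naming convention described above, assuming a
    # "Correspondence/<Contact>" folder layout invented for illustration.
    from datetime import date
    from pathlib import Path

    def conventional_name(contact, doc_type, sent):
        """Build a name like 'Jones Fax 05-13-08.doc' from its parts."""
        return "{} {} {}.doc".format(contact, doc_type, sent.strftime("%m-%d-%y"))

    folder = Path("Correspondence") / "Jones"
    print(folder / conventional_name("Jones", "Fax", date(2008, 5, 13)))
    # -> Correspondence/Jones/Jones Fax 05-13-08.doc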

For small organizations with a manageable level of document output, these basic document-storing techniques may suffice, especially if all document editors understand the conventions and stick by them. But this kind of process can be difficult to impose and enforce effectively, especially if your organization juggles thousands of documents. If you find that conventions alone aren’t working, you may wish to turn to a Document Management System.

One huge advantage of a DMS is that it names and stores your documents using a standardized, organization-wide convention, something that can be difficult to maintain otherwise, especially given a typical nonprofit's turnover rate and dependence on volunteers. What's more, a DMS will track not just the date the file was last modified (as Windows does), but also the date the document was originally created — which is often more useful in finding a particular document.

In fact, a DMS's "File > Open" dialog box can locate files based on any of the information saved about a document. A DMS can narrow a search by date range, locate documents created by particular authors, or browse through recently modified documents, sparing you the necessity of clicking through multiple folders to find what you're looking for. It will also allow you to search the content of documents using a variety of methods, including Boolean logic (e.g., "includes Word A OR Word B but NOT Word C") and proximity criteria (e.g., "Word A and Word B within n words of each other"). Just as Google has become the quickest way to pull Web-page "needles" out of a gigantic Internet haystack, a solid DMS allows you to quickly find what you're looking for on your own network.
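
To make those two search styles concrete, here is a rough sketch of Boolean and proximity matching in Python. The search words and the distance threshold are invented for illustration, and a real DMS would query a prebuilt index rather than scan each document's text.

    # Sketches of the two query styles; the search terms are examples only.
    def boolean_match(text):
        """Includes 'grant' OR 'proposal' but NOT 'draft'."""
        words = text.lower().split()
        return ("grant" in words or "proposal" in words) and "draft" not in words

    def proximity_match(text, a, b, n):
        """True if words a and b appear within n words of each other."""
        words = text.lower().split()
        spots_a = [i for i, w in enumerate(words) if w == a]
        spots_b = [i for i, w in enumerate(words) if w == b]
        return any(abs(i - j) <= n for i in spots_a for j in spots_b)

    print(boolean_match("final grant report"))                                        # True
    print(proximity_match("the final grant proposal for 2007", "grant", "2007", 3))   # True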

A good DMS also allows the document creator to define which co-workers can read, edit, or delete his or her work via the document profile. On most networks, this type of document protection is handled by network access rules, and making exceptions to them requires a call to the help desk for assistance. Beyond searching and security, a DMS typically offers several other features:

  • Document check-in and check-out.

    If you try to open a file that someone else is already editing, a network operating system, like Windows Server 2003, will alert you that the file is in use and offer you the option to make a copy. A DMS will tell you more: who is editing the document, what time she checked it out, and the information she provided about the purpose of her revision and when she plans to be done with the document.

  • Document comparison.

    A DMS not only supports Word’s track-changes and document-merging features, but allows you to compare your edited document to an unedited version, highlighting the differences between the two within the DMS. This is a great feature when your collaborator has neglected to track his or her changes, particularly because it allows you to view the updates without actually adding the revision data to your original files, making them less susceptible to document corruption.

  • Web publishing.

    Most DMSs provide content-management features for intranets and even public Web sites. Often, you can define that specific types of documents should be automatically published to your intranet as soon as they’re saved to the DMS. (Note, however, that if your core need is to publish documents on a Web site, rather than track versions or support check-ins and check-outs, a dedicated Content Management System [CMS] will likely be a better fit than a DMS.)

  • Workflow automation.

    A DMS can incorporate approvals and routing rules to define who should see the document and in what order. This allows the system to support not only the creation and retrieval of documents, but also the editing and handoff process. For example, when multiple authors need to work on a single document, the DMS can route the file from one to the next in a pre-defined order; a minimal sketch of such a routing rule appears after this list.

  • Email Integration.

    Most DMSs integrate with Microsoft Outlook, Lotus Notes, and other email platforms, allowing you not only to view your document folders from within your email client, but also to save emails to your DMS. If, for example, you send out a document for review, you can associate any feedback and comments you receive via email with that document, which you can retrieve whenever you search for your original file.

  • Document Recovery.

    DMSs typically provide strong support for document backup, archiving, and disaster recovery, working in conjunction with your other backup systems to safeguard your work.
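
As promised above, here is a minimal sketch of what a pre-defined routing rule might look like; the document type and reviewer roles are invented for illustration, and each DMS expresses these rules in its own way.

    # A hypothetical routing rule: who sees a document, and in what order.
    routing_rule = {
        "document_type": "grant_proposal",
        "route": ["program_director", "development_director", "executive_director"],
    }

    def next_reviewer(rule, completed):
        """Return whoever should see the document next, or None when done."""
        for step in rule["route"]:
            if step not in completed:
                return step
        return None

    print(next_reviewer(routing_rule, ["program_director"]))
    # -> development_director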

Three Types of Document Management Systems

If you decide that your organization would benefit from a DMS, there are a variety of choices and prices available. In general, we can break up DMSs into three types:

  • Photocopier- and Scanner-Bundled Systems

    Affordable DMS systems are often resold along with photocopiers and scanners. While primarily intended as image- and PDF-management systems, these DMSs integrate with the hardware but can also manage files created on the network. Bundled systems may not include the very high-end features offered by enterprise-level DMSs, but they cover the basics and usually come with very competitive, tiered pricing. A popular software package is offered by Laserfiche.

  • Enterprise-Level Systems

    These robust, sophisticated systems usually require a strong database back end, such as Microsoft SQL Server or Oracle, and tend to be expensive. Enterprise-level systems include the advanced features listed above, and some are even tailored to particular industries, such as legal or accounting firms. Examples of powerful enterprise systems include Open Text eDocs, Interwoven WorkSite, and EMC's Documentum.

  • Microsoft Office SharePoint (MOSS 2007)

    Microsoft SharePoint is an interesting and fairly unique offering in the DMS arena. While it's best known as a corporate intranet platform, the 2007 version of the package provides building blocks for content, document, and knowledge management, with tight integration with Microsoft Office documents, sophisticated workflow and routing features, and extensive document- and people-searching capabilities. It is a powerful tool and, typically, an expensive one; but because it is available to qualifying nonprofits for a low administrative fee through TechSoup (which offers both SharePoint Standard Edition and Enterprise Edition), it is also a far more affordable option for nonprofits than similar DMS products on the market. One caveat: SharePoint, unlike the other systems mentioned above, stores documents in a database rather than in your file system, which can make the documents more susceptible to corruption. (Note: SharePoint Server is a discrete product that should not be confused with Windows SharePoint Services, which comes bundled with Windows Server 2003.)

The Future of Document Management

The most significant changes in document management over the last decade have been the migration of most major DMS systems from desktop to browser-based applications, as well as their ever-increasing efficiency and search functionality. The growing popularity of Software as a Service (SaaS), tagging, and RSS tools are likely to impact the DMS space as well.

Software as a Service

SaaS platforms like Google Apps and Salesforce.com store documents online, on hosted servers, as opposed to on traditional internal file servers. Google Apps doesn't currently offer the detailed document profile options that standard DMSs do, but it will be interesting to see how that platform evolves.

Another SaaS product, Salesforce, has been active in the document management space. Salesforce’s constituent relationship management (CRM) platform currently allows organizations to simply upload documents for a constituent. Salesforce has recently purchased a commercial DMS called Koral, however, and is in the process of incorporating it into its platform, an enhancement that will help tie documents to the other aspects of constituent relationships.

Tagging

A startup called Wonderfile has introduced an online DMS that makes heavy use of tagging to identify and describe documents. Using this software, you would move your documents to the Wonderfile servers and manage them online with Del.icio.us-style tagging and browsing. The drawback is that, creative as Wonderfile's solution is, storing and sharing your documents online is far more valuable when you can also edit and collaborate on them. As full-fledged, Web-based document creation and editing platforms, Google Apps and its peers are a better alternative, despite their lack of tagging functionality.

Microsoft has also been quietly adding tagging capability to its file-browsing utility, Windows Explorer, allowing you to add keywords to your documents that show up as columns you can sort and filter by. This works in both Windows XP and Vista.

RSS

While none of the existing DMSs are currently doing much with RSS — an online syndication technique that could allow users to “subscribe” to changes to documents or new content via a Web browser — Salesforce plans to integrate RSS functionality with its new Koral system. This type of syndication could be a useful feature, allowing groups of people to track document revisions, communicate about modifications, or monitor additions to folders.

Finding What You’re Looking For

Is it time for your organization to trade in that rowboat for a battle cruiser? With an ever-expanding pool of documents and resources, nonprofits need richer, more sophisticated ways to find the information they need than standard filenames and folders can provide. If your organization struggles to keep track of important documents and information, a DMS can help you move beyond the traditional "file-and-save" method to an organizational system that lets you sort by topics and projects using a variety of techniques and criteria.

But we should all hope that even better navigational systems are coming down the road. Having seen the creative advances in information management provided by Web 2.0 features like tagging and syndication, it’s easy to envision how these features, which work well with photos, bookmarks, and blog entries, could be extended to documents as well.

Peter Campbell is the director of Information Technology at Earthjustice, a nonprofit law firm dedicated to defending the earth, and blogs about NPTech tools and strategies at Techcafeteria.com. Prior to joining Earthjustice, Peter spent seven years serving as IT Director at Goodwill Industries of San Francisco, San Mateo, and Marin Counties, and has been managing technology for non-profits and law firms for over 20 years.

Thanks to TechSoup for its financial support of this article. Tim Johnson, Laura Quinn of Idealware, and Peter Crosby of alltogethernow also contributed to this article.