Tag Archives: collaboration

Swept Up in a Google Wave

This article was originally published on the Idealware Blog in September of 2009.

Photo by Mrjoro.

Last week, I shared my impressions of Google Wave, which takes current web 2.0/Internet staple technologies like email, messaging, document collaboration, widgets/gadgets and extranets and mashes them up into an open communications standard that, if it lives up to Google’s aspirations, will supersede email.  There is little doubt in my mind that this is how the web will evolve.  We’ve gone from:

  • The Yahoo! Directory model – a bunch of static web sites that can be cataloged and explored like chapters in a book, to
  • The Google needle/haystack approach – the web as a repository of data that can be mined with a proper query, to
  • Web 2.0, a referral-based model that mixes human opinion and interaction into the navigation system.

Many of us no longer browse, and we search less than we used to, because the data we're looking for is either coming to us through readers and portals where we subscribe to it, or being referred to us by our friends and co-workers on social networks. Much of what we refer to each other is content that we have created. The web is as much an application as it is a library now.

Google Wave might well be "Web 3.0", the step that breaks down the location-based structure of web data and replaces it completely with a social structure. Data isn't stored as much as it is shared. You don't browse to sites; you share, enhance, append, create and communicate about web content in individual waves. Servers are sources, not destinations in the new paradigm.

Looking at Wave in light of Google's mission and strategy supports this idea. Google wants to catalog, and make accessible, all of the world's information. Wave has a data mining and reporting feature called "robots". Robots are database agents that lurk in a wave, monitoring all activity, and then pop in as warranted when certain terms or actions trigger their response. The example I saw was of a nurse reporting in the wave that they're going to give patient "John Doe" a peanut butter sandwich. The robot has access to Doe's medical record, is aware of a peanut allergy, and pops in with a warning. Powerful stuff! But the underlying data source for Doe's medical record was Google Health. For many, health information is too valuable and easily abused to be trusted to Google, Yahoo!, or any online provider. The Wave security module that I saw hid some data from Wave participants, but was based upon the time that the person joined the Wave, not ongoing record-level permissions.
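To make the robot idea a little more concrete, here is a toy sketch of the watch-and-interject pattern described above. To be clear, this is not Google's actual Wave Robots API; the class names, the event hook, and the allergy lookup are all hypothetical stand-ins.

```python
# Hypothetical sketch of a Wave-style "robot": an agent that watches every
# new message in a wave and interjects when a trigger fires. This is
# illustrative only -- it is not Google's actual Wave Robots API, and the
# allergy "database" stands in for a record source like Google Health.

ALLERGY_DB = {
    "John Doe": ["peanuts"],
}

class AllergyRobot:
    def on_blip_submitted(self, author, text, wave):
        """Called whenever a participant adds text to the wave."""
        lowered = text.lower()
        for patient, allergens in ALLERGY_DB.items():
            if patient.lower() not in lowered:
                continue
            for allergen in allergens:
                if allergen.rstrip("s") in lowered:
                    wave.reply(f"WARNING: {patient} has a recorded allergy "
                               f"to {allergen}. Please confirm before proceeding.")

class Wave:
    """Toy wave: a shared message list plus a set of watching robots."""
    def __init__(self, robots):
        self.messages, self.robots = [], robots

    def post(self, author, text):
        self.messages.append((author, text))
        for robot in self.robots:
            robot.on_blip_submitted(author, text, self)

    def reply(self, text):
        self.messages.append(("robot", text))

wave = Wave(robots=[AllergyRobot()])
wave.post("nurse", "Giving John Doe a peanut butter sandwich for lunch.")
print(wave.messages[-1])   # the robot's warning
```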

This doesn’t invalidate the use of Wave, by any means — a wave that is housed on the Doctor’s office server, and restricted to Doctor, Nurse and patient could enable those benefits securely. But as the easily recognizable lines between cloud computing and private applications; email and online community; shared documents and public records continue to blur, we need to be careful, and make sure that the learning curve that accompanies these web evolutions is tended to. After all, the worst public/private mistakes on the internet have generally involved someone “replying to all” when they didn’t mean to. If it’s that easy to forget who you’re talking to in an email, how are we going to consciously track what we’re revealing to whom in a wave, particularly when that wave has automatons popping data into the conversation as well?

The idea of Wave as internet evolution supports a favored notion: data wants to be free. Open data advocates (like myself) are looking for interfaces that enable that access, and Wave's combination of creation and communication, facilitated by simple but powerful data mining agents, is a powerful front end. If it truly winds up as easy as email, which is, after all, the application that enticed our grandparents to use the net, then it has culture-changing potential. It will need to bring the users along for that ride, though, and it will be interesting to see how that goes.

——–

A few more interesting Google Wave stories popped up while I was drafting this one. Mashable’s Google Wave: 5 Ways It Could Change the Web gives some concrete examples to some of the ideas I floated last week; and, for those of you lucky enough to have access to Wave, here’s a tutorial on how to build a robot.

Beta Google Wave accounts can be requested at the Wave website.  They will be handing out a lot more of them at the end of September, and they are taking requests to add them to any Google Domains (although the timeframe for granting the requests is still a long one).

Is Google Wave a Tidal Wave?

This post originally appeared on the Idealware Blog in September of 2009.

“The Great Wave off Kanagawa” by Katsushika Hokusai (1760-1849).

Google is on a fishing expedition to see if we’re willing to take web-surfing to a whole new level.  My colleague Steve Backman introduced us to Google Wave a few months ago. I attended a developer’s preview at Techsoup Headquarters last week, and I have some additional thoughts to share.

Google’s introduction of Wave is nothing if not ambitious.  As opposed to saying “We have a new web mashup tool” or “We’ve taken multimedia email to a new level”, they’re pitching Wave as nothing less than the successor to email.  My question, after seeing the demo, is “Is that an outrageous claim, or a way too modest one?”.

The early version of Google Wave I saw looked a lot like Gmail, with a folder list on the left and a “wave” list next to it. Unlike Gmail, a third pane to the right included an area where you can compose waves, so Wave uses three columns to Gmail’s two.

A wave is a collaborative document that can be updated by numerous people in real-time.  This means that, if we’re both working in the same wave, you can see what I’m typing, letter by letter, as I can see what you add. This makes Twitter seem like the new snail mail. It’s a pretty powerful step for collaborative technology. But it’s also quite a cultural change for those of us who appreciate computer-based communications for the incorporated spell-check and the ability to edit and finalize drafted messages before we send them.
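For a rough sense of what "letter by letter" means mechanically, here is a toy sketch: every keystroke is pushed to every participant immediately, instead of waiting for a send button. It glosses over the conflict-resolution machinery a real system like Wave needs (operational transformation) and only shows the push idea.

```python
# Toy illustration of "everyone sees each keystroke as it happens".
# Real collaborative editors, Wave included, also need conflict-resolution
# machinery (operational transformation) to merge concurrent edits;
# this sketch deliberately ignores that and only shows the push model.

class SharedDocument:
    def __init__(self):
        self.text = ""
        self.watchers = []                 # callbacks for connected participants

    def join(self, on_change):
        self.watchers.append(on_change)

    def type_char(self, author, char):
        self.text += char                  # append-only, for simplicity
        for notify in self.watchers:
            notify(author, self.text)      # push the update immediately

doc = SharedDocument()
doc.join(lambda author, text: print(f"[viewer sees, from {author}]: {text!r}"))

for ch in "Hi!":
    doc.type_char("alice", ch)             # each keystroke is visible right away
```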

Waves can include text, photos, film clips, forms, and any active content that could go into a Google Gadget. If you check out iGoogle, Google’s personal portal page, you can see the wide assortment of gadgets that are available and imagine how you would use them — or things like them — in a collaborative document. News feeds, polls, games, utilities, and the list goes on.

You share waves with any other wave users that you choose to share with.  User-level security is being written into the platform, so that you can share waves as read-only or only share certain content in waves with particular people.

Given these two tidbits, it occurred to me that each wave was far more like a little Extranet than an email message. This is why I think Google’s being kind of coy when they call it an email killer – it’s a Sharepoint killer.  It’s possibly a Drupal (or fill in your favorite CMS here) killer.  It’s certainly an evolution of Google Apps, with pretty much all of that functionality rolled into a model that, instead of saying “I have a document, spreadsheet or website to share” says “I want to share, and, once we’re sharing, we can share websites, spreadsheets, documents and whatever”.  Put another way, Google Apps is an information management tool with some collaborative and communication features.  Google Wave is a communications platform with a rich set of information management tools. It’s Google Docs inverted.

So, Google Wave has the potential to be very disruptive technology, as long as people:

  • Adopt it;
  • Feel comfortable with it; and
  • Trust Google.

Next week, I’ll spend a little time on the gotcha’s – please add your thoughts and concerns in the comments.

Evaluating Wikis

This post originally appeared on the Idealware Blog in August of 2009.

I’m following up on my post suggesting that Wikis should be grabbing a portion of the market from word processors. Wikis are convenient collaborative editing platforms that remove a lot of the legacy awkwardness that traditional editing software brings to writing for the web.  Gone are useless print formatting functions like pagination and margins; huge file sizes; and the need to email around multiple versions of the same document.

There are a lot of use cases for Wikis:

  • We can all thank Wikipedia for bringing the excellent crowd-sourced knowledgebase functionality to broad attention.  Closer to home we can see great use of this at the We Are Media Wiki, where NTEN and friends share best practices around social media and nonprofits.
  • Collaborative authoring is another natural use, illustrated beautifully by the Floss Manuals project.
  • Project Management and Development are regularly handled by Wikis, such as the Fedora Project
  • Wikis make great directories for other media, such as Project Gutenberg’s catalogue of free e-books.
  • A growing trend is use of a Wiki as a company Intranet.

Almost any popular Wiki software will support the basic functionality of providing user-editable web pages with some formatting capability and a method (such as “CamelCase”, sketched in code after the lists below) to signify text that should be a link. But Wikis have been exploding with additional functionality that ramps up their suitability for all sorts of tasks:

  • The Floss Manuals team wrote extensions for the Open Source TWiki platform that track who is working on which section of a book and send out updates.
  • TWiki, along with Confluence, SocialText and other platforms, include (either natively or via an optional plugin) tabular data — spreadsheet like pages for tracking lists and numeric information. This can really beef up the value of a Wiki as an Intranet or Project Management application.
  • TWiki and others include built-in form generators, allowing you to better track information and interact with Wiki users.
  • And, of course, the more advanced Wikis are building in social networking features.  Most Wikis support RSS, allowing you to subscribe to page revisions. But newer platforms are adding status updates and Twitter-like functionality.

Before choosing a Wiki platform, ask yourself some key questions:

  • Do you need granular security? Advanced Wikis have full-blown user and group-based security and authentication features, much like a standard CMS.
  • Should the data be stored in a database? It might be useful or even critical for integration with other systems.
  • Does it belong on a local server, or in the cloud? There are plenty of great hosted Wikis, like PBWorks (formerly PBWiki) and WikiSpaces, in addition to all of the Wikis that you can download and install on your own server. There are even personal Wikis like TiddlyWiki and ZuluPad. I use a Wiki on my Android phone called WikiNotes for my note-keeping.
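As promised above, here is a tiny sketch of the CamelCase convention in action: the engine scans the text for words WrittenLikeThis and turns each into a link to a page of the same name. The regex and the /wiki/ URL scheme are illustrative; every wiki engine has its own variations.

```python
import re

# Minimal sketch of the classic wiki convention: any CamelCase word becomes
# a link to a page of that name. Real engines (TWiki, MediaWiki, etc.) each
# have their own rules; the regex and the /wiki/ URL scheme here are
# illustrative only.

CAMELCASE = re.compile(r"\b([A-Z][a-z]+(?:[A-Z][a-z]+)+)\b")

def render(wiki_text: str) -> str:
    """Replace CamelCase words with HTML links to same-named pages."""
    return CAMELCASE.sub(r'<a href="/wiki/\1">\1</a>', wiki_text)

print(render("See the ProjectPlan and MeetingNotes pages for details."))
# -> See the <a href="/wiki/ProjectPlan">ProjectPlan</a> and ...
```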

Are you already using a Wiki? You might be. Google Docs, with its revision history feature, may look more like a Word processor, but it’s a Wiki at heart.

Word or Wiki?

This post was originally published on the Idealware Blog in August of 2009.

An award-winning friend of mine at NTEN referred me to this article, by Jeremy Reimer, suggesting that Word, the ubiquitous Microsoft text manipulation application, has gone the way of the dinosaur.  The “boil it down” quote:

“Word was designed in a different era, for a very specific purpose. We don’t work that way anymore.”

Reimer’s primary reasoning is that Word was originally developed as a tool that prepares text for printing. Since we now do far more sharing online than by paper, formatting is less important. He also points out that Word files are unwieldy in size, due to the need to support so many advanced but not widely used features. He correctly points out that wikis save every edit, allowing for easy recovery and collaboration. Word’s difficult to read and use Track Changes feature is the closest equivalent

Now, I might have a reputation here as a Microsoft basher, but, the truth is, Word holds a treasured spot on my Mac’s Dock. Attempts to unseat it by Apple’s Pages, Google Docs and Open Office have been short-lived and fruitless. But Reimer’s absolutely right — I use Word far more for compatibility’s sake than the feature set.  There are times – particularly when I’m working on an article with an editor – that the granular Track Changes readout fits the bill better than a wiki’s revision history, because I’m interested in seeing every small grammatical correction.  And there are other times when the templates and automation bring specific convenience to a task, such as when I’m doing a formal memo or printing letterhead at work.  But, for the bulk of writing that I do now, which is intended for sharing on the web, Wikis put Word to shame.

The biggest problem with Word (and its ilk) is that documents can only be jointly edited when that’s facilitated by desktop sharing tools, such as GoToMeeting or ReadyTalk, and now Skype. In most cases, collaboration with Word docs involves multiple copies of the same document being edited concurrently by different people on different computers.  This creates logistical problems when it comes time to merge edits.  It also results in multiple copies of the revised documents on multiple computers and in assorted email inboxes. And, don’t forget that Track Changes use results in larger documents that are more easily corrupted.

A wiki document is just a web page on a server that anyone who is authorized to do so can modify.  Multiple people can edit a wiki concurrently, or they can edit on their own schedules.  The better wiki platforms handle editing conflicts gracefully. Every revision is saved, allowing for an easy review of all changes.  Earlier versions are simple to revert back to.  This doesn’t have to be cloud computing — the wiki can live on a network server, just as most Word documents do.
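Under the hood, "every revision is saved" usually just means the page is an append-only list of versions rather than a single overwritten file. Here is a minimal, purely illustrative sketch of that idea, not modeled on any particular wiki engine.

```python
from datetime import datetime

# Minimal sketch of a wiki page's revision history: every save is kept, so
# reviewing changes or reverting is just list indexing. Illustrative only --
# not modeled on any particular wiki engine.

class WikiPage:
    def __init__(self, title):
        self.title = title
        self.history = []                    # list of (author, timestamp, text)

    def save(self, author, text):
        self.history.append((author, datetime.now(), text))

    @property
    def current(self):
        return self.history[-1][2] if self.history else ""

    def revert_to(self, revision_index):
        author, _, text = self.history[revision_index]
        self.save(f"revert to r{revision_index} (by {author})", text)

page = WikiPage("MeetingNotes")
page.save("alice", "Agenda: budget")
page.save("bob", "Agenda: budget, hiring")
page.revert_to(0)                            # roll back to alice's version
print(page.current)                          # -> "Agenda: budget"
```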

But it’s more than just the collaborative edge.  Wikis are casual and easy.  Find the page, click “edit”, go to work.  Pagination isn’t an issue. Everything that you can do is usually in a toolbar above the text, and that’s everything that you’d want to do as well.

So when the goal is meeting notes, agendas, documentation, project planning or brainstorming, a wiki might be a far simpler way to meet the need than emailing a Word document around. Word can be dusted off for the printed reports and serious writing projects. In the information age, it appears that the wiki is mightier than the Word.

Next week I’ll follow up with more talk about wikis and how they can meet organizational needs.

Google Reader Reaches Out

This article first appeared on the Idealware Blog in July of 2009.

As the internet has progressed from a shared source of information to a primary communications tool, a natural offshoot of that migration has been the place where the two meet: people referring internet information to each other. If you’re active at all on Facebook, Twitter, MySpace, FriendFeed, or any of the numerous online communities, big or small, then you regularly see links to useful articles and blog posts, cute YouTube videos, and entertaining photos. Much of this information is passed along from online friend to online friend, but where does the first referral originate? Usually, it’s somebody’s RSS reader.

The main reason that I’m such an RSS advocate is that I believe that it’s the tool that lets me find the strategic and useful needles lost in the haystack of celebrity gossip, prurient content, and corporate promotional materials that they’re buried under. But it isn’t “RSS”, per se, that does the filtering — it’s other people, whom I call “information agents”, who do the sifting.  If I want to keep up with fundraising trends, a topic that interests me, but, as an IT Director, isn’t my primary area of expertise, I’m not going to spend thirty minutes a day doing research.  I subscribe to some very pertinent blogs, and I follow a few people on Twitter and in Reader who find the important and insightful articles and share them with me.

Now it appears that Google wants to cut out the social media middle people. As I alluded to in my article on RSS, and fleshed out in this post about sharing with Reader, the ability to refer information that you find in Reader is one of the things that makes it so powerful. Last week, Google seriously upped the ante by adding Twitter/Facebook/Delicious-like following, “liking”, and sharing to the mix.

Here’s what the new features do:

Sharing now lets you share with the world, or just those members of the world that you want to share with. Google has always allowed you to share items, but connecting to other people was a bit arcane and limited, as, by default, Google only allowed you to connect to those that you chat with in Gmail. If you read up on it, you learned that you could change that to any defined group of associates in your Google Contacts (all of this assuming that you use Google Contacts – many Google Reader users don’t). As someone who does use all of the Google stuff, I still found that opening this up to 80 or so people in my contacts didn’t make it clear to many of them how they could connect with me.

The new Following feature lets you follow anyone who is willing to share, not just people that you personally communicate with. Now my shared items are marked as public, so anyone can follow my shared items feed by clicking on “Sharing Settings” (in the “People You Follow” section) and searching for me by name or email address. Once you locate me (or someone else), you can (and should) browse through their items to make sure that they share things that you’ll find useful. For example, I share a lot of things that are on the topics that I blog about here. But I also share items related to civil rights issues and the occasional link that I find funny. Since humor and politics are very subjective topics, you might want to be sure that you’re not going to be annoyed or offended before you subscribe to a feed.
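For the technically curious, a shared-items feed is just another Atom/RSS feed, so anything that reads feeds can follow it. The sketch below uses the Python feedparser library with a made-up placeholder URL; the real address for a given person's shared items is the one Reader hands you when you follow them.

```python
import feedparser  # third-party: pip install feedparser

# A shared-items list is exposed as an ordinary Atom feed, so "following"
# someone is just subscribing to one more feed URL. The URL below is a
# made-up placeholder; the real address comes from Reader itself.
FEED_URL = "https://example.com/reader/shared-items.atom"

feed = feedparser.parse(FEED_URL)
for entry in feed.entries[:10]:
    print(entry.title)
    print("  " + entry.link)
```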

But the internet is not just about who you know. The Like feature allows you to find new people to follow based on common interests.  You’ll note that certain articles have a new note at the top saying “XX people liked this”, where “XX” is the number of people who have indicated that they like the article by checking the option at the bottom of the post.  This message is a link, and clicking it expands it into links to each of the people who “liked” it, allowing you to browse their shared items and optionally follow them.  This, to me, enables the real power of the social web — finding people who share your interests, but have better sources.  It’s what initially was so exciting about social bookmarking service Delicious, and it’s about time that Google Reader enabled it.

I’m hoping the Google’s next round of Reader updates will improve our ability to not just tag and classify the information that we find, but also share based on those classifications.  That will enable me to selectively publish items that I think are of interest to others, perhaps sending nptech links to Friendfeed and the humorous stuff to Facebook.  But I welcome these improvements, and I appreciate the way that reader becomes more and more of a single stop for information discovery and distribution. The Internet would be a messier place without it.

Why SharePoint Scares Me

This post was originally published on the Idealware Blog in July of 2009.

For the past four years or so, at two different organizations, I’ve been evaluating Microsoft’s Sharepoint 2007 as a Portal/Intranet/Business Process Management solution.  It’s a hard thing to ignore, for numerous reasons:

  • It’s an instant, interactive content, document and data management interface out of the box, with strong interactive capabilities and hooks to integrate other databases. If you get the way it uses lists and views to organize and display data, it can be a very powerful tool for managing and collaborating on all sorts of content.  As I said a year or two ago in an article on document management systems, it has virtually all of the functionality that the expensive, commercial products do, and they aren’t full-fledged portals and Intranet sites as well.
  • Sharepoint 2007 (aka MOSS) is not free, but I can pick it up via Techsoup for a song.
  • It integrates with Microsoft Exchange and Office, to some extent, as well as my Windows Directory, so, as I oversee a Windows network, it fits into it without having to fuss with tricky LDAP and SMTP integrations.

All pretty compelling, and I’m not alone: from the nonprofit CIO and IT Director lists I’m on, I see that lots of other mid- to large-sized organizations are either considering Sharepoint or are already well-ensconced.

So, why does Sharepoint scare me?

  • What it does out of the box, it does reasonably well. The UI isn’t great or intuitive, but it’s pretty powerful. However, advanced programming and integration with legacy systems can get really complicated very fast. It is not a well-designed database, and integration is based on SOAP, not the far less complicated REST standard, meaning that having someone with a strong Microsoft and XML programming skill set on board is a prerequisite for doing anything serious with it (a rough sketch of the contrast appears after this list).
  • MOSS is actually two major, separately developed applications (Windows Sharepoint Services and Content Management Server) that were hastily merged into one app. As with a lot of immature Microsoft products, the motivation seems to have been more about marketing a powerful app than about making it actually functional. Sharepoint 2013 or 2016 will likely be a good product, kind of like Exchange 2007 or SQL Server 2005, but Sharepoint 2007 makes a lot of promises that it doesn’t really keep.
  • Sharepoint’s primary structure is a collection of “sites”, each with it’s own URL, home page, and extensions. Without careful planning, Sharepoint can easily become a junkyard, with function-specific sites littered all over the map.  A number of bloggers are pushing a “Sharepoint invites Silos“ meme these days.  I stop short of blaming Sharepoint – it does what you plan for.  But if you don’t plan, or you don’t have the buy-in, attention and time commitment of key staff both in and out of IT, then silos are the easiest things for Sharepoint to do.
  • The database stores documents as database blobs, as opposed to linking to files on disk, threatening the performance of the database and putting the documents at risk of corruption. I don’t want to take my org’s critical work product and put it in a box that could easily break.
  • Licensing for use outside of my organization is complicated and expensive. MOSS access requires two or three separate licenses for each user: a Windows Server license; a Sharepoint license; and, if you’re using the advanced Sharepoint features, an additional license for that functionality. So, if I want to set up a site for our Board, or extend access to key partners or clients, it’s going to cost for each one. There’s an option to buy an unlimited access license, but, the last time I looked, this was prohibitively expensive even at charity pricing.
  • Compared to most Open Source portals, Sharepoint’s hardware and bandwidth requirements are significantly higher. Standard advice is that you will need additional, expensive bandwidth-optimizing software in order to make it bearable on a WAN. For good performance on a modest installation, you’ll need at least two powerful servers, one for SQL Server and one for Sharepoint; for larger installations, a server farm.
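Here is the SOAP-versus-REST contrast mentioned in the first bullet above, sketched in Python. The endpoint paths, XML namespace, and field names are illustrative stand-ins rather than Sharepoint's real schema, but the difference in ceremony between the two styles is the point.

```python
import requests  # third-party: pip install requests

# Hypothetical contrast between a SOAP call (roughly the style Sharepoint
# 2007's web services require) and a plain REST GET. The endpoint paths,
# XML namespace, and field names are illustrative stand-ins, not
# Sharepoint's real schema.

SOAP_BODY = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetListItems xmlns="http://example.com/intranet/soap/">
      <listName>Projects</listName>
      <rowLimit>25</rowLimit>
    </GetListItems>
  </soap:Body>
</soap:Envelope>"""

soap_response = requests.post(
    "https://intranet.example.org/_vti_bin/Lists.asmx",
    data=SOAP_BODY,
    headers={
        "Content-Type": "text/xml; charset=utf-8",
        "SOAPAction": "http://example.com/intranet/soap/GetListItems",
    },
)
# ...and then you still have to parse the XML envelope that comes back.

# The REST-style equivalent the post wishes for: one URL, one verb, plain data.
rest_response = requests.get(
    "https://intranet.example.org/api/lists/Projects/items?limit=25"
)
items = rest_response.json()
```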

I can’t help but contrast this with the far more manageable and affordable alternatives, even if those alternatives aren’t the kitchen sink that Sharepoint is.  Going with a non-Microsoft portal, I might lose all of that out of the box integration with my MS network, but I would jettison the complexity, demanding resources, and potential for confusion and site sprawl.  I’m not saying that any portal/intranet/knowledge management system can succeed without cross-departmental planning, but I am saying that the risk of a project being ignored — particularly if the financial investment was modest, and Sharepoint’s not cheap, even if the software can be — is easier to deal with than a project being fractured but critical.

If my goal is to promote collaboration and integrated work in my organization, using technology that transcends and discourages silos, I’m much better off with apps like Drupal, KnowledgeTree, Plone, or Salesforce, all of which do big pieces of what Sharepoint does and, while they need supplemental applications to match Sharepoint’s smorgasbord of functionality, are much less complicated and expensive to deploy.

After four years of agonizing on this, here’s my conclusion: When the product matures, if I have organizational buy-in and interest; a large hardware budget; a high-performance Wide Area Network, and a budget for consulting, Sharepoint will be a great way to go. Under the conditions that I have today — some organizational buy-in; modest budget for servers and no budget for consulting; a decent network, but other priorities for the bandwidth, such as VOIP and video — I’d be much better served with the alternatives.

Paving the Road – a Shared Outcomes Success Story

This post was originally published on the Idealware blog in July of 2009.

I recently wrote about the potential for shared outcome reporting among nonprofits and the formidable challenges to getting there. This topic hits a chord for those of us who believe strongly that proper collection, sharing and analysis of the data that represents our work can significantly improve our performance and impact.

Shared outcome reporting allows organizations to benchmark their effectiveness against that of their peers, and to learn from each other’s successful and failed strategies. If your most effective method of analyzing effectiveness is year-to-year comparisons, you’re only measuring a portion of the elephant. You don’t practice your work in a vacuum; why analyze it in one?

But, as I wrote, for many, the investment in sharing outcomes is a hard sell. Getting there requires committing scarce time, labor and resources to the development of the metrics, collection of data, and input; trust and competence in the technology; and partnering with our peers, who, in many cases, are also our competitors. And, in conditions where just keeping up with the established outcome reporting required for grant compliance is one of our greater challenges, envisioning diving into broader data collection, management and integration projects looks very hard to justify.

So let’s take a broader look this time at the justifications, rather than the challenges.

Success Measures is a social enterprise in DC that provides tools and consulting to organizations that want to evaluate their programs and services and use the resulting data. From their website:

Success Measures®, a social enterprise at NeighborWorks® America is an innovative participatory outcome evaluation approach that engages community stakeholders in the evaluation process and equips them with the tools they need to document outcomes, measure impact and inform change.

To accomplish this, in 2000, they set up an online repository of surveying and evaluation tools that can be customized by the participant to meet their needs. After determining what it is that they want to measure, participants work with their constituencies to gather baseline data. Acting on that data, they can refine their programs and address needs, then, a year or two later, use the same set of tools to re-survey and learn from the comparative data. Success Measures supplements the tools collection with training, coaching, and consulting to ensure that their participants are fully capable of benefiting from their services. And, with permission, they provide cross-client metrics: the shared outcomes reporting that we’re talking about.

The tools work on sets of indicators, and they provide pre-defined sets of indicators as well as allowing for custom items. The existing sets cover common areas: Affordable housing; community building; economic development; race, class and community. Sets currently under development include green building/sustainable communities; community stabilization; measuring outcomes of asset programs; and measuring value of intermediary services.

Note that this supports nonprofits on both sides of the equation — they not only provide the shared metrics and accompanying insight into effective strategies for organizations that do what you do; they also provide the tools. This addresses one of the primary challenges, which is that most nonprofits don’t have the skills and staff required simply to create the surveying tools.

Once I understood what Success Measures was offering, my big question was, “how did you get any clients?” They had good answers. They actually engage more with the funders than the nonprofits, selling the foundations on the value of the data, and then sending them to their grantees with the recommendation. This does two important things:

  • First, it provides a clear incentive to the nonprofits. The funders aren’t just saying “prove that you’re effective”; they’re saying “here’s a way that you can quantify your success; the funding will follow.”
  • Second, it provides a standardized reporting structure — with pre-developed tools and support — to the nonprofits. In my experience, having worked for an organization with multiple city, state and federal grants and funded programs, keeping up with the diverse requirements of each funding agency was an administrative nightmare.

So, if the value of comparative, cross-sector metrics isn’t reason enough to justify it, maybe the value of pre-built data collection tools is. Or, maybe the value of standardized reporting for multiple funding sources has a clear cost benefit attached. Or, maybe you’d appreciate a relationship with your funders that truly rewards you with grants based on your effectiveness. Success Measures has a model for all of the above.

The Road to Shared Outcomes

This post originally appeared on the Idealware Blog in May of 2009.

At the recent Nonprofit Technology Conference, I attended a somewhat misleadingly titled session called “Cloud Computing: More than just IT plumbing in the sky”. The cloud computing issues discussed were nothing like the things we blog about here (see Michelle’s and my recent “SaaS Smackdown” posts). Instead, this session was really a dive into the challenges and benefits of publishing aggregated nonprofit metrics. Steve Wright of the Salesforce Foundation led the panel, along with Lucy Bernholz and Lalitha Vaidyanathan. The session was video-recorded; you can watch it here.

Steve, Lucy and Lalitha painted a pretty visionary picture of what it would be like if all nonprofits standardized and aggregated their outcome reporting on the web. Lalitha had a case study that hit on the key levels of engagement: shared measurement systems; comparative performance measurement; and a baked-in learning process. Steve made it clear that this is an iterative process that changes as it goes — we learn from each iteration and measure more effectively, or more appropriately for the climate, each time.

I’m blogging about this because I’m with them — this is an important topic, and one that gets lost amidst all of the social media and web site metrics focus in our nptech community. We’re big on measuring donations, engagement, and the effectiveness of our outreach channels, and I think that’s largely because there are ample tools and extra-community engagement with these metrics — every retailer wants to measure the effectiveness of their advertising and their product campaigns as well. Google has a whole suite of analytics available, as do other manufacturers. But outcomes measurement is more particular to our sector, and the tools live primarily in the reporting functionality of our case and client management systems. They aren’t nearly as ubiquitous as the web/marketing analysis tools, and they aren’t, for the most part, very flexible or sophisticated.

Now, I wholly subscribe to the notion that you will never get anywhere if you can’t see where you’re going, so I appreciate how Steve and crew articulated that this vision of shared outcomes is more than just a way to report to our funders; it’s also a tool that will help us learn and improve our strategies. Instead of seeing how your organization has done, and striving to improve upon your prior year’s performance, shared metrics will offer a window into others’ tactics, allowing us all to learn from each other’s successes and mistakes.

But I have to admit to being a bit overwhelmed by the obstacles standing between us and these goals. They were touched upon in the talk, but not heavily addressed.

  • Outcome management is a nightmare for many nonprofits, particularly those who rely heavily on government and foundation funding. My brief forays into shared outcome reporting were always welcomed at first, then shot down completely, the minute it became clear that joint reporting would require standardization of systems and compromise on the definitions. Our case management software was robust enough to output whatever we needed, but many of our partners were in Excel or worse. Even if they’d had good systems, they didn’t have in-house staff that knew how to program them.
  • Outcomes are seen by many nonprofit executives as competitive data. If we place ours in direct comparison with the similar NPO down the street, mightn’t we just be telling our funders that they’re backing the wrong horse?
  • The technical challenges are huge — of the NPOs that actually have systems that tally this stuff, the data standards are all over the map, and the in-house skill, as well as time and availability to produce them, is generally thin. You can’t share metrics if you don’t have the means to produce them.

A particular concern is that metrics can be fairly subjective, especially when the metrics produced are determined more by funding requirements than by the NPO’s own standards. When I was at SF Goodwill, our funders were primarily concerned with job placements and wages as proof of our effectiveness. But our mission wasn’t one of getting people jobs; it was one of changing lives, so the metrics that we spent the most work on gathering were only partially reflective of our success – more outputs than outcomes. Putting those up against the metrics of an org with different funding, different objectives and different reporting tools and resources isn’t exactly apples to apples.

The vision of shared metrics that Steve and crew held up is a worthwhile dream, but, to get there, we’re going to have to do more than hold up a beacon saying “This is the way”. We’re going to have to build and pave the road, working through all of the territorial disputes and diverse data standards in our path. Funders and CEOs are going to have to get together and agree that, in order to benefit from shared reporting, we’ll have to overcome the fact that these metrics are used as fodder in the battles for limited funding. Nonprofits and the ecosystem around them are going to have to build tools and support the art of data management required. These aren’t trivial challenges.

I walked into the session thinking that we’d be talking about cloud computing; the migration of our internal servers to the internet. Instead, I enjoyed an inspiring conversation that took place, as far as I’m concerned, in the clouds. We have a lot of work to do on the ground before we can get there.

Flying in Place: Videoconferencing

This was originally posted on the Earthjustice Blog in May of 2009.

As an information technology director whose livelihood depends pretty heavily on the use of electricity, I’m constantly looking for meaningful ways that the technology I’m immersed in can contribute to the reduction of greenhouse gases. The saying “If you aren’t part of the solution you’re part of the problem” doesn’t even suffice — technology is part of the problem, period, and it behooves people like me, who trade in it, to use it in ways that offset its debilitating effects on our environment.

This is why I’m very excited about an initiative that we have taken on to deploy videoconferencing systems in each of our nine locations.

Per a May 2008 report by the Stockholm Environment Institute, aviation activities account for somewhere between 2% and 5% of total anthropogenic greenhouse gas emissions. Our organization, with offices stretching from Honolulu to Anchorage to NYC and down to Tallahassee, has a great opportunity to eliminate much of our substantial air travel. If you’re in a similar circumstance, I thought it might be helpful to offer a rundown of the options, ranging from free and easy to expensive but fantastic.

Cheap and easy means desktop video, which is far more suited for person-to-person chats at one’s desk than large meetings. While it’s certainly possible to hook up a PC to a projector and include someone in a conference room meeting this way, it’s a far cry from the experience you would have with actual videoconferencing equipment.

In general, the return on the investment will be in how successfully you can mimic being in the same room with your video attendees.

While only the richest of us can afford the systems that are installed as an actual wall in the conference room (commonly called “Telepresence”), connecting offices as if they were in the same place, a mid-range system with a large TV screen will, at least, make clear important things like body language and facial expressions, and be of a quality that syncs the voices to the images correctly. This makes a big difference in terms of the usefulness of the experience, and should be what justifies the expense over that of a simple conference phone.

Leader of the cheap and easy options is Skype. Once known as a way to do free phone calls over the Internet, Skype now does video as well. Of course, the quality of the call will vary greatly with the robustness of your internet connection, meaning it’s abysmal if a party is on dial-up and it’s great if all callers have very fast DSL/Cable connections or better.

Other free options might already be installed on your computer. Instant messaging applications like Windows Messenger, Yahoo! Messenger and iChat are starting to incorporate video as well.

There are two ways to do Conference Room Video, one of which requires some investment, at least in a large TV display. One option is to do the conference in someone else’s room. FedEx Kinko’s is one of many businesses that rent space with video equipment and support (note: it’s not available at all locations). If your needs are occasional, this might prove more affordable than flying.

For a more permanent arrangement in your own digs, you’ll want to look at purchasing your own video equipment. This is the route that Earthjustice is taking. Vendors in this space include (and aren’t limited to) Polycom, Cisco, Tandberg, and LifeSize. Options range from a simple setup, with a basic system in each office, to a more dynamic one using a multi-point bridge (definition below!). The key questions you need to ask before deciding what to buy are:

  • How many locations do I want to have video in?
  • What is the maximum number of locations (“points”) that I want to connect in one call?
  • Do I want to regularly include parties from outside of my organization?
  • Do I have sufficient bandwidth to support this?
  • Do I want to incorporate presentations and computer access with the face to face meetings?
  • Do I want to support desktop computer connections to my system?
  • Do I want to have the ability to record conferences and optionally broadcast them over the web?

Standard videoconferencing equipment includes:

  • A Codec, which, much like a computer’s Central Processing Unit (CPU), is the brains of the equipment
  • One or two Displays (generally a standard TV set; for HD video an HDTV)
  • A Conference Phone
  • One or more Microphones
  • A Remote Control to control the camera and inputs
  • Cables to connect the network and optional input devices, such as a laptop computer

The Codec might be single-point or multi-point, multi-point meaning that it is capable of connecting multiple parties into the conference. You might want an additional display if you regularly do computer presentations at your meetings, so you can dedicate one screen to the presentation and the other to the remote participants. Most modern systems have a remote control that can not only control your camera, but also the camera in the remote location(s), assuming all systems are made by the same vendor.

Another option is to purchase a Conference Bridge (aka MCU). A bridge is a piece of equipment that provides additional functionality to the Codecs on your network, such as multi-point conferencing, session recording, and, possibly, desktop video.

Key questions that we had when we evaluated systems were: “How many points do your codecs connect to before we need to add a bridge?” and, “If numerous parties are connected, how does your system handle the video quality?” Some systems brought all connections down to the poorest quality connected; others were able to maintain different quality connections in different windows.

We also looked hard at the ease of use, but determined that all of these systems were about as complex as, say, adding a VCR or DVR to a cable TV setup. Some staff training is required.

On the real geeky side, we required that the systems support these protocols: Session Initiation Protocol (SIP) and H.323. These are the most common ways that one video system will connect with another over the Internet. By complying with these standards, we’ve had great success interoperating with other manufacturers’ systems.

Finally, we were able to go with a High Definition system, with great quality. This was largely enabled by the robust network we have here, as no system will work very well for you if you don’t have sufficient internet bandwidth to support this demanding application.
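If it helps, here is the kind of back-of-the-envelope arithmetic behind the bandwidth question; the per-call rates are rough assumptions for illustration, not vendor figures.

```python
# Back-of-the-envelope bandwidth check before committing to conference-room
# video. The per-call rates below are rough assumptions for illustration,
# not vendor specifications -- check your equipment's actual requirements.

RATES_MBPS = {
    "standard definition": 0.5,       # assumed
    "high definition (720p)": 1.2,    # assumed
}

def bandwidth_needed(quality, simultaneous_calls, overhead=1.2):
    """Total Mbps for a site, padded 20% for network overhead (assumption)."""
    return RATES_MBPS[quality] * simultaneous_calls * overhead

for quality in RATES_MBPS:
    needed = bandwidth_needed(quality, simultaneous_calls=3)
    print(f"{quality}: about {needed:.1f} Mbps for 3 concurrent calls")
```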

Conclusion: This is a somewhat simple distillation of a fairly complex topic, and the proper solution and impact of using video will vary from organization to organization. In our case, this will pay for itself quickly, and be scored as an easy win in our goal to reduce our carbon footprint. Compelling technology that supports our planet. Who can’t appreciate that?

Oldstyle Community Management

This article was originally published on the Idealware Blog in May of 2009.

Photo by ferricide.
It’s been a big month for Online Community Management in my circles. I attended a session at the Nonprofit Technology Conference on the subject; then, a few weeks later, ReadWriteWeb released a detailed report on the topic. I haven’t read the report, but people I respect who have are speaking highly of it.Do you run an online community? The definition is pretty sketchy, ranging from a blog with active commenters to, say, America Online. If we define an online community as a place where people share knowledge, support, and/or friendship via communication forums on web sites or via email, there are plenty of web sites, NING groups, mailing lists and AOL chat rooms that meet that criteria.

The current interest is spurred by the notion that this is the required web 2.0/3.0 direction for our organizational web sites. We’ve made the move to social media (as this recent report suggests); now we need to be the destination for this online interaction. I don’t think that’s really a given, any more than it’s clear that diving into Facebook and Twitter is a good use of every nonprofit’s resources. It all depends on who your constituents are and how they prefer to interact with you. But, certainly, engagement of all types (charitable, political, commercial) is expanding on the web, and most of us have an audience of supporters that we can communicate with here.

Buried deep in my techie past is a three year gig as an online community manager. It was a volunteer thing. More honestly, a hobby. In 1988, I set up a Fidonet Bulletin Board System (BBS); linked it to a number of international discussion groups (forums); and built up a healthy base of active participants.

This was before the world wide web was a household term. I ran specific software that allowed people to dial in, via modem, to my computer, and either read and type messages online, or download them into something called a “QWK reader”, read and reply offline, and then synchronize with my system later. There were about 1000 bulletin board systems within local calling distance in San Francisco at the time. Many of them had specific topics, such as genealogy or cooking; mine was a bit more generally focused, but I appealed to birdwatchers, because I published rare bird alerts, and to people who liked to talk politics. This was during the first Gulf War, and many of my friends’ systems were sporting American flags (in ASCII art), while my much more liberal board was the place to be if you were more critical of the war effort.

At the peak of activity, I averaged 200 messages a day in our main forum, and I’m pretty sure that the things that made this work apply just as much to the more sophisticated communities in play today. Those were:

    • Meeting a Need: There were plenty of people who desired a place to talk politics and share with a community, and there wasn’t a lot of competition. The bulk of my success was offering the right thing at the right time. It’s much tougher now to hang a shingle and convince people that your community will meet their needs when they have millions to choose from. How successful — and how useful — your community might be depends on how much of a unique need it serves.
    • Maintaining Focus: Many of the popular bulletin boards had forums, online gaming, and downloads. My board had forums. The handful of downloads were the QWK readers and supporting software that helped people use the forums. The first time you logged on, you were subjected to a rambling bit of required reading that said, basically, “if birdwatching and chatting about the issues of the day interests you, keep on reading”, and I saw numerous people hang up before getting through that, which I considered a very good thing. The ones that made it through tended to be civil and engaged by what they signed on for. By focusing more on what made for a quality discussion, as opposed to trying to attract a large, diverse crowd, my base grew much bigger than I ever imagined it would.
    • Tolerance and Civility: We had a few conservatives among our active callers, and that kept the conversation lively. But we had excellent manners, never resorting to personal attacks, and we sent lots of private messages to the contrarians to support their involvement. We really appreciated them, and they appreciated their semi-celebrity status. It was all about the arguments, not about the attitude. Mind you, this was 1989/90 — I’m not sure if it’s possible to have civil public political debates today…
    • Active moderation: My hobby was a full time job that I did on top of my full time job. I engaged with my callers as if they were sitting in my living room, being gracious and helpful while I participated fully in the main events. There was a little moderation required to keep the tone civil, and making the board safe for all — particularly the ones with the minority opinions — required having their trust that I wouldn’t let any attacks get through without my response.

I think that the biggest question today is whether you should be building a community on your own, or engaging your community in the ample public places (Facebook, Twitter, etc.) that they might already hang out in. In fact, I think that where you engage is a fairly moot point; what’s important is that you do engage and provide a forum that helps people cope with and learn about the issues that your organization is addressing. Pretty much all of the bulleted advice above will apply, whether you’re hosting your own community or engaging out in the public ones.