Tag Archives: collaboration

Year-end Reflections

This post was originally published on the NTEN Blog on December 24th, 2015.

As years go, 2015 was a significant one in my career. The work of a CIO, or IT Director, or whatever title you give the person primarily responsible for IT strategy and implementation, is (ideally) two parts planning and one part doing. So in 2015—my third year at Legal Services Corporation—we did a couple of the big things that we’d been planning in 2013 and 2014.

First and foremost, we (and I do mean we—I play my part, but I get things done with an awesome staff and coworkers) rolled out the first iteration of our “Data Portal.” The vision for the Data Portal is that, as a funder that works primarily with 134 civil legal aid firms across the U.S. and territories, we should be able to access the relevant information about any grantee quickly and easily without worrying about whether we have the latest version of a document or report. To reach this vision, we implemented a custom, merged Salesforce/Box system. This entailed about a year of co-development with our partner, Exponent Partners, and a move from in-house servers to the Cloud. We’ll complete our Cloud “trifecta” in early 2016, when we go to Microsoft’s Office 365.

This was particularly exciting for me, because I have been envisioning and waiting for technology to reach a level of maturity and… collegiality that makes the vision of one place where documents and databases can co-exist a reality. Integration, and one-stop access to information, have always been the holy grails that I’ve sought for the companies that I’ve worked for; but the quests have been Monty Python-esque through the days when even Microsoft products weren’t compatible with each other, much less compatible with anything else. What we’ve rolled out is more of a stump than a tree; but in the next year we’ll grow a custom grants management system on top of that; and then we’ll incorporate everything pertinent to our grantees that currently hides in Access, Excel, and other places.

I’m working on a much more detailed case study of this project for NTEN to publish next year.

Secondly, we revamped our website, doing a massive upgrade from Drupal 7 to… Drupal 7! The website in place when I came to LSC was content-rich, navigation-challenged, and not too good at telling people what it is that we actually do. The four separate websites that made up our entire site weren’t even cross-searchable until we addressed that problem in early 2014. Internal terminology and acronyms existed on the front page and in the menus, making some things incomprehensible to the public, and others misleading. For example, we often refer to the law firms that we fund as “programs.” But, in the funding world, a “program” is a funding category, such as “arts” or “environment.” Using that terminology, along with too buried an explanation that what we actually do is allocate funding, not practice law ourselves, led many people to assume that we were the parent office of a nationwide legal aid firm, which we aren’t.

The new site, designed by some incredibly talented people at Beaconfire-RedEngine (with a particular call out to Eve Simon, who COMPLETELY got the aesthetic that we were going for and pretty much designed the site in about six hours), tells you up front who we are, what we do, and why civil legal aid is so important in a country where the right to an attorney is only assured in criminal cases, while civil cases include home foreclosures, domestic violence, child custody, and all sorts of things that can devastate the lives of people who can’t afford an attorney to defend them. This new site looks just as good on a phone as on a computer, a requirement for the Twenty-Teens.

My happiness in life directly correlates to my ability to improve the effectiveness of the organizations that I work for, with meaningful missions like equal justice for all, defense against those who pollute the planet, and the opportunity to work, regardless of your situation in life. At my current job, we’re killing it.

The Future Of Technology

…is the name of the track that I am co-facilitating at NTEN’s Leading Change Summit. I’m a late addition, there to support Tracy Kronzak and Tanya Tarr. Unlike the popular Nonprofit Technology Conference, LCS (not to be confused with LSC, as the company I work for is commonly called, or LSC, my wife’s initials) is a smaller, more focused affair with three tracks: Impact Leadership, Digital Strategy, and The Future of Technology. The expectation is that attendees will pick a track and stick with it.  Nine hours of interactive sessions on each topic will be followed by a day spent at the Idea Accelerator, a workshop designed to jump-start each attendee’s work in their areas. I’m flattered that they asked me to help out, and excited about what we can do to help resource and energize emerging nptech leaders at this event.

The future of technology is also something that I think about often (hey, I’m paid to!), both in terms of what’s coming and how we (LSC and the nonprofit sector) are going to adapt to it. Here are some of the ideas that I’m bringing to LCS this fall:

  • At a tactical level, no surprise, the future is in the cloud; it’s mobile; it’s software as a service and apps, not server rooms and applications.
  • The current gap between enterprise and personal software is going to go away, and “bring your own app” is going to be the computing norm.
  • Software evaluation will look more at interoperability, mobile, and user interface than advanced functionality.  In a world where staff are more independent in their software use, with less standardization, usability will trump sophistication.  We’ll expect less of our software, but we’ll expect to use it without any training.
  • We’ll expect the same access to information and ability to work with it from every location and every device. There will still be desktop computers, and they’ll have more sophisticated software, but there will be fewer people using them.
  • A big step will be coming within a year or two, when mobile manufacturers solve the input problem. Today, it’s difficult to do serious content creation on mobile devices, due primarily to the clumsiness of the keyboards and, also, the small screens. They will come up with something creative to address this.
  • IT staffing requirements will change.  And they’ll change dramatically.  But here’s what won’t happen: the percentage of technology labor won’t be reduced.  The type of work will change, and the distribution of tech responsibility will be spread out, but there will still be a high demand for technology expertise.
  • The lines between individual networks will fade. We’ll do business on shared platforms like Salesforce, Box, and {insert your favorite social media platform here}.  Sharing content with external partners and constituents will be far simpler. One network, pervasive computing, no more firewalls (well, not literally — security is still a huge thing that needs to be managed).

This all sounds good! Less IT controlling what you can and can’t do. Consumerization demystifying technology and making it more usable.  No more need to toss around acronyms like “VPN.”

Of course, long after this future arrives, many nonprofits will still be doing things the old-fashioned ways.  Adapting to and adopting these new technologies will require some changes in our organizational cultures.  If technology is going to become less of a specialty and more of a commodity, then technical competency and comfort using new tools need to be common attributes of every employee. Here are the stereotypes that must go away today:

  1. The technophobic executive. It is no longer allowable to say you are qualified to lead an organization or a department if you aren’t comfortable thinking about how technology supports your work.  It disqualifies you.
  2. The control freak techie.  They will fight the adoption of consumer technology with tooth and claw, and use the potential security risks to justify their approach. Well, yes, security is a real concern.  But the risk of data breaches has to be balanced against the lost business opportunities we face when we restrict all technology innovation. I blogged about that here.
  3. The paper-pushing staffer. All staff should have basic data management skills: enough to use a spreadsheet to analyze information, and to understand when the spreadsheet won’t work as well as a database would.
  4. Silos, big and small. The key benefit of our tech future is the ability to collaborate, both inside our company walls and out. So data needs to be public by default, secured only when necessary.  Policy and planning have to cross department lines.
  5. The “technology as savior” trope. Technology can’t solve your problems.  You can solve your problems, and technology can facilitate your solution. It needs to be understood that big technology implementations have to be preceded by business process analysis.  Otherwise, you’re simply automating bad or outdated processes.

I’m looking forward to the future, and I can’t wait to dive into these ideas and more about how we use tech to enhance our operations, collaborate with our community and constituents, and change the world for the better.   Does this all sound right to you? What have I got wrong, and what have I missed?

Telecommuting Is About More Than Just The Technology

We’ve hit the golden age of telework, with myriad options to work remotely from a broadband-connected home, a hotel, or a cafe on a mobile device. The explosion of cloud and mobile technologies makes our actual location the least important aspect of connecting with our applications and data. And there are more and more reasons to support working remotely. Per Reuters, the state of commuting is a “virtual horror show”, with the average commute costing the working poor six percent of their income, and wealthier Americans three percent. And long commutes have negative impacts on health and stress levels. Add to this the potential cost savings if your headquarters doesn’t require an office or cubicle for every employee. For small NPOs, do you even really need an office? Plus, we can now hire people based on their absolute suitability to the job without requiring them to relocate. It’s all good, right?

Well, yes, if it’s done correctly.  And a good remote work culture requires more than seamless technology. Supervisors need to know how to engage with remote employees, management needs to know how to be inclusive, and the workers themselves need to know how to maintain relationships without the day to day exposure to their colleagues.  Moving to a telework culture requires planning and insight.  Here are a few things to consider.

Remote Workers Need To Be Engaged

I do my best to follow the rule of communicating with people in the medium that they prefer. I trade a lot of email with the people who, like me, are always on it; I pick up the phone for the people who aren’t; I text message with the staff that live on their smartphones. But, with a remote employee, I break that rule and communicate, primarily, by voice and video.  Emoticons don’t do much to actually communicate how you feel about what you’re discussing.  Your voice and mannerisms are much better suited for it.  And having an employee, or teammate, that you don’t see on a regular basis proves the old adage of “out of sight, out of mind”.

In-Person Appearances Are Required

For the remote worker to truly be a part of the organization, they have to have relationships with their co-workers.  Accordingly, just hiring someone who lives far away and getting them started as a remote worker might be the worst thing that you can do for them.  At a minimum, requiring that they work for two to four weeks at the main office as part of their orientation is quite justified.  For staff who have highly interactive roles, you might require a year at the office before the telework can commence.

Once the position is remote, in-person attendance at company events (such as all staff meetings and retreats) should be required. When on-site isn’t possible, include them via video or phone (preferably video). On-site staff need to remember to include remote workers on invites, and remote staff should make a point of attending virtually when the event occurs.

Technical Literacy Requirements Must Be High

It’s great that the remote access tech is now so prevalent, but the remote worker still needs to be comfortable and adept with technology.  If they need a lot of hand-holding, virtual hands won’t be sufficient.  Alternatively, the company might require (and/or assist with) obtaining local tech support.  But, with nonprofit IT staffing a tight resource, remote technophobes can make for very time-consuming customers. Establishing a computer-literacy test and making it a requirement for remote work is well-advised; it will ease a lot of headaches down the road.

Get The Policies In Place First

Here’s what you don’t want: numerous teleworkers with different arrangements.  Some have a company-supplied computer, some don’t.  The company pays for one person’s broadband account, but not another’s. One person has a company-supplied VOIP phone, the other uses their personal lines. I’ve worked at companies where this was all subject to hiring negotiations, and IT wasn’t consulted. What a nightmare! As with the office technology, IT will be much more productive if the remote setups are consistent, and the remote staff will be happier if they don’t feel like others get special treatment.

Go Forth And Telecommute

Don’t let any of this stop you — the workforce of the future is not nearly as geography-bound as we’ve been in the past, and the benefits are compelling.  But understand that company culture is a thing that needs to be managed, and managed all the more actively when the company is more virtual.

How I Learned To Stop Worrying and Love The RFP

This article was originally posted on the NTEN Blog in January of 2014.

Requests for Proposals (RFPs) seem like they belong in the world of bureaucratic paperwork rather than at a lean, tech-savvy nonprofit. But there’s a lot to be said for an RFP when both sides understand how useful a tool it can be – even to tech-savvy nonprofits.

Here’s a safe bet: preparing and/or receiving Requests for Proposals (RFPs) is not exactly your favorite thing. Too many RFPs seem like the type of anachronistic, bureaucratic paperwork more worthy of the company in Office Space than a lean, tech-savvy nonprofit. So you may wonder why I would pitch a 90-minute session on the topic for this year’s Nonprofit Technology Conference. I’d like to make the case for you to attend my session: Requests for Proposals: Making RFPs Work for Nonprofits and Vendors.

The problems with RFPs are numerous, and many of you have tales from the trenches that could fill a few horror anthologies. I’ll be the first to agree that they often end up doing more harm than good for a project.  But I believe that this is due to a poor understanding of the purpose of the RFP, and a lack of expertise and creativity in designing them. What a successful RFP does is help a client assess the suitability of a product or service to their needs long before they invest more serious resources into the project. That’s very useful.

The mission of the RFP is two-fold: a well-written RFP will clearly describe the goals and needs of the organization/client and, at the same time, ask the proper questions that will allow the organization to vet the product or consultant’s ability to address those needs. Too often, we think that means that the RFP has to ask every question that will need to be asked and result in a detailed proposal with a project timeline and fixed price. But the situations where we know exactly, at the outset, what the new website, donor database, phone system or technology assessment will and should look like before the project has begun are pretty rare.

For a consultant, receiving an RFP for a web site project that specifies the number of pages, color scheme, section headings and font choices is a sign of serious trouble, because they know, from experience, that those choices will change. Pitching a fixed price for such a project can be dangerous, because as the web site is built, the client might find that they missed key components, or that the choices they made were wrong. It does neither party any good to agree to terms that are based on unrealistic projections, and project priorities often change, particularly with tech projects that include a significant amount of customization.

So you might be nodding your head right now and saying, “Yeah, Campbell, that’s why we all hate those RFPs. Why use ’em?” To which I say, “Why write them in such a way that they’re bound to fail?”

The secret to successful RFP development is in knowing which questions you can ask that will help you identify the proper vendor or product. You don’t ask how often you’ll be seeing each other next spring on the first date. Why ask a vendor how many hours they project it will take them to design each custom object in your as yet un-designed Salesforce installation? Some information will be more relevant — and easier to quantify — as the relationship progresses.

At the RFP session, we’ll dive into the types of questions that can make your RFP a useful tool for establishing a healthy relationship with a vendor. We’ll learn about the RFPs that consultants and software vendors love to respond to.  We’ll make the case for building a critical relationship in a proactive and organized fashion.  And maybe, just maybe, we’ll all leave the session with a newfound appreciation for the much-maligned Request for Proposal.

Don’t miss Peter’s session at the 14NTC on Friday, March 14, 3:30pm – 5:00pm.

Peter Campbell is a nonprofit technology professional, currently serving as Chief Information Officer at Legal Services Corporation, an independent nonprofit that promotes equal access to justice and provides grants to legal aid programs throughout the United States. Peter blogs and speaks regularly about technology tools and strategies that support the nonprofit community.

The Five Best Tools For Quick And Effective Project Management

This article was first published on the NTEN Blog in March of 2011.

The keys to managing a successful project are buy-in and communication. Projects fail when all participants are on different pages. You want to use tools that your project participants can access easily, preferably ones they’re already using.

As an IT Director, co-workers, peers, and consultants frequently ask me, “Do you use Microsoft Project?” The answer to that question is a resounding no.

Then I elaborate with my true opinion of Project: it’s a great tool if you’re building a bridge or a luxury hotel. But my Project rule of thumb is, if the budget doesn’t justify a full-time employee to manage the Project plan (i.e., keep the plan updated, not necessarily manage the project), then MS Project is overkill. Real-world projects require far more agile and accessible tools.

The keys to managing a successful project are buy-in and communication. The people who run the organization need to support it and the people the project is being planned for need to be expecting and anticipating the end result. Projects fail when all participants are on different pages: vague or different ideas of what the goals are; different levels of commitment; poor understanding of the deadlines; and poorly set expectations. Gantt charts are great marketing tools — senior executives never fail to be impressed by them — but they don’t tell the Facilities Coordinator in clear language that you need the facility booked by March 10th, or the designer that the web page has to be up by April 2nd.

You want to use tools that your project participants can access easily, preferably ones they’re already using. Here are five tools that are either free or you’ve already obtained, which, used together, will be far more effective than MS Project for the typical project at a small to mid-sized organization:

  • GanttProject. GanttProject is an open source, cross-platform project management tool. Think of it as MS Project lite. While the feature set includes identifying project resources, allocating time, tracking completion, etc., it excels at creating Gantt charts, which can then be used to promote and communicate about the project. People appreciate visual aids, and Gantt charts visually identify the key tasks, milestones and timeframes. I don’t recommend diving into the resource allocations and the like, as I think that’s the point where managing the project plan starts becoming more work than managing the project.
  • Your email app. It’s all about communication: setting expectations, managing expectations, reminding and checking on key contributors so that deadlines are met. Everyone already lives in their email, so you want to visit them where they live. Related tool: the telephone.
  • MeetingWizard, Doodle, etc. We might gripe about meetings, but email alone does not cut it. If you want people to understand what you’re trying to accomplish — and care — they need to see your face and hear the inflections in your voice when you tell them about it. By the same token, status updates and working out schedules where one person’s work depends on others completing theirs benefit greatly from face-to-face planning.
  • Excel (or any spreadsheet). Budgets, checklists, inventory — a spreadsheet is a great tool for storing the project data. Worthy alternatives (and superior, because they’re multi-user): SharePoint or Open Atrium.
  • Socialcast (or Yammer). Socialcast is Facebook for organizations. Share status, links, and files in a microblogging client. You can create categories and assign posts to them. The reasoning is the same as for email, and email might be your fallback if your co-workers won’t take to microblogging; but if they’re open to it, it’s a great way to keep a group of people easily informed.

It’s not that there aren’t other good ways to manage projects. Basecamp, or one of the many similar web apps, might be a better fit, particularly if the project team is widely dispersed geographically. SharePoint can replace a number of the tools listed here. But you don’t really have to spend a penny. You do need to plan, promote, and communicate.

Projects don’t fail because you’re not using capital “P” Project. They fail when there isn’t buy-in, shared understanding, and lots of interaction.

Peter Campbell is currently the Director of Information Technology at Earthjustice, a non-profit law firm dedicated to defending the earth. Prior to joining Earthjustice, Peter spent seven years serving as IT Director at Goodwill Industries of San Francisco, San Mateo & Marin Counties, Inc. Peter has been managing technology for non-profits and law firms for over 20 years, and has a broad knowledge of systems, email and the web. In 2003, he won a “Top Technology Innovator” award from InfoWorld for developing a retail reporting system for Goodwill thrift. Peter’s focus is on advancing communication, collaboration and efficiency through creative use of the web and other technology platforms.

Delicious Memories

This article was originally published on the Idealware Blog in December of 2010.

Like many of my NPTECH peers, I was dismayed to learn yesterday that Delicious, the social bookmarking service, was being put out to pasture by Yahoo!, the big company that purchased the startup five years ago.  Marshall Kirkpatrick of ReadWriteWeb has written the best memorial. But the demise of Delicious marks a passing of significant note to our community of nonprofit staff who seek innovative uses of technology.  So let me talk quickly about how Delicious brought me into this community, and, along the way, a bit about what it meant to all of us.

In 2002, I was wrapped up in my job as VP of Information Technology at San Francisco Goodwill.  At that time, the buzz term was “Web 2.0”, and it was all over the tech press with about a thousand definitions.  We all knew that “Web 2.0” meant the evolution of the web from a straight publisher-to-consumer distribution medium to something more interactive, but nobody knew exactly what. Around that time, I started reading columns by Jon Udell about RSS, a technology that would, as a simpler subset of XML, help us share web-based information the way that newspapers share syndicated content, such as comic strips and columns.  I was really intrigued.  The early adopters of RSS were bloggers, and what I think was very cool about this is that RSS was free technology that, like the web, advanced the opportunities of penniless mortals to become global publishers.  People who couldn’t tell an XML feed from an XL T-Shirt were championing an open standard, because it served as the megaphone in front of their soapboxes.

I kept my eye out for innovative uses of RSS, and quickly discovered Joshua Schachter’s del.icio.us website.  This was a social bookmarking service where, by adding a little javascript link to your web browser’s bookmark bar (or quick links, or whatever), you could quickly save any web page you enjoyed to an online repository for later retrieval.  That repository was public, so others could see what you found valuable as well.  And this is where Schachter was ahead of the curve, championing two information technology strategies that have, since that time, significantly changed the web: tagging and RSS.

Tagging

In addition to the link and a brief description, you could add keywords to each bookmark, and then later find related bookmarks by that keyword.  You could just find the bookmarks that you had tagged with a word, or you could find the bookmarks that anyone using Delicious had tagged with that word.  So, if you were studying the Russian Revolution, you could search Delicious for russia+revolution and find every bookmark that anyone had saved.  This was different than searching for the same terms in Google or Yahoo!, because the results weren’t just the most read; they were the sites that were meaningful enough to people to actually be saved.  Delicious became, as Kirkpatrick points out, a mass-curated collection of valuable information, more like Wikipedia than, say, the Yahoo! Directory.  Delicious was the lending library of the web.

RSS

In addition to searching the site for tags by keyword and/or user, any results your search found could be subscribed to via RSS.  This was crazy powerful! Not only could you follow topics of interest but, using PHP add-ons like MagpieRSS or aggregation functions like those built into Drupal, Joomla, and pretty much any major Content Management System, you could quickly incorporate valuable, easily updated content into your website.  I immediately replaced the static “Links” page on my website with one that grabbed items with a particular keyword from Delicious, so that updating that Links page was as easy as bookmarking a site that I wanted listed there.
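The pattern was simple enough to sketch in a few lines. Delicious is gone now, but here’s a rough illustration in Python (rather than the PHP I used at the time) of what that self-updating Links page boiled down to. The feed URL follows the old Delicious tag-feed convention from memory, and is shown for historical flavor only; any per-tag RSS feed works the same way:

```python
import feedparser  # third-party RSS/Atom parser: pip install feedparser

# Historical Delicious tag-feed URL (long defunct); substitute any tag feed.
FEED_URL = "http://feeds.delicious.com/v2/rss/tag/nptech"

def render_links_page(feed_url):
    """Fetch a tag feed and emit a simple HTML list of bookmarked links."""
    feed = feedparser.parse(feed_url)
    items = [
        '<li><a href="%s">%s</a></li>' % (entry.link, entry.title)
        for entry in feed.entries  # real code would HTML-escape the titles
    ]
    return "<ul>\n%s\n</ul>" % "\n".join(items)

print(render_links_page(FEED_URL))
```

Bookmark a site with the right tag, and the page updates itself on the next fetch — no HTML editing required.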

NPTECH

I wasn’t the only nonprofit strategist taking note of these developments.  One day, while browsing items that Delicious termed Popular (i.e., bookmarks that multiple people had saved to the site), I noted a blog entry titled “The Ten Reasons Nonprofits Should Use RSS”.  The article was written by one Marnie Webb of CompuMentor (now better known as TechSoup, where she is one of the CEOs).  A week or so later, while following the official Delicious email mailing list, I encountered Marnie again, and, this time, emailed her and suggested that we meet for lunch, based on our clearly common interest in nonprofits and RSS.  Marnie told me about the NPTech Tagging Project, an effort she started by simply telling her friends to tag websites related to nonprofit technology with the tag “nptech” on Delicious, so that we could all subscribe to that tag in our RSS readers.

Marnie and I believe that what we started was the first mass information referral system of this type.  In 2005 we took it up a level by creating the nptech.info website, which aggregates items tagged with nptech from Delicious, Twitter, Flickr and numerous other sources across the web. Nptech.info is now more widely read via its Twitter feed, @nptechinfo.

I think it’s safe to say that the nptech tagging project grew from a cool and useful idea and practice into a community, and a way that many of us identify who we are to the world.  I’m a lot of things, but nptechie sums most of them up into one simple word.  I know that many of you identify yourselves that way as well.

An offshoot of meeting Marnie on the Delicious mailing list was that she introduced me to NTEN and brought me into the broad community of nptech, and my current status as a blogger, writer, presenter, Idealware board member and happy member of this broad community ties directly back to the Delicious website.  I stopped using the site as a bookmarking service some time ago, as efforts that it inspired (like Google Reader sharing) became more convenient.  But I still subscribe to Delicious feeds and use it in websites.  Its demise will likely be the end of nptech.info.  Efforts are underway to save it, so we’ll see.  But even if this article is the first you’ve heard of Delicious, it’s important to know that it played a role in the evolution of nonprofit technology as the arbiter of all things nptech.  Its ingenuity and utility will be sorely missed.

What’s Up With The TechSoup Global/GuideStar International Merger?

This article was first published on the Idealware Blog in April of 2010.

TechSoup/GuideStar Int'l Logos

TechSoup Global (TSG) merged with GuideStar International (GSI) last week. Idealware readers are likely well-familiar with TechSoup, formerly CompuMentor, a nonprofit that supports other nonprofits, most notably through their TechSoup Stock software (and hardware) donation program, but also via countless projects and initiatives over the last 24 years. GuideStar International is an organization based in London that also works to support nonprofits by reporting on their efforts and promoting their missions to potential donors and supporters.

I spoke with Rebecca Masisak and Marnie Webb, two of the three CEOs of TechSoup Global (Daniel Ben-Horin is the founder and third CEO), in hopes of making this merger easier for all of us to understand. What I walked away with was not only context for the merger, but also a greater understanding of TechSoup’s expanded mission.

Which GuideStar was that?

One of the confusing things about the merger is that, if you digested the news quickly, you might be under the impression that TechSoup is merging with the GuideStar that we in the U.S. are well acquainted with. That isn’t the case. GuideStar International is a completely separate entity from GuideStar US, but with some mutual characteristics:

  • Both organizations were originally founded by Buzz Schmidt, the current President of GuideStar International;
  • They share a name and some agreements as to branding;
  • They both report on the efforts of charitable organizations, commonly referred to as nonprofits (NPOs) in the U.S.; Civil Society Organizations (CSOs) in the U.K.; or Non-Governmental Organizations (NGOs) across the world.

Will this merger change the mission of TechSoup?

TechSoup Global’s mission is working toward a time when every nonprofit and NGO on the planet has the technology resources and knowledge they need to operate at their full potential.

GuideStar International seeks to illuminate the work of every civil society organisation (CSO) in the world.

Per Rebecca, TechSoup’s mission has been evolving internally for some time. The recent name change from TechSoup to TechSoup Global is a clear indicator of their ambition to expand their effectiveness beyond the U.S. borders, and efforts like NGOSource, which helps U.S. Foundations identify worthy organizations across the globe to fund, show a broadening of their traditional model of coordinating corporate donors with nonprofits.

Unlikely Alliances

TechSoup opened their Fundacja TechSoup office in Warsaw, Poland two years ago, in order to better support their European partners and the NGOs there. They currently work with 32 partners outside of the United States. The incorporation of GSI’s London headquarters strengthens their European base of operations, as well as their ties to CSOs, as both TechSoup and GSI have many established relationships. GSI maintains an extensive database, and TechSoup sees great potential in merging their strength, as builders of relationships between entities both inside and outside of the nonprofit community, with a comprehensive database of organizations and missions.

This will allow them, as Rebecca puts it, to leverage an “unlikely alliance” of partners from the nonprofit/non-governmental groups, corporate world, funders and donors, and collaborative partners (such as Idealware) to educate and provide resources to worthwhile organizations.

Repeatable Practices

After Rebecca provided this context of TSG’s mission and GSI’s suitability as an integrated partner, Marnie unleashed the real potential payload. The goal, right in line with TSG’s mission, is to assist CSOs across the globe in the task of mastering technology in service to their missions. But it’s also to take the practices that work and recreate them. With a knowledge base of organizations and technology strategies, TechSoup is looking to grow external support for the organizations they serve by increasing and reporting on their effectiveness. Identify the organizations, get them resources, and expose what works.

All in all, I’m inspired by TSG’s expanded and ambitious goals, and look forward to seeing the great things that are likely to come out of this merger.

Wave Impressions

This post originally appeared on the Idealware Blog in November of 2009.

A few months ago, I blogged a bit about Google Wave, and how it might live up to the hype of being the successor to email.  Now that I’ve had a month or so to play with it, I wanted to share my initial reactions.  Short story: Google Wave is an odd duck that takes getting used to. As it is today, it is not that revolutionary — in fact, it’s kind of redundant. The jury is still out.

Awkwardness

To put Wave in perspective, I clearly remember my first exposure to email.  I bought my first computer in 1987: a Compaq “portable”. The thing weighed about 60 pounds, sported a tiny green-on-black screen, and had two 5¼-inch floppy drives for applications and storage.  Along with the PC, I got a 1200 BPS modem, which allowed me to dial up local bulletin boards.  And, as I poked around, I discovered the 1987 version of email: the line editor.

On those early BBSes, emails were sent by typing one line (80 characters, max) of text and hitting “enter”.  Once “enter” was pressed, that line was sent to the BBS.  No correcting typos, no rewriting the sentence.  It was a lot like early typewriters, before they added the ability to strike out previously submitted text.

But, regardless of the primitive editing capabilities, email was a revelation.  It was a new medium; a form of communication that, while far more awkward than telephone communications, was much more immediate than postal mail.  And it wasn’t long before more sophisticated interfaces and editors made their way to the bulletin boards.

Google Wave is also, at this point, awkward. To use it, you have to be somewhat self-confident right from the start, as others are potentially watching every letter that you type.  And while it’s clear that the ability to co-edit and converse about a document in the same place is powerful, it’s messy.  Even if you get over the sprawling nature of the conversations, which are only minimally better than  what you would get with ten to twenty-five people all conversing in one Word document, the lack of navigational tools within each wave is a real weakness.

Redundant?

I’m particularly aware of these faults because I just installed and began using Confluence, a sophisticated, enterprise wiki (free for nonprofits), at my organization. While we’ve been told that Wave is the successor to email, Google Docs and, possibly, SharePoint, I have to say that Confluence does pretty much all of those things and is far more capable.  All wikis, at their heart, offer collaborative editing, but the good ones also allow for conversations, plug-ins and automation, just as Google Wave promises.  But with a wiki, the canvas is large enough and the tools are there to organize and manage the work and conversation.  With Wave, it’s awfully cramped, and somewhat primitive in comparison.

Too early to tell?

Of course, we’re looking at a preview.  The two things that possibly differentiate Wave from a solid wiki are the “inbox” metaphor and the automation capabilities. Waves can come to you, like email, and anyone who has tried to move a group from an email list to a web forum knows how powerful that can be. And Wave’s real potential is in how the “bots”, server-side components that can interact with the people communicating and collaborating, will integrate the development and conversation with existing data sources.  It’s still hard to see all of that in this nascent stage.  Until then, it’s a bit chicken and egg.

Wave starting points

There are lots of good Wave resources popping up, but the best, hands down, is Gina Trapani’s Complete Guide, available online for free and in book form soon. Gina’s blog is a must-read for people who find the types of things I write about interesting.

Once you’re on Wave, you’ll want to find waves to join, and exactly how you do that is anything but obvious.  The trick is to search for a term such as “nonprofit” or “fundraising” and add the phrase “with:public”. A good nonprofit wave to start with is titled, appropriately, “The Nonprofit Technology Wave”.

If you haven’t gotten a Wave invite and want to, now is the time to query your Twitter and Facebook friends, because invites are being offered and we’ve passed the initial “gimme” stage.  In fact, I have ten or more to share (I’m peterscampbell on most social networks and at Google’s email service).

Why Geeks (like Me) Promote Transparency

This post was originally published on the Idealware Blog in November of 2009.
Public Domain image by Takada

Last week, I shared a lengthy piece that could be summed up as:

“in a world where everyone can broadcast anything, there is no privacy, so transparency is your best defense.”

(Mind you, we’d be dropping a number of nuanced points to do that!)

Transparency, it turns out, has been a bit of a meme in nonprofit blogging circles lately. I was particularly excited by this post by Marnie Webb, one of the many CEOs at the uber-resource provider and support organization TechSoup Global.

Marnie makes a series of points:

  • Meaningful shared data, like the Miles Per Gallon ratings on new car stickers or the calorie counts on food packaging, helps us make better choices;
  • But not all data is as easy to interpret;
  • Nonprofits have continually been challenged to quantify the conditions that their missions address;
  • Shared knowledge and metrics will facilitate far better dialog and solutions than our individual efforts have;
  • The web is a great vehicle for sharing, analyzing and reporting on data;
  • Therefore, the nonprofit sector should start defining and adopting common data formats that support shared analysis and reporting (a sketch of what that could look like follows below).
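To make that last point concrete: imagine every food bank, literacy program, and legal aid clinic publishing outcome data in the same machine-readable shape. The record below is purely hypothetical — the field names are mine, not drawn from any real standard — but it illustrates the kind of common format that would make shared analysis possible:

```python
import json

# A purely hypothetical shared outcomes record; field names are
# illustrative, not part of any actual sector standard.
outcome_record = {
    "organization": "Example Legal Aid",   # hypothetical reporting org
    "program": "eviction-defense",         # service category
    "period": "2009-Q3",                   # reporting period
    "clients_served": 412,                 # how many people were helped
    "cases_resolved_favorably": 301,       # the outcome being measured
}

# With a common format, any peer organization's report can be pooled
# and compared with yours without translation.
print(json.dumps(outcome_record, indent=2))
```

The payoff isn’t the format itself; it’s that everyone’s data becomes comparable, which is what shared analysis and reporting require.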

I’ve made the case before for shared outcomes reporting, which is a big piece of this. Sharing and transparency aren’t traditional approaches to our work. Historically, we’ve siloed our efforts, even to the point where membership-based organizations are guarded about sharing with other members.

The reason that technologists like Marnie and I end up jumping on this bandwagon is that the tech industry has modeled the dysfunction of a siloed approach better than most. Early computing was an exercise in cognitive dissonance. If you regularly used Lotus 1-2-3, WordPerfect and dBase (three of the most popular business applications circa 1989) on your MS-DOS PC, then hitting “/“, F7 or “.” were the things you needed to know in order to close those applications, respectively. For most of my career, I stuck with PCs for home use because I needed compatibility with work, and the Mac operating system, prior to OS X, just couldn’t easily provide that.

The tech industry has slowly and painfully progressed towards a model that competes on the sales and services level, but cooperates on the platform side. Applications, across manufacturers and computing platforms, function with similar menus and command sequences. Data formats are more commonly shared. Options are available for saving in popular, often competitive formats (as in Word’s “Save As” offering WordPerfect and Lotus formats). The underlying protocols that fuel modern operating systems and applications are far more standardized. Windows, Linux and Mac OS all use the same technologies to manage users and directories, network systems, and communicate with the world. Microsoft, Google, Apple and others in the software world are embracing open standards and interoperability. This makes me, the customer, much less of an innocent bystander who is constantly sniped by their competitive strategies.

So how does this translate to our social service, advocacy and educational organizations? Far too often, we frame cooperation as the antithesis of competition. That’s a common, but crippling, mistake. The two can and do coexist in almost every corner of our lives. We need to adopt a “rising tide” philosophy that values the work that we can all do together over the work that we do alone, and have some faith that the sustainable model is an open, collaborative one. That means looking at each opportunity to collaborate from the perspective of how it will enhance our ability to accomplish our public-serving goals, and trusting that this won’t result in the similarly-focused NGO down the street siphoning off our grants or constituents.

As Marnie is proposing, we need to start discussing and developing data standards that will enable us to interoperate on the level where we can articulate and quantify the needs that our mission-focused organizations address. By jointly assessing and learning from the wealth of information that we, as a community of practice, collect, we can be far more effective. We need to use that data to determine our key strategies and best practices. And we have to understand that, as long as we’re treating information as competitive data, as long as we’re keeping it close to our vests and looking at our peers as strictly competitors, the fallout of this cold war is landing on the people that we’re trying to serve. We owe it to them to be better stewards of the information that lifts them out of their disadvantaged conditions.

Swept Up in a Google Wave

This article was originally published on the Idealware Blog in September of 2009.

Photo by Mrjoro.

Last week, I shared my impressions of Google Wave, which takes current web 2.0/Internet staple technologies like email, messaging, document collaboration, widgets/gadgets and extranets and mashes them up into an open communications standard that, if it lives up to Google’s aspirations, will supersede email.  There is little doubt in my mind that this is how the web will evolve.  We’ve gone from:

  • The Yahoo! Directory model – a bunch of static web sites that can be cataloged and explored like chapters in a book, to
  • The Google needle/haystack approach – the web as a repository of data that can be mined with a proper query, to
  • Web 2.0, a referral-based model that mixes human opinion and interaction into the navigation system.

Many of us no longer browse, and we search less than we used to, because the data that we’re looking for is either coming to us through readers and portals where we subscribe to it, or it’s being referred to us by our friends and co-workers on social networks.  Much of what we refer to each other is content that we have created. The web is as much an application as it is a library now.

Google Wave might well be “Web 3.0“, the step that breaks down the location-based structure of web data and replaces it completely with a social structure.  Data isn’t stored as much as it is shared.  You don’t browse to sites; you share, enhance, append, create and communicate about web content in individual waves.  Servers are sources, not destinations in the new paradigm.

Looking at Wave in light of Google’s mission and strategy supports this idea. Google wants to catalog, and make accessible, all of the world’s information. Wave has a data mining and reporting feature called “robots”. Robots are database agents that lurk in a wave, monitoring all activity, and then pop in as warranted when certain terms or actions trigger their response.  The example I saw was of a nurse reporting in the wave that they’re going to give patient “John Doe” a peanut butter sandwich.  The robot has access to Doe’s medical record, is aware of a peanut allergy, and pops in with a warning. Powerful stuff! But the underlying data source for Doe’s medical record was Google Health. For many, health information is too valuable and easily abused to be trusted to Google, Yahoo!, or any online provider. The Wave security module that I saw hid some data from Wave participants, but was based upon the time that the person joined the Wave, not ongoing record-level permissions.
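Stripped of the Wave plumbing, the robot pattern is easy to picture. Here’s a generic sketch in Python — emphatically not the actual Wave robots API, and the patient data and function names are hypothetical — of an agent that scans each submitted message against a data source and injects a warning when a trigger matches:

```python
# A generic sketch of the "robot" pattern described above: an agent
# watching messages in a shared conversation. This is NOT the Google
# Wave robots API; the data and names here are hypothetical.

ALLERGY_RECORDS = {"john doe": ["peanut"]}  # stand-in for a medical record source

def on_message(patient, message, post_reply):
    """Called for each new message; warn if the message mentions an allergen."""
    for allergen in ALLERGY_RECORDS.get(patient.lower(), []):
        if allergen in message.lower():
            post_reply("Warning: %s has a recorded %s allergy." % (patient, allergen))

# Simulate the nurse's message arriving in the conversation:
on_message("John Doe", "Giving the patient a peanut butter sandwich", post_reply=print)
```

The power — and the risk — lives entirely in what data source the agent is wired to, which is exactly the privacy question raised above.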

This doesn’t invalidate the use of Wave, by any means — a wave that is housed on the Doctor’s office server, and restricted to Doctor, Nurse and patient could enable those benefits securely. But as the easily recognizable lines between cloud computing and private applications; email and online community; shared documents and public records continue to blur, we need to be careful, and make sure that the learning curve that accompanies these web evolutions is tended to. After all, the worst public/private mistakes on the internet have generally involved someone “replying to all” when they didn’t mean to. If it’s that easy to forget who you’re talking to in an email, how are we going to consciously track what we’re revealing to whom in a wave, particularly when that wave has automatons popping data into the conversation as well?

The Wave-as-internet-evolution idea supports a favored notion: data wants to be free. Open data advocates (like myself) are looking for interfaces that enable that access, and Wave’s combination of creation and communication, facilitated by simple but powerful data mining agents, is a powerful front end.  If it truly winds up as easy as email, which is, after all, the application that enticed our grandparents to use the net, then it has culture-changing potential.  It will need to bring the users along for that ride, though, and it will be interesting to see how that goes.

——–

A few more interesting Google Wave stories popped up while I was drafting this one. Mashable’s Google Wave: 5 Ways It Could Change the Web gives concrete examples for some of the ideas I floated last week; and, for those of you lucky enough to have access to Wave, here’s a tutorial on how to build a robot.

Beta Google Wave accounts can be requested at the Wave website.  They will be handing out a lot more of them at the end of September, and they are taking requests to add them to any Google Domains (although the timeframe for granting the requests is still a long one).