Monthly Archives: April 2009

How to Send an All Staff Technical Email

This post was originally published on the Idealware Blog in April of 2009.

I had big plans for another insightful, deep, break-down-the-walls-of-the-corporate-culture-that-diminishes-use-of-technology post today, but I think I’m gonna save it for a rainy day and write something a bit more useful, instead. I have a big nonprofit technology conference coming up this weekend, as you might, too, and I think we should all be resting up for it.

The most important skill for any IT staff person to have is the ability to communicate. All of the technical expertise in the world has little value without it, because, if you can’t tell people what you’re doing, what you’re doing won’t be well-received. And there is an art, particularly with tech, to telling people what you’re doing, whether it’s taking the system down for maintenance or upgrading staff from Notepad to Office 2007.

Here are my five rules for crafting a technical email that even my most computer-phobic constituents will read:

  • Let no acronym go unexplained

The simplest, worst mistake that techies regularly make is to tell people that

“The internet will be down while we reconfigure the DHCP server” or

“The database will be unavailable while we replace the SCSI backplane”.

Best practice is to avoid the technical details in the announcement, if possible. But if it’s relevant, speak English: “In order to accommodate the growth of our staff, we need to reconfigure the server that assigns network resources to each system to allow for more connections.”

  • Be clear, concise and consistent in your subjects

Technical messages should have easily recognizable subjects, so that staff can quickly determine relevance. If your message is titled “Technical Information”, it might as well be titled “You are getting sleepy…” But, if it’s titled “Network Availability” or “Database Maintenance Scheduled”, your staff will quickly figure out that these are warnings that are relevant to them. Don’t worry about the Orwellian aspect of announcing system downtime with a message about availability. The point here is that using consistent phrasing will grab staff’s attention far more effectively than bolding, underlining and adding red exclamation points to the email (see rule 4).

  • Keep it short and simple

It’s about what the staff needs to know, not what you’d like to tell them.  So, the network maintenance email should not read:

“The systems will be down from 4:30 to 9:00 tonight while we replace drives in the domain controllers and run a full defrag on the main document server”

It should read:

“The network will be unavailable from 4:30 pm until 9:00 pm while we perform critical maintenance”.

If it’s only a portion of the network, but something useful will be up – as when the file servers are being repaired, but email is still available – make a note of that: “While the main servers will not be available, you will still be able to send and receive email”.

  • No ALL CAPS, no exclamation points!!! and go sparingly on the bold

System downtime might be urgent to you, but it’s never urgent to the staff. It’s a fact of life. A reply from the Director of Online Giving that the downtime will jettison a planned online campaign is urgent; your routine announcement is not.

  • Tell the whole story

…even if this sounds like it conflicts with rule 3.  Because there are two types of people on your staff:

  1. The majority, who want simple, non-techie messages as described above
  2. The rest, who want the gory details, either so they can rest easy that you aren’t making anything up, or because they’re actually interested in what you’re up to.

My approach is to do the simple message and, below it, type “Technical Details (optional reading)”. In this section I might explain that we’re replacing the server that processes their network logins (I won’t use “DHCP” or “Domain Controller” if I can help it) or that we’re upgrading to the new version of Outlook.

The key concepts here are consistency, simplicity, and a focus on how what you’re doing impacts your staff. Stick to them and, miraculously, people might start reading your all staff emails.

Where I’ll Be at NTC

Five days from now, the Nonprofit Technology Conference starts here on my home turf, in San Francisco, and I’m hoping to catch a few seconds or more of quality time with at least 200 of the 1400 people attending. Mind you, that’s in addition to meeting as many new people as possible, since making connections is a lot of what NTC is about. So, in case you’re trying to track me down, here’s how to find me at NTC.

Saturday — I’ll be home prepping, on email and Twitter, and then off to Jupiter in Berkeley (2181 Shattuck, right at Downtown Berkeley BART) at 6:00 pm for the Pre-NTC Brewpub Meetup I’m hosting. We have a slew of people signed up at NTConnect for the event. If you’re coming, get there promptly so you can help me reserve adequate space!

Sunday morning is Day of Service. I’ll be advising a local education nonprofit on low cost options for enhanced voice and video. NTC kicks off with the Member Reception, and I suspect that there will be lots of talk about our book at that event – if we’ve never met, this will be a good chance to figure out which of the 1400 attendees I am.

The Science Fair – NTEN’s unique take on the vendor show – is always a blast. If you’re at a booth, I’ll be coming by, but I’ll also be spending some time manning the Idealware booth, so that’s another good place to catch up. Dinner Sunday? I haven’t made plans. What are you doing?

Monday I keep busy hosting two sessions.

At 3:30, I’m at a loss, with excellent sessions by Peter Deitz, Allen (Gunner) Gunn, David Geilhufe, Dahna Goldstein, Jeff Patrick, Robert Weiner and Steve Wright all competing equally for my attention. If Hermione Granger is reading this, perhaps she can help me out.

On Tuesday, my tentative plan includes these breakouts: Google Operations: Apps and Analytics; Evolution of Online Communities: Social Networking for Good; and Measuring the Return on Investment of Technology. I caught a preview of the last one, led by Beth Kanter, at a Pre-NTC get together we did at Techsoup last month; it’s going to be awesome.

As a local co-host of the 501 Tech Club and a member of this year’s planning committee, I consider myself one of your hosts and am happy to answer any questions you have about what there is to do in the Bay Area, where I’ve lived since 1986. The best way to reach me is always on Twitter – if you’re attending the conference and following me, and I don’t figure that out and follow you right back, send me a quick tweet letting me know you’re at NTC and I will (although, disclaimer required, I will quickly block people who use Twitter as a means to market products to my org). If you haven’t already gotten this hint, Twitter is an awesome way to keep connected during an event like this.

The ROI on Flexibility

This post originally appeared on the Idealware Blog in April of 2009.

Nonprofit social media maven Beth Kanter blogged recently about starting up a residency at a large foundation, and finding herself in a stark transition from a consultant’s home office to a corporate network. This sounds like a recipe for corporate culture shock. When your job is to download many of the latest tools and try new things on the web that might inform your strategy or make a good topic for your blog, encountering locked-down desktops and web filtering can be, well, annoying is probably way too soft a word. Beth reports that the IT team was ready for her, guessing that they’d be installing at least 72 things for her during her nine-month stay. My question to Beth was, “That’s great – but are they just as accommodating to their full-time staff, or is flexibility reserved for visiting nptech dignitaries?”

The typical corporate desktop computer is restricted by group policies and filtering software. Management, along with the techs, justifies these restrictions in all sorts of ways:

  • Standardized systems are easier and more cost-effective to manage.
  • Restricted systems are more secure.
  • Web filtering maximizes available bandwidth.

This is all correct. In fact, without standardization, automation, group policies that control what can and can’t be done on a PC, and some protection from malicious web sites, any company with 15 to 20 desktops or more is really unmanageable. The question is, why do so many companies take this ability to manage by controlling functionality and push it to extremes?

Because, in many, if not most, cases, the restrictions put in place are far broader than is necessary to keep things manageable. Web filtering not only blocks pornography and spyware, but continues on to sports, entertainment, politics, and social networking. Group policies restrict users from changing their desktop colors or setting the system time. And using these standardization tools to intensively control computer usage most often results in IT working harder to manage the exceptions to the rules (like Beth’s 72, above) than they would have worked on the tasks that the automation simplifies in the first place.

Restricting computer/internet use is driven by a management and/or IT assumption that the diverse, dynamic nature of computing is either a distraction or a problem. The opportunity to try something new is an opportunity to waste time or resources. By locking down the web and eliminating a user’s ability to install applications or even access settings, PCs can be engineered back down to the limited functionality of the office equipment that they replaced, such as typewriters, calculators and mimeograph machines.

In this environment, technology is much more of a controlled, predictable tool. But what’s the cost of this predictability?

  • Technology is not fully appreciated, and computer literacy is limited in an environment where users can’t experiment.
  • Strategic opportunities that arise on the web are not noticed and factored into planning.
  • IT is placed in the role of organizational nanny, responsible for curtailing computer use, as opposed to enabling it.

Cash- and resource-strapped, mission-focused organizations need only look around to see the strategic opportunities inherent in the web. There are an astounding number of free, innovative tools for activism and research. Opportunities to monitor discussion of your organization and issues, and to meaningfully engage your constituents, are huge. And all of this is fairly useless if your staff are locked out of the web and discouraged from exploring it. Pioneers like Beth Kanter understand this. They seek out the new things and ask, how can this tool, this web site, this online community serve our sector’s goals to ease suffering and promote justice? More specifically, can you end hunger in a community with a widget? Or bring water to a parched village via Twitter? If our computing environment is geared to stifle innovation in the name of security, are we truly supporting technology?

As the lead technologist at my organization, I want to be an enabler. I want to see our attorneys use the power of the web to balance the scales when we go to court against far better resourced corporate and government counsel. In this era of internet Davids taking down Goliaths from the RIAA to the mainstream media, I don’t want my co-workers to miss out on any opportunities to be effective. So I need the flexibility and perspective to understand that security is not something that you maintain with a really big mallet, lest you stamp out innovation and strategy along with the latest malware. And, frankly, cleaning a case of the Conficker worm off the desktop of an attorney who just took down a set of high-paid corporate attorneys – with data grabbed from some innovative mapping application that our web-filtering software would have mistakenly identified as a gaming site – is well worth the effort.

Flexibility has its own Return on Investment (ROI), particularly at nonprofits, where we generally have a lot more innovative thinking and opportunistic attitude than available budget. IT has to be an enabler, and every nonprofit CIO or IT Director has to understand that security comes at a cost, and that cost could be the mission-effectiveness of our organizations.

More RSS Tools: Sharing Feeds

This post was first published on the Idealware Blog in April of 2009.

For my last follow-up to my RSS article, Using RSS Tools to Feed your Information Needs, I want to discuss OPML, the standard format for sharing RSS feed subscriptions, and talk a bit about why RSS, which is already quite useful, is about to become an even bigger deal. Last week, I discussed sharing research with Google Reader; before that, filtering RSS feeds with Yahoo! Pipes; and I started with a post about integrating content with websites.

Admitting that I might represent an extreme case, I subscribe to 96 feeds in Google Reader. I started with Google Reader last December – prior to that, I used a Mac RSS reader called Vienna. Moving from Vienna to Google Reader might have been a chore, but it wasn’t, thanks to Outline Processor Markup Language (OPML). The short story on OPML is that it was developed as a standard format for outlining. While it is used in that capacity, it’s more commonly used as a format for collecting a list of RSS feeds, with last-read pointers, that can then be processed by other feed-reading software. So, I exported all of my feeds from Vienna to an .opml file, then imported that into Google Reader, and all of my feeds were instantly set up. If you run a WordPress blog, you can rapidly build your blogroll by importing an .opml file.
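
If you’re curious what that hand-off actually looks like under the hood, here’s a minimal sketch – in Python, with a hypothetical file name – that pulls the feed list out of an OPML export using nothing but the standard library:

```python
# Minimal sketch: read the feeds out of an OPML export.
# "vienna-export.opml" is a hypothetical file name.
import xml.etree.ElementTree as ET

def list_feeds(opml_path):
    """Yield (title, feed URL) pairs for every feed in an OPML file."""
    tree = ET.parse(opml_path)
    # Each feed is an <outline> element with an xmlUrl attribute;
    # folder-style <outline> groups lack that attribute, so skip them.
    for outline in tree.getroot().iter("outline"):
        url = outline.get("xmlUrl")
        if url:
            yield outline.get("title") or outline.get("text"), url

for title, url in list_feeds("vienna-export.opml"):
    print(title, "-", url)
```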

In addition to sharing feed information with applications, OPML can be used to share a group of feeds with a co-worker, friend or constituent. Say your org does advocacy on a particular issue, and you’ve collected a set of feeds that represent the best news and commentary on your issue. You could make the OPML file available on your web site for your constituents to incorporate in their readers.

At this point, you might be saying to yourself, “What are the odds that my constituents even know what a feed reader is? Wouldn’t making this available be more likely to confuse than help people?” As good a question as that is, here’s why I think that you won’t be asking it for long. RSS has seen quick and steady adoption as a standard web service. Four years ago, it was obscure; today every content management system and web portal supports it. It features prominently in the strategic plans of tech giants like Google, Microsoft and Yahoo!. But it’s not as well-known by the general computing public — RSS has yet to become a real household concept, like search and email have. The game-changer is underway, though. Last month, the Seattle Post-Intelligencer, one of Seattle’s primary daily papers, ceased print publication. The San Francisco Chronicle announced last month that they are making one last-ditch effort, with a redesign and new printing presses, to stem the growing budget deficit that they face. Competition from TV and the web is driving newspapers out of business, and the hope that something will reverse this trend is thin.

As the internet becomes the primary source of news and opinion, RSS is a natural fit as the delivery medium. You can see that all of the Seattle PI sections are available as RSS feeds, and they have an option to customize the news and features that you see on your homepage. How long before they offer your customized paper as an OPML file, allowing you to instantly replicate your web experience in a reader?

In 1995, internet email was an arcane, technical concept. I figured out that I could send mail to an Internet address using my company’s MCI Mail account. My email address was 75 characters long. RSS may seem similarly arcane today, but it’s well on the road to being a mainstream method of internet information delivery. Your partners and constituents won’t just appreciate your support for it; they’ll start to expect it. I hope that my article and these follow-ups in the blog can serve as a good starting point for understanding what RSS can do and what you might do with it.

More RSS Tools: Using Google Reader for Research and Sharing

This post was originally published on the Idealware Blog in March of 2009.

Google Reader gets a good mention in my RSS article, Using RSS Tools to Feed your Information Needs, but deserves an even deeper dive. This is a follow-up to that article, along with my recent posts on Integrating content with websites and Managing Content with Pipes. We’ve established that an RSS reader helps you manage internet information far more efficiently than a web browser can, and we’ve talked in the last few posts about publishing feeds to your web site. This post focuses on using tools like Google Reader to share research.

Out of the box, GReader (as it’s affectionately known) is a powerful, web-based reader that lets you subscribe to, mark and share items, and sharing works in two significant ways. Shared Items are items that get published to a public page that you can point your friends and co-workers to. Further, this page can be subscribed to via RSS as well, so it can be republished to your web site, or integrated into a Facebook feed. Using (fake) bill 221b as an example, if you monitor for and selectively share articles related to the bill, you can easily point co-workers and constituents to your shared page, and/or republish those items in places where your audience will see them.

Shared Items are also made available to other GReader users who choose to share with you. This offers a greater level of convenience for teams working with shared research; it can also afford a level of confidentiality if you don’t want to publicize a public page. Not only can you share the items you find; you can also tag them, much like you would with Delicious or Flickr, and add a note if you have thoughts or context to share. A function recently added to GReader takes this even further – shared items can be commented on, much as a blog post can.

The last bit to add to this arsenal is a very powerful, but not terribly obvious GReader feature. The Note in GReader bookmarklet (which you can drag to your web browser’s quick links or bookmarks toolbar from the GReader “Notes” page) lets you share, with comments and tags, pages that you find on the web as GReader shared items. So if you run across something that isn’t in your feeds (and there’s plenty of web content that can’t be subscribed to), this lets you add it to your shared items.

What I’ve found is that, as much as I admire social bookmarking sites like Delicious, they become a lot less useful when I can store all of the pages that I find via RSS or browsing, with tags and an option to share them, in the same convenient place.

It’s important to note that, as powerful as all of this is, it still lacks some functionality that similar tools have. One great advantage of using Delicious as a link-sharing tool is that you can share links specific to any tag (or set of tags); Google Reader doesn’t offer multiple shared pages based on filtering criteria. And while you can add notes to your feed (without adding links), it’s not as flexible a repository as a tool like Evernote, which lets you save web pages, PDFs and all sorts of documents to a single web-based folder.

Also, Google Reader isn’t the only game in town. The Newsgator family of RSS readers offers similar sharing functions, some of which overcome the limitations above, as do other readers out there (please share your favorite in the comments).

What it boils down to, though, is that we now have powerful, integrated options for online research, as individuals, as teams, and as information agents for our constituents. The convenience of publishing as you discover is a significant advancement over earlier schemes, which usually involved either sending a lot of easily-lost links by email, or submitting your finds to a webmaster, who would then add them to a page on your site. This is a publish-as-you-find approach that incorporates sharing and communication into the research process.

Next week, I’ll finish up the “More RSS Tools” series with a post about OPML, the way that you make your collection of feeds portable.

More RSS Tools: Managing Content with Pipes

This post first appeared on the Idealware Blog in March of 2009.

I’m continuing with follow-up topics from my RSS article, Using RSS Tools to Feed your Information Needs. Last week, I discussed integrating content with websites, and this week I’m going to dive into one of the more advanced ways to work with RSS content. This gets a little geeky, but it really shows off some of the sophistication of this technology.

The article provides numerous examples of RSS sources, but all in the form of web sites, blogs and web services that offer you one or more streams of information. If you want to narrow your view beyond the feeds available on a site, say, because you are only interested in Idealware posts about CRM tools or the ones written by Steve Backman, then you need a tool that will refine your search. Alternatively, you might want to put a section containing news stories relevant to a particular issue on your site, but want some control over the sources, as well as the subject matter. For this amount of control over the content you retrieve, you want to use something like Yahoo! Pipes.

Pipes is an RSS mashup editor. It’s a tool that looks a bit like Microsoft’s Visio, where you drag boxes onto a grid and draw relationships between them. But it’s not a layout or flowcharting tool; instead, it’s a visual mapping and filtering tool that lets you identify sources and then apply rules to those sources before merging them into an aggregated feed. To break that down, let’s say that your goal is either to monitor talk about a bill or to publish a section on your web site titled “What they’re saying about bill 221b” (I made that bill up). You have identified eight blogs that have good posts on the subject, and these are blogs that you trust to properly represent the issues and not, in any way, malign or confuse your efforts.

In Pipes, you can select all eight as sources, and then set up a filter to block any posts that don’t reference “221b”. The resulting RSS feed — which you can then subscribe to or republish — will isolate the posts that are relevant to the bill from your selected sources.

For example, here’s a pipe that will allow you to skip posts by Michelle, Heather, Paul, Laura, Eric and me, and just see Steve’s:

[Screenshot: the author-filtering pipe in the Yahoo! Pipes editor]
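
Outside of Pipes, the same merge-and-filter logic is easy to express in ordinary code. Here’s a rough Python sketch of the “221b” pipe described above, using the third-party feedparser library; the feed URLs are placeholders standing in for your eight trusted blogs:

```python
# Rough equivalent of the "221b" pipe: merge several feeds, keep only
# entries that mention the keyword. Requires feedparser
# (pip install feedparser); the URLs below are placeholders.
import time
import feedparser

SOURCES = [
    "https://example.org/blog-one/feed",
    "https://example.org/blog-two/feed",
    # ...the rest of your eight trusted blogs...
]

def posts_about(keyword, sources):
    """Return entries from all sources that mention the keyword."""
    matches = []
    for url in sources:
        for entry in feedparser.parse(url).entries:
            text = entry.get("title", "") + " " + entry.get("summary", "")
            if keyword.lower() in text.lower():
                matches.append(entry)
    # Newest first, as an aggregated feed would present them.
    matches.sort(key=lambda e: e.get("published_parsed") or time.gmtime(0),
                 reverse=True)
    return matches

for entry in posts_about("221b", SOURCES):
    print(entry.title, "-", entry.link)
```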

Another, more advanced example: You have an organizational Twitter feed that you want to republish to your site, but you only want to publish your posts, not your individual replies. In Twitter, a reply is always identifiable by the very first character, which will be an “@” sign. Twitter RSS items arrive in the format “yourtwitterid: tweet”, so any reply will start with “yourtwitterid: @”. Setting up a Yahoo! Pipes filter to block any result with “: @” in the text will isolate your posts from the replies. You can then add a “Regex” (i.e., search and replace) command to replace “yourtwitterid:” with nothing, in order to publish just the tweet. The pipe will look like this:

[Screenshot: the Twitter reply-filtering pipe in the Yahoo! Pipes editor]
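
The same two steps – drop the replies, then strip the account prefix – take only a few lines in code. A minimal Python sketch, with a placeholder account name and sample items:

```python
# Minimal sketch of the filter-and-strip steps from the pipe above.
# "yourtwitterid" and the sample items are placeholders.
import re

items = [
    "yourtwitterid: @afriend thanks for the retweet!",     # a reply
    "yourtwitterid: Our spring campaign launches today.",  # a post
]

def publishable(items, account="yourtwitterid"):
    """Drop replies, then strip the account prefix from each item."""
    for item in items:
        if item.startswith(account + ": @"):  # the filter step
            continue
        # The "Regex" step: replace the account prefix with nothing.
        yield re.sub("^" + re.escape(account) + r":\s*", "", item)

print(list(publishable(items)))
# ['Our spring campaign launches today.']
```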

If you play with Pipes (Yahoo! ID required, otherwise free), I highly recommend starting with an example like mine or this one by Gina Trapani to get the feel of it. Save your pipe, and you can subscribe to it — it updates automatically, and you don’t have to make it public for it to work.

Google has its competing Google Mashups tool in private beta, and similar tools are popping up all over the web. I talk a lot about how RSS is the technology that allows us to manage the information on the web. Pipes lets us refine it. It’s great stuff.

Look for more RSS talk on OPML files and Google Reader in my upcoming posts.