Tag Archives: Idealware Blog

Happy 10th Anniversary!

Just a quick post to commemorate ten years of blogging here at Techcafeteria. That's 268 entries, averaging nearly 27 posts per year, or better than two posts a month, which is not too shabby for a guy with a family and a demanding day job. The most popular stuff all now lives in my Recommended Posts section.

The goal here has never been much more than to share what I hope is useful and insightful knowledge on how nonprofits can make good use of technology, peppered with the occasional political commentary or rant, but I try to restrain myself from posting too many of those. After my recent reformat, I think I’ve made it much easier for visitors to find the content that interests them, so if you’re one of my many RSS subscribers, and you haven’t actually visited the site for some time, you should take a look.

I’m ever thankful to Idealware, NTEN, Techsoup, CommunityIT, and many others in the nptech community for giving me the opportunity to write for their blogs and republish here (about two thirds of the content, I suspect). And I’m happy to be part of this global, giving community.

Here’s to the next ten years!

My Foray Into Personal Fundraising

This article was first published on the Idealware Blog in December of 2011.

My work planning for, evaluating, and deploying technology at nonprofits requires that I have a good understanding of fundraising concepts and practices, and I do. It's an area I'm knowledgeable about, but I'm no expert. So my current personal fundraising campaign for Idealware is an amateur effort. It is, happily, a successful one. I did some things right, including, I think, making strategic use of my social networking connections and channels.

I might have done a few things differently, given what I’ve learned.  And much of the success has been instructive.

Setting Up The Campaign

As both a board member and an ardent supporter of Idealware, I give annually and encourage my friends to do the same.  But this year I wanted to step it up, so I suggested that we use Razoo, an online personal fundraising platform, to host campaigns.  It turned out that I was behind the times — fellow board member Steve Bachman had already started a Razoo campaign, and Idealware had registered as a Razoo charity.

I signed up for my Razoo account and clicked the "Fundraise" link. Setting up the campaign was much like setting up a profile on a social network: name, description, graphic upload, etc. I went for nothing too fancy with the name and graphic ("The Idealware Research Fund" and the logo, respectively), and set about writing as plain and honest a description/appeal as I could, approaching it as what I would say if I asked you to donate to Idealware and you said "Why?".

I set a modest goal of $750, and announced my intention to match half of that.  I was a little cagey about the matching requirements, saying that I would match up to $375 when I had already pledged that amount to Idealware.  My expectation, going in, was that I could probably raise $375 and my match would bring me to goal.  So I’m happy that, as of this writing, I’ve raised $750 and added my donation to that, well exceeding the goal.

Campaigning

My campaign targets were my social media contacts. To that end, I downloaded an Excel spreadsheet of all 530 of my LinkedIn connections and pared it down to the 325 or so who met one criterion: they were either familiar with Idealware and supportive of the work, or unfamiliar but likely to support it. I didn't target my staff and co-workers, and I left out some family and non-professional connections that I didn't imagine would be all that personally motivated by Idealware's work. But I left a bunch of them in, too.

I wanted the appeal to clearly come from me, so I didn’t send the appeal through LinkedIn.  I used my personal email. I wanted to avoid spam filters, so the email was plain text, and I sent it in batches of ten people at a time, cutting and pasting from the spreadsheet to Gmail’s “to” field, which was nice enough to automagically format them with commas between each email address.  The mailing process, from LinkedIn download to final click of the “Send” button, took about four hours.
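That four-hour download-pare-batch-paste routine is exactly the kind of thing a short script can do in seconds. Here's a minimal sketch in Python's standard library; the file name, column header, addresses, and subject line are all hypothetical placeholders, not details from my actual campaign:

```python
import csv
import smtplib
from email.message import EmailMessage

def batches(items, size=10):
    """Split a list into chunks of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def build_message(recipients, sender, subject, body):
    """Build one plain-text appeal addressed to a batch of contacts."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = ", ".join(recipients)
    msg["Subject"] = subject
    msg.set_content(body)  # plain text only, to stay clear of spam filters
    return msg

# Hypothetical usage, assuming a "contacts.csv" exported from LinkedIn
# with an "Email Address" column:
#
# with open("contacts.csv") as f:
#     addresses = [row["Email Address"] for row in csv.DictReader(f)]
# for batch in batches(addresses, 10):
#     msg = build_message(batch, "me@example.org", "Supporting Idealware", appeal)
#     smtplib.SMTP("localhost").send_message(msg)
```

The point isn't automation for its own sake; it's that batching and plain text are easy to keep consistent when a script does the clerical part and you spend your time on the appeal itself.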

I made it clear up front in my email that the recipients were LinkedIn contacts of mine.  I’m sensitive to spam, even for worthwhile causes, and I wanted everyone to know that this wasn’t a random email, nor was it a list that would be used again.  Next campaign, I’ll start from scratch again.

With the emails sent, I tweeted, Facebooked, and Google+ed the effort.

Follow-up

I got a healthy response to my email blast, raising $500 in a couple of days.  It was great to also get emails from friends who passed on donating to my campaign because they’d already donated directly, or through another campaign. As donations came in, I tweeted and posted thanks to the donors on my Facebook page. The tweets included a link back to the campaign, of course.  A week and a half in, I posted new tweets and statuses and that, too, got a good response.  At $80 to goal, I tweeted how close we were, and longtime Idealware contributor and advisor Michael Stein jumped in and brought us to $750, at which point I added my $375.

Takeaways

I think my key successes were keeping it human and relatively low-key (no follow-up emails or persistent nagging, but, between the public thank-yous and a ten-day social media reminder, a fairly consistent broadcast), and having the benefit of supporting a cause that's pretty unimpeachable.

I’m pretty sure that sending more personalized emails and making phone calls would have yielded more funding.  Next time, I might trim the number of people I reach out to personally, but increase the personal nature of the appeal.

25 of my 26 donations came from people who were already familiar with Idealware (one was from someone who works here!). I'm sure all 25 of them have been to one or more NTEN conferences. I had little luck convincing people new to the cause to donate. Some of my fellow board members are focusing on family and other associates, and it's a harder sell. I think that's somewhat understandable. We all support causes that are important to us, and Idealware is going to appeal either to simpatico types like myself (I was on board with Idealware's mission before Laura set up shop) or to people who have directly benefited.

For myself, I regularly support Idealware and orgs like them, my own employer (because the earth really does need a good lawyer!), and a collection of causes whose missions really resonate with me and whose reputations hold up. But that's a fraction of the orgs I would contribute to if I could afford more. Who we pony up the checks for is a very personal matter. I'm thrilled that a significant percentage of the people I appealed to heeded the call, and it speaks to the great work that Idealware does. But I fault no one that I appealed to, as I'm certain that the ones who passed up my cause have worthwhile causes of their own.

All that said, if you want to help out Idealware, you can do so via the red button above, or via my campaign at Razoo, which runs through December 31st.

Is It Time To Worry About Cybercrime?

This article was originally posted on the Idealware Blog in September of 2011.

For the past decade, the bulk of unlawful web-based activity has been profit-motivated: phishing, spam, "Nigerian" money scams, and hacking to steal credit cards. This year has seen a rise in politically motivated crimes, most widely exemplified by the loosely-knit group of hackers known as "Anonymous". Anonymous hackers attack the websites of organizations, be they government, corporate or otherwise, that they deem repressive or unethical. In addition to defacing the sites, they've also routinely exposed confidential user information, such as login names, passwords and addresses. If we are now entering an age where political cybercrime is commonplace, what does that mean for nonprofits? How can we defend ourselves when we already struggle with basic security on tight budgets and limited resources?

Two high profile victims were Sony, the gigantic electronics and entertainment conglomerate, and BART, the Bay Area Rapid Transit commuter service.

  • Sony initially became a target for Anonymous after taking legal action against a computer geek named George Hotz, who figured out how to reprogram a PlayStation game device in order to play blocked third-party games on it. This violated the Sony license, but the hacking and gaming communities felt that the license restriction wasn't very fair in the first place, and they considered the action against Hotz unwarranted and severe. Sony also, famously, installed a hacker's rootkit, themselves, on a number of music CDs with interactive computer features, and was sued for that. Could it be that the hackers were particularly annoyed that this mega-corporation would stoop to their tactics, yet sue them for similar actions?
  • BART was targeted for more visceral reasons. Its internal police force shot Oscar Grant, an unarmed youth, in the back a few years ago and then, recently, fired on a homeless man holding a knife, killing him. These actions drew the attention of the community and resulted in protests, some violent. But BART only drew the attention of Anonymous when it took the step of blocking cell phone service at its four downtown San Francisco stations in order to quell communication about a planned protest. That action is under investigation by the FCC and has been decried by the ACLU; it was quite likely illegal. Then it was revealed that, at a press conference to discuss the protests, BART seeded the audience with proponents coached in what to ask and say.

Anonymous hacked a dozen or more Sony websites and three BART websites in protest/retaliation for what they consider to be corporate crime. Here's how easy it was for them: one of the Sony servers containing hundreds of thousands of user account records was running on an old, unpatched version of Apache with no encryption. The initial attack was accomplished with a hack (SQL injection) that is ridiculously easy to block (by updating to a current software version, in most cases). The administrator password to get into the BART police site was "admin123". The "hacker" who broke into that site reported that she'd never hacked a web site in her life; she just did a bit of googling and got right in.
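To see why SQL injection is so easy to block, compare the vulnerable pattern with the parameterized fix. This is a generic illustration in Python with an in-memory SQLite table; the table, names, and password are invented, and have nothing to do with the actual Sony breach:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

def lookup_vulnerable(name):
    # String concatenation: input like "' OR '1'='1" escapes the quoting
    # and turns the WHERE clause into a tautology that matches every row.
    query = "SELECT * FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

def lookup_safe(name):
    # Parameterized query: the driver treats the input strictly as data.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(lookup_vulnerable(payload))  # dumps every row in the table
print(lookup_safe(payload))        # matches nothing
```

The fix is one line. That's what makes the negligence so striking: the defense costs almost nothing and has been standard practice for years.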

These were corporate web sites, run by companies that take in vast amounts of consumer dollars every day, and they couldn't be bothered to do even the minimum amount of safeguarding of their customers' data. They might not be the criminals, but is it wild to suggest that they were criminally negligent? This isn't a matter of not having the money, resources or available expertise to protect our data. It's a matter of not taking the responsibility to protect it.

What can nonprofit organizations, that aren’t obsessed with bottom lines, do to avoid the problems that BART and Sony have faced?

  • First and foremost, we need to protect constituent data. If your NPO doesn't have the wherewithal to do that internally, then your online data should be hosted with companies that have strong commitments to the security and privacy of customer data.
  • Second, should breaches occur (and they do), your primary goal should be timely, open communication with the victims of the data breach. We're getting past the point where our constituents are naive about all of this (Sony has done a great job of prepping them for us). So your first response to exposed constituent data should be to tell the constituents exactly what was exposed.
  • One uncomfortable situation like this won't kill your credibility, but a history of bad or callous relationships will amplify the damage. This is one of the reasons why good social media policies are critical: the people who can support or sink you when something like a data breach occurs are on Twitter and Facebook, and they'll feed the media stream with support or slander, depending on how well you relate to them.
  • We promote causes online, but we admit faults there, too.  We don’t engage customers by lying to them, hiding things that impact them, or dictating the terms of our relationships with them.
  • Our supporters are people, and they have their motivations for supporting us (or not) and their ideas about how they should be doing it.  Their motivations and reasoning might be quite different from what we assume. Accordingly, we should be basing our assumptions — and campaigns — on the best feedback that we can coax out of them.  Long-held industry assumptions are suspect simply because they’re long-held, in a world where technology, and how we interact with it, is constantly changing.

 

If we ever needed reverse primers in how to manage constituent relationships, the Sony and BART fiascos are prime ones. They are victims of illegal and unethical behavior. But by viewing their customers and constituents as threats, with callous regard for the people who keep them in business in the first place, they've created a public relationship that did nothing to stem the attacks. Sony has put far more money and effort into attacking and dehumanizing its customers with lawsuits and invasive, annoying copyright protection schemes than into listening to, or trying to understand, the needs and desires of its constituents. BART has tried to block its ears so tightly against public criticism of its violent, shoot-first police force that it has crossed constitutional lines of conduct. We — nonprofits — know better. It's a two-way relationship, not a dictatorial one, that will serve as our most effective firewall.

Do Nonprofits Spam?

This article was first published on the Idealware Blog in March of 2011.

Supporters at the gates

NPTech maven Deborah Elizabeth Finn started a blog last week called "No Nonprofit Spam". As a well-known NPTech consultant, Deborah is far from alone in finding herself regularly subscribed to nonprofit email lists that she has never opted into. But, as opposed to just complaining about what is, by anyone's definition (except possibly the sender's), unsolicited commercial email, Deborah took the opportunity to try to educate. It's a controversial undertaking. Nobody likes spam. Many of us like nonprofits, and aren't going to hold them to the same level of criticism as we will that anonymous meds or mortgages dealer; and the measures that we take against the seamy spammers are pretty harsh. Even if nonprofits are guilty of the spamming crime, should they be subject to the same punishments?

Spam, like beauty, is in the eye of the beholder. So, for the purposes of this conversation, let’s agree on a definition of nonprofit spam. Sending one email to someone that you have identified as a potential constituent, either by engaging them in other media or purchasing their name from a list provider, is, at worst, borderline spam, and not something that I would join a campaign to complain about.  If I delete the message and don’t hear from the NPO again, no big deal.  But subscribing me to a recurring list without my express buy-in is what I consider spamming.  And that’s the focus of Deborah’s blog (which is naming names) and the action that goes from email engagement to email abuse, for the purposes of this post.

In my post to the No Nonprofit Spam website, I made the point that we’re all inundated with email and we can only support so many orgs, so NPOs would do better to build their web site and their Charity Navigator rating than to push their messages, uninvited, into our inboxes. It’s a matter of being respectful of constituent priorities.

There are two motivations for overdoing it on the emails. One is the mildly understandable, but not really forgivable, mistake of overenthusiasm for one's mission: believing that the work you do is so important that subscribing people who have expressed no interest to your list is warranted. That's a mistake of naivety more than anything else.

The less forgivable excuse is the typical spam calculation: no matter how many people you offend, enough people will click on it to justify the excess.  After all, it’s cost-justified by the response rate, right?

The downside in both cases is that, if you only count the constituents you gained, you're missing something of great importance to nonprofits and little import to viagra salesmen. The people you offended might have otherwise been supporters. The viagra spammer isn't going to pitch their product through other avenues; it's a low investment, so any yield is great gain. But you likely have people devoting their full hearts to your cause. You're in the business of building relationships, not burning them. And you will never know how many constituents you might have gained through more respectful avenues if you treat them callously with your email initiatives.

Worse, the standard ways that individuals deal with spam could be very damaging for an NPO. In the comments to my No Nonprofit Spam post, some people advocated doing more than just marking the messages as spam: reporting the offending orgs to Spamcop, who then list them with Spamhaus, the organization that maintains block lists of known spammers that large ISPs subscribe to. By overstepping the bounds of net courtesy, you could not only alienate individuals, but wreak havoc with your ability to reach people by email at all. My take is that reporting NPOs — even the ones who, by my above definition, spam — is unduly cruel to organizations that do good in the world. But I'm a nonprofit professional. Many of the people that we might be offending aren't going to be so sympathetic.
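For the technically curious, those block lists work over plain DNS: a mail server reverses the octets of a sender's IP address and looks the result up under the list's zone; an answer means "listed", a non-existent domain means "clean". A sketch in Python (the actual network lookup is left commented out, and whether any given address is listed is, of course, Spamhaus's call, not mine):

```python
import socket

def dnsbl_query_name(ip, zone="zen.spamhaus.org"):
    """Build the DNS name used to check an IPv4 address against a block list."""
    reversed_octets = ".".join(reversed(ip.split(".")))
    return f"{reversed_octets}.{zone}"

def is_listed(ip, zone="zen.spamhaus.org"):
    # An A-record answer means "listed"; NXDOMAIN (gaierror) means "not listed".
    try:
        socket.gethostbyname(dnsbl_query_name(ip, zone))
        return True
    except socket.gaierror:
        return False

# Hypothetical usage (requires network access):
# print(is_listed("127.0.0.2"))  # a conventional DNSBL test address
```

The mechanics matter to this discussion because they show how blunt the instrument is: once an org's sending IP lands on such a list, every subscribing ISP starts refusing its mail, invited and uninvited alike.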

So, what do you think? Is spam from a nonprofit any different from spam from a commercial vendor? Should nonprofits be held to the same level of accountability as viagra spammers? Are even single unsolicited emails spam, or are they permissible? I searched for some nonprofit-focused best practices before completing this article, and didn't come up with anything that differentiated our industry from the commercial ones, but I think there's a difference. Just as nonprofits are exempt from the Do Not Call lists, I think we deserve some exemptions in email. But I could be wrong, and what would serve us all well is a clear community policy on email engagement. Does anyone have any to recommend?

Cartoon borrowed from Rob Cottingham’s Noise To Signal collection.

Accidental Technology

This article was originally published on the Idealware Blog in February of 2011.

There’s been a ton of talk over at the NTEN Blog this month about Accidental Techies.  I had a few thoughts on the phenomenon.

If you don't know, Accidental Techie is an endearing and/or self-effacing term for someone who signed up for a clerical, administrative or other general-purpose position and wound up doing technical work. Many full-blown techies start their careers accidentally like this.

The NTEN discussion has run the gamut. Robert Weiner, a well-known NPTech consultant, started things rolling with "Going From Accidental Techie To Technology Leader", a piece that wonderfully explores the gaps between those who do the tech because nobody else will and those who have a seat at the planning table, providing good advice on how you get to that table.

David Geilhufe then jumped in from an entirely different perspective with “Professionalism in Nonprofit Technology: Should My Techies be Accidental?” — that of a software grant provider who has seen how difficult it is to deal with organizations that don’t have seasoned technology practitioners in place. While his piece wasn’t a screed against accidental techies (ATs), it threw a bit of cold water on any org that thinks that technology can be successful without professional input and planning.

Fellow Idealware blogger and nptech consultant Johanna Bates posted "A Rant About Accidental Techies". Her post, based in part on her own AT origins, is full of insight on how the "accidental" appellation can be a crutch. She also shines light on the sexual politics of accidental techieism (reflected, unsurprisingly, in NTEN's bloggers, two of whom are male non-ATs, and two of whom are female former ATs).

And Judi Sohn wrote "An Ode To The Accidental Techie", reflecting on her experience as one (as well as VP of her org!) and on the attributes that make Accidental Techies great.

I am not, and never was, an Accidental Techie, although my career path was very similar. I started doing tech work in a small law firm where my title was "Mailroom Supervisor" and my duties included everything from database maintenance to filing to reception. We had a part-time tech who had installed a five-node, token-ring IBM LAN that the legal secretaries, one attorney and I shared. When he quit, I was offered the Network Admin promotion and a hefty pay raise. The difference here is that, like a lot of ATs, I was in a clerical position and I had an aptitude for technology. But, unlike an AT — and this is my big point — I worked for people who anticipated the need for technology management and support.

There is nothing wrong with Accidental Techies; quite the contrary: they tend to be people who are sharp, versatile, and sensitive both to organizational needs and to opportunities to create organizational efficiencies. Most of all, they're generous with their knowledge and time. But there's something wrong if the technical work they do is unheralded and unpaid. It's wrong if it isn't in their titles and job descriptions. The circumstances that create accidental techies, instead of promoting people with those traits to tech positions, are routinely those where management doesn't have a clue as to how dependent on technology they actually are, or what resources they need to support it.

And you can bet that, in a business environment that creates the conditions for Accidental Techies to flourish, there's no technology plan. There's no CIO, IT Director, or person who sits on the planning and budget committee whose job is to properly fund and deploy computer and software systems. They're winging it with infrastructure that can make or break an organization. And they're extremely lucky to have proactive people on staff who do see the gap and are breaking their backs to fill it.

So the NTEN blog quartet is required reading for anyone who even suspects that they might be an Accidental Techie. Read Johanna's first, because she cuts to some core assessments about who you are and why you might be in this role. Read David's next, because it's harsh but true, and it illustrates well the dangers that your org is facing if it doesn't have proper IT oversight baked into its system. Read Judi's third, because she'll remind you that, despite the last two reads, it's still cool — and you're cool for being someone with heart and talent. And read Robert's last, because he'll tell you how to get from where you are to where you and your organization should be.

Delicious Memories

This article was originally published on the Idealware Blog in December of 2010.

Like many of my NPTECH peers, I was dismayed to learn yesterday that Delicious, the social bookmarking service, was being put out to pasture by Yahoo!, the big company that purchased the startup five years ago. Marshall Kirkpatrick of ReadWriteWeb has written the best memorial. But the demise of Delicious marks a passing of significant note to our community of nonprofit staff who seek innovative uses of technology. So let me talk quickly about how Delicious brought me into this community and, along the way, a bit about what it meant to all of us.

In 2002, I was wrapped up in my job as VP of Information Technology at San Francisco Goodwill. At that time, the buzz term was "Web 2.0", and it was all over the tech press with about a thousand definitions. We all knew that "Web 2.0" meant the evolution of the web from a straight publisher-to-consumer distribution medium to something more interactive, but nobody knew exactly what. Around that time, I started reading columns by Jon Udell about RSS, a technology that would, as a simple subset of XML, help us share web-based information the way that newspapers share syndicated content, such as comic strips and columns. I was really intrigued. The early adopters of RSS were bloggers, and what I think was very cool about this is that RSS was free technology that, like the web, advanced the opportunities of penniless mortals to become global publishers. People who couldn't tell an XML feed from an XL T-shirt were championing an open standard, because it served as the megaphone in front of their soapboxes.

I kept my eye out for innovative uses of RSS, and quickly discovered Joshua Schachter's del.icio.us website. This was a social bookmarking service where, by adding a little javascript link to your web browser's bookmark bar (or quick links, or whatever), you could quickly save any web page you enjoyed to an online repository for later retrieval. That repository was public, so others could see what you found valuable as well. And this is where Schachter was ahead of the curve, championing two information technology strategies that have, since that time, significantly changed the web: tagging and RSS.

Tagging

In addition to the link and a brief description, you could add keywords to each bookmark, and then later find related bookmarks by keyword. You could find just the bookmarks that you had tagged with a word, or you could find the bookmarks that anyone using Delicious had tagged with that word. So, if you were studying the Russian Revolution, you could search Delicious for russia+revolution and find every bookmark that anyone had saved under both tags. This was different from searching for the same terms in Google or Yahoo, because the results weren't just the most read; they were the sites that were meaningful enough to people to actually be saved. Delicious became, as Kirkpatrick points out, a mass-curated collection of valuable information, more like Wikipedia than, say, the Yahoo Directory. Delicious was the lending library of the web.
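The underlying data structure is simple enough to sketch: a map from each tag to the set of bookmarks carrying it, with a multi-tag search like russia+revolution implemented as a set intersection. A toy model in Python (the bookmarks and URLs are invented, and this is a guess at the idea, not Delicious's actual implementation):

```python
from collections import defaultdict

class BookmarkIndex:
    """A toy tag-to-bookmarks index in the spirit of Delicious."""

    def __init__(self):
        self.by_tag = defaultdict(set)  # tag -> set of URLs

    def save(self, url, tags):
        for tag in tags:
            self.by_tag[tag].add(url)

    def search(self, *tags):
        """Return bookmarks carrying every requested tag."""
        sets = [self.by_tag[t] for t in tags]
        return set.intersection(*sets) if sets else set()

index = BookmarkIndex()
index.save("http://example.org/1917", ["russia", "revolution", "history"])
index.save("http://example.org/tsars", ["russia", "history"])
print(index.search("russia", "revolution"))  # only the 1917 page
```

The human part, of course, was the curation: the index is trivial, but each entry in it represented a person deciding a page was worth keeping.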

RSS

In addition to searching the site for tags by keyword and/or user, any results your search turned up could be subscribed to via RSS. This was crazy powerful! Not only could you follow topics of interest but, using PHP add-ons like MagpieRSS or aggregation functions like those built into Drupal, Joomla, and pretty much any major Content Management System, you could quickly incorporate valuable, easily updated content into your website. I immediately replaced the static "Links" page on my website with one that grabbed items with a particular keyword from Delicious, so that updating that Links page was as easy as bookmarking a site that I wanted listed there.
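The Links-page trick amounts to very little code. Here's the gist sketched in Python rather than the MagpieRSS/PHP setup I actually used, with an invented sample feed standing in for a real Delicious tag feed (a live version would fetch the feed URL instead):

```python
import xml.etree.ElementTree as ET

# Invented stand-in for an RSS 2.0 feed of tagged bookmarks.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0"><channel>
<item><title>Ten Reasons Nonprofits Should Use RSS</title><link>http://example.org/rss-reasons</link></item>
<item><title>Tagging for NPOs</title><link>http://example.org/tagging</link></item>
</channel></rss>"""

def links_from_feed(feed_xml):
    """Turn an RSS 2.0 feed into (title, link) pairs for a Links page."""
    root = ET.fromstring(feed_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

def render_links(pairs):
    """Render the pairs as an HTML list-item fragment."""
    return "\n".join(f'<li><a href="{url}">{title}</a></li>'
                     for title, url in pairs)

print(render_links(links_from_feed(SAMPLE_FEED)))
```

Regenerate that fragment on each page load (or on a cache timer) and the Links page maintains itself; bookmarking a site with the right tag is the whole publishing workflow.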

NPTECH

I wasn't the only nonprofit strategist taking note of these developments. One day, while browsing items that Delicious termed Popular (e.g., bookmarks that multiple people had saved to the site), I noted a blog entry titled "The Ten Reasons Nonprofits Should Use RSS". The article was written by one Marnie Webb of CompuMentor (now better known as TechSoup, where she is one of the CEOs). A week or so later, while following the official email mailing list for Delicious, I encountered Marnie again and, this time, emailed her and suggested that we meet for lunch, based on our clearly common interest in nonprofits and RSS. Marnie told me about the NPTech Tagging Project, an effort she started by simply telling her friends to tag websites related to nonprofit technology with the tag "nptech" on Delicious, so that we could all subscribe to that tag in our RSS readers.

Marnie and I believe that what we started was the first mass information referral system of this type. In 2005 we took it up a level by creating the nptech.info website, which aggregates items tagged with nptech from Delicious, Twitter, Flickr and numerous other sources across the web. Nptech.info is now more widely read via its Twitter feed, @nptechinfo.

I think it’s safe to say that the nptech tagging project grew from a cool and useful idea and practice into a community, and a way that many of us identify who we are to the world.  I’m a lot of things, but nptechie sums most of them up into one simple word.  I know that many of you identify yourselves that way as well.

An offshoot of meeting Marnie on the Delicious mailing list was that she introduced me to NTEN and brought me into the broad community of nptech, and my current status as a blogger, writer, presenter, Idealware board member and happy member of this community ties directly back to the Delicious website. I stopped using the site as a bookmarking service some time ago, as efforts that it inspired (like Google Reader sharing) became more convenient. But I still subscribe to Delicious feeds and use it in websites. Its demise will likely be the end of nptech.info. Efforts are underway to save it, so we'll see. But even if this article is the first you've heard of Delicious, it's important to know that it played a role in the evolution of nonprofit technology as the arbiter of all things nptech. Its ingenuity and utility will be sorely missed.

Tech Tips From The Nonprofit Technology Conference

This article was first published on the Idealware Blog in May of 2010.

Last month, I reported on the first annual Tech Track, a series of sessions presented at the April, 2010 Nonprofit Technology Conference. In that post I listed the topics covered in the five session track. Today I want to discuss some of the answers that the group came up with.

Session 1: Working Without a Wire

This session covered wireless technologies, from cell phones to laptops. Some conclusions:

  • The state of wireless is still not 100%, but it's better than it was last year and it's still improving. Major metropolitan areas are well covered; remote areas (like Wyoming) are not. There are alternatives, such as satellite, but that still requires that your location be in unobstructed satellite range. All in all, we can't assume that wireless access is a given, and the challenge is more about managing staff expectations than installing all of the wireless ourselves. It will get there.
  • Wireless security options are improving. Virtual Private Networks (VPNs) and remote access solutions (such as Citrix, VNC and Terminal Services) are being provided for more devices and platforms, and the major smartphone companies are supporting enterprise features like remote device wipes.
  • Policy-wise, more orgs are moving to a model where staff buy their own smartphones and the companies reimburse a portion of the bill to cover business use. Some companies set strict password policies for accessing office content; others don't.

Session 2: Proper Plumbing

This session was pitched as covering virtualization and other server room technologies, but when we quizzed the participants, virtualization was at the top of their list, so that’s what we focused on.

  • We established that virtualizing servers is a recommended practice. If a consultant recommends it and you don't trust the recommendation, the problem is the trust, not the advice: the recommendation is a good one, so find a consultant you do trust and have them virtualize your systems.
  • The benefits of virtualization are numerous: reduced budgets, reduced carbon footprints, instant testing environments, and 24/7 availability (if you can upgrade a copy of a server and then swap it back in live, an advanced virtualization function).
  • There's no need to rush it. It's easier on the budget and the staff, as well as the environment, to replace standalone servers with virtualized ones as the hardware fails.
  • On the planning side, bigger networks do better by moving all of their data to a Storage Area Network (SAN) before virtualizing. This allows for even more flexibility and reduced costs, as servers become strictly operating systems with software, and data is stored on fast, redundant disk arrays that can be accessed by any server, virtual or otherwise.

Session 3: Earth to Cloud

The cloud computing session focused a lot on comparisons. While the general concern is that hosting data with a third party is risky, is it any more risky than hosting it on our own systems? Which approach is more expensive? Which affords the most freedom to work with our data and integrate systems? How do we manage disaster recovery and business continuity in each scenario?

Security – Everyone is hackable, and Google and Salesforce have a lot more expertise in securing data systems than we do. So, from a “is your data safe?” perspective, it’s at least a wash. But if you have sensitive client data that needs to be protected from subpoenas, as well as or more than from hackers, then you might be safer hosting your own systems.
Cost – We had no final answers; it will vary from vendor to vendor. But the cost calculation needs to figure in more than dollars spent — staff time managing systems is another big expense of technology.
Integration and Data Management – Systems don’t have to be in the same room to be integrated; they have to have robust APIs. And internal systems can be just as locked down as external ones if your contract with the vendor doesn’t give you full access and control over your data. This, again, was a wash.
Risk Management – There’s a definite risk involved if your outsourced host goes out of business. But there are advantages to being hosted, as many providers offer multiply-redundant systems. Google, in particular, writes every save of a Google Doc or Gmail message to two separate server farms on two different continents.
It all boils down to assessing the maturity of the vendors and negotiating contracts carefully, to cover all of the risks. Don’t sign up with the guy who hosts his servers from his basement; and have a detailed continuity plan in place should the vendor close up shop.
If you’re a small org (15 staff or fewer), it’s almost a no-brainer that it will be more cost-effective and safer to host your email and data in the cloud than to run your own complex CRMs and Exchange servers. If you’re a large org, it might be much more complex, as larger enterprise apps sometimes depend on that Exchange server being in place. But, all in all, cloud computing is a viable option that might be a good fit for you; check it out thoroughly.

I’ll finish this thread up with one more post on budgeting and change management in the next few weeks.

How Google Can Kick Facebook’s Butt

This article was first published on the Idealware Blog in May of 2010.


(XKCD Cartoon by Randall Munroe)

Facebook really annoyed a lot of people with their recent, heavy-handed moves. You can read about this all over the place; here are some good links about what they’ve done, what you should do, and why it bothers some of us:

Facebook’s Announcement (from their Blog)

Understanding the Open Graph from Chris Messina

Mark Zuckerberg’s claim that internet privacy is “over” from Marshall Kirkpatrick at ReadWriteWeb

Three Ways Facebook Will Dramatically Change Your Nonprofit (from John Hayden)

Why I Don’t “Like” Facebook and Void Rage: Unable To Muster Facebook Anger from Techcafeteria

Why You Shouldn’t Delete Your Facebook Account by Janet Fouts

Facebook and “Radical Transparency” (A Rant) by Danah Boyd

Long story short, though, Facebook wants us all to open up, and they want the web to be a place where you do things and report back to Facebook about them. My take on this is that I’m in favor of an open web that offers a rich, social experience with lots of referred information. I don’t consider Facebook an acceptable platform or steward of that function.

Why Google?

As my colleague Johanna pointed out, there’s already an effort underway to develop a purely open alternative to Facebook. The Diaspora project has received significant funding and seems to be run by some very thoughtful, intelligent people. But I look at this as a kind of David and Goliath proposition, with the rider that this Goliath won’t even blink if David hurls a rock at him. If someone is going to displace Facebook, it’s not likely going to be a tiny startup with a couple hundred thousand dollars. It’s going to be Google.

You might ask me, isn’t this just trading one corporate overseer for another? And the answer is yes. But Google’s guiding principle is “Don’t be Evil“. Facebook’s, apparently, is “milk your users for every penny their personal data can net you“. If someone’s going to capitalize on my interactions with friends, family and the world, I’d rather it be the corporation that has demonstrated some ethics in their business decisions than the one that has all but blatantly said that they don’t care about their users.

Supplementing Buzz

So, how can Google play Indiana Jones to the rolling boulder that is Facebook? Not by just pushing Buzz.  I’ll get to Buzz in a minute, because I’m a fanboy of the platform.  But Buzz alone isn’t a Facebook killer, and Google won’t have a foothold unless they take a couple of their afterthought properties and push them front and center.

Big Google Product: GMail. Afterthought that supports it: Contacts.

Google needs to do some heavy re-imagining of their contact management app if they want to gain a foothold against Facebook. Facebook’s contact management is simple and elegant; Google’s looks like a web app that I might have developed. They need to get some of the good UI people lurking among the geeks to do an overhaul, stat, adding features like social media site integration (à la Rapportive or Gist) and more ajaxy, seamless ways to create and manage people and groups.

Big Google Product: Buzz. Afterthought that supports it: Google Profiles.

Social networking is all about the profile; why doesn’t Google get that? Buzz isn’t the home page; the profile is, and what Google has provided for us is cute, simplistic, and far too limited to meet our needs. The customization options for the current profile are minimal, and the whole thing just feels lazy on Google’s part, as if they spent a half hour designing it and then dumped it on us.

Why Buzz Rocks

I’ve written about Buzz before; more to this point on my other blog.  Google Buzz supports about 90% of the basic features of a full-fledged blogging platform like WordPress or Blogger:

  • I can write a post with images.
  • Commenting, with some commenting moderation, is in place.
  • You can subscribe to my Buzz feed as an individual RSS feed, or just visit it on my profile.
  • But, unlike this blog, my Buzz posts are also subscribable in the Buzz news feed interface, like Twitter or Facebook, making it all the richer in terms of how people can reply and interact.  That’s pretty powerful.
  • Buzz supports groups (via Contacts) and private posts.
  • Google just announced (like, yesterday) an API that will allow people to develop apps that interact with and run on the Buzz platform.
  • And, of course, Buzz integrates right into my email, keeping it front and center, and convenient.

Tying It All Together

Google could make this a powerful alternative to Facebook by doing a few simple things:

  • Almost everyone I know who gave Buzz a try instantly ported in their Twitter feed and then forgot about it, leaving those of us who like Buzz to sift through all of that stuff that, hey, we’ve already read, because we haven’t left Twitter. So, Google should lose the universal feed feature. Keep it about the value of the conversation, not the volume level.
  • But keep the Google Reader integration, along with link, picture and video posts.  A good blog comments on other web content, not other web feeds, and the integration of Google Reader as a content source works.  One reason it works is because you can post the Google Reader items with comments.
  • Make the profile page more configurable and dynamic, allowing users to add tabs and link them to RSS sources, much the way we add content to the sidebars of our blogs.  This is how my twitter feed should be integrated, not interspersed with my Buzz posts.
  • Make Contacts a tab on the profile page.
  • Add theming to the profile page.  Emulate the Blogger theming options.
  • I own a domain with my name on it, and I would point that domain to my profile page and make Buzz my blog if I had the ability to make that profile a page that I could call my own.

Conclusion

As much as I’d appreciate an open web, not a corporate-owned one, I’m just not idealistic enough to believe that it’s still a possibility. If I have a choice of corporate overlords, I want the one that open-sources most of their software; maintains high ethical standards for how their ads are displayed; has a track record of corporate philanthropy; and is relatively respectful of the fact that my friends and information belong to me. That’s not Facebook. Please do weigh in on whether I’m too cynical or too trusting of the alternative, because this is an important topic. The future of the web depends on who we trust to steward our interactions.

What’s Up With The TechSoup Global/GuideStar International Merger?

This article was first published on the Idealware Blog in April of 2010.

TechSoup/GuideStar Int'l Logos

TechSoup Global (TSG) merged with GuideStar International (GSI) last week. Idealware readers are likely well familiar with TechSoup, formerly CompuMentor, a nonprofit that supports other nonprofits, most notably through their TechSoup Stock software (and hardware) donation program, but also via countless projects and initiatives over the last 24 years. GuideStar International is an organization based in London that also works to support nonprofits by reporting on their efforts and promoting their missions to potential donors and supporters.

I spoke with Rebecca Masisak and Marnie Webb, two of the three CEOs of TechSoup Global (Daniel Ben-Horin is the founder and third CEO), in hopes of making this merger easier for all of us to understand. What I walked away with was not only context for the merger, but also a greater understanding of TechSoup’s expanded mission.

Which GuideStar was that?

One of the confusing things about the merger is that, if you digested the news quickly, you might be under the impression that TechSoup is merging with the GuideStar that we in the U.S. are well acquainted with. That isn’t the case. GuideStar International is a completely separate entity from GuideStar US, but with some mutual characteristics:

  • Both organizations were originally founded by Buzz Schmidt, the current President of GuideStar International;
  • They share a name and some agreements as to branding;
  • They both report on the efforts of charitable organizations, commonly referred to as nonprofits (NPOs) in the U.S., Civil Society Organizations (CSOs) in the U.K., or Non-Governmental Organizations (NGOs) across the world.

Will this merger change the mission of TechSoup?

TechSoup Global’s mission is working toward a time when every nonprofit and NGO on the planet has the technology resources and knowledge they need to operate at their full potential.

GuideStar International seeks to illuminate the work of every civil society organisation (CSO) in the world.

Per Rebecca, TechSoup’s mission has been evolving internally for some time. The recent name change from TechSoup to TechSoup Global is a clear indicator of their ambition to expand their effectiveness beyond the U.S. borders, and efforts like NGOSource, which helps U.S. Foundations identify worthy organizations across the globe to fund, show a broadening of their traditional model of coordinating corporate donors with nonprofits.

Unlikely Alliances

TechSoup opened their Fundacja TechSoup office in Warsaw, Poland two years ago, in order to better support their European partners and the NGOs there. They currently work with 32 partners outside of the United States. The incorporation of GSI’s London headquarters strengthens their European base of operations, as well as their ties to CSOs, as both TechSoup and GSI have many established relationships. GSI maintains an extensive database, and TechSoup sees great potential in merging their strength as builders of relationships between entities both inside and outside of the nonprofit community with a comprehensive database of organizations and missions.

This will allow them, as Rebecca puts it, to leverage an “unlikely alliance” of partners from the nonprofit/non-governmental groups, corporate world, funders and donors, and collaborative partners (such as Idealware) to educate and provide resources to worthwhile organizations.

Repeatable Practices

After Rebecca provided this context of TSG’s mission and GSI’s suitability as an integrated partner, Marnie unleashed the real potential payload. The goal, right in line with TSG’s mission, is to assist CSOs across the globe in the task of mastering technology in service to their missions. But it’s also to take the practices that work and recreate them. With a knowledge base of organizations and technology strategies, TechSoup is looking to grow external support for the organizations they serve by increasing and reporting on their effectiveness. Identify the organizations, get them resources, and expose what works.

All in all, I’m inspired by TSG’s expanded and ambitious goals, and look forward to seeing the great things that are likely to come out of this merger.

Adventures In Web Site Migration

This post was first published on the Idealware Blog in April of 2010.

I recently took on the project of migrating the Idealware articles and blog from their old homes on Idealware’s prior web site and Google’s Blogger service to our shiny, new, Drupal-based home. This was an interesting data-migration challenge. The Idealware articles were static HTML web pages that needed to be put in Drupal’s content database. And there is no utility that imports Blogger blogs to Drupal. Both projects required research and creativity.

The first step in any data migration project is to determine whether automating the task will be more work than just doing it by hand. Idealware has about 220 published articles; cutting and pasting the text into Drupal, and then cleaning up the formatting, would be a grueling project for someone. On the other hand, automating the process was not a slam dunk. Database data is easier to write conversion processes for than free-form text. HTML is somewhere in the middle, with tags that identify sections, but lots of free-form data as well.

Converting HTML Articles with Regular Expressions

My toolkit of choice for this project was Sed, the Unix stream editor, and a generic installation of Drupal. Sed does regular-expression searching and replacing. So I wrote a script that:

  1. deleted lines with HTML tags that we didn’t need;
  2. stored the data between title and body tags;
  3. and converted those items to SQL code that would insert the title and article text into my Drupal database.

This was the best I could do: other structured information, such as author and publishing date, was not standardized in the text, so I left extracting those to a clean-up phase that the Idealware staff took on. The project was a success, in that it took less than two days to complete the conversion; it was never going to be an easy one.

Without going too deep into sed syntax, the command to delete, say, a “META” tag is:

/\<meta/d

That says to search for a literal “less than” bracket (the backslash marks it as literal) and the text meta, and to delete any line that contains that match. A tricky part of the cleanup was making sure that my search phrases weren’t ones that might also match article text.

Once I’d stripped the file down to just the data between the “title” and “body” tags, I issued this command:

s/<title>\(.*\)<\/title>.*<body>\(.*\)<\/body>/insert into articles (title, body) values ('\1', '\2');/

This searches for the text between HTML “title” tags, capturing it as group 1, then the text between “body” tags, capturing it as group 2, and substitutes the captured data into a simple SQL insert statement in the replacement string. Iterating a script with all of the clean-up commands, culminating in that last command, gave me a text file that could be imported into the Drupal database. The remaining cleanup was done in Drupal’s WYSIWYG interface.
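The whole pipeline can be sketched as a short shell script. This is an illustrative reconstruction, not the actual Idealware script: the file names, the sample HTML, and the `articles (title, body)` table are made up, and it flattens each file to one line first rather than working in multiple passes.

```shell
# Create a toy article file standing in for one of the static HTML pages.
cat > article.html <<'EOF'
<html><head>
<meta charset="utf-8">
<title>Sample Article</title>
</head><body>Article text here.</body></html>
EOF

# Flatten the file to a single line, strip the unwanted meta tag, then
# rewrite the title and body captures into a SQL insert statement.
tr -d '\n' < article.html \
  | sed 's/<meta[^>]*>//' \
  | sed "s/.*<title>\(.*\)<\/title>.*<body>\(.*\)<\/body>.*/insert into articles (title, body) values ('\1', '\2');/" \
  > import.sql

cat import.sql
# → insert into articles (title, body) values ('Sample Article', 'Article text here.');
```

Running the resulting import.sql against the Drupal database (after the usual quoting clean-up) loads the titles and article bodies in one pass.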

Blog Conversion

As I said, there is no such thing as a program or module that converts a Blogger Blog into Drupal format. And our circumstance was further complicated by the fact that the Idealware Blog was in Blogger’s legacy “FTP” format, so the conversion options available were further limited.

There is an excellent module for converting WordPress blogs to Drupal, and there were options for converting a legacy Blogger blog to WordPress. So, then the question was, how well will the blog survive a double conversion? The answer was: very well! I challenge any of you to identify the one post that didn’t come through with every word and picture intact.

I had a good start for this: Matthew Saunders at the Nonprofits and Web 2.0 Blog posted this excellent guide. If you have a current Blogger blog to migrate, every step there will work. My problem was that the Idealware blog was in the old “FTP” format. Google has announced that blogs in their original publishing format must be converted by May 1st. While this fact had little or no relationship to the web site move to Drupal, it’s convenient that we made the move well in advance of that deadline.

To prep, I installed current, vanilla copies of WordPress and Drupal at techcafeteria.com. I tracked down Google’s free blog converters. While there is no WP to Drupal converter, most other formats are covered, and I just used their web-based Blogger to WordPress tool to convert the exported Idealware blog to WP format. The conversion process prompted me to create accounts for each author.

To get from WordPress to Drupal, I installed the above-mentioned WordPress-import module. As with the first import, this one also prompted me to create the authors’ Drupal accounts. It also had an option to store all images locally (which required rights to create a publicly writable folder on the Drupal server). Again, this worked very well.

With my test completed, I set about doing it all over again on the new Idealware blog. Here I had a little less flexibility: I had administrative rights in Drupal, but I didn’t have access to the server. There were two challenges. First, the server’s file upload limit (set in both Drupal and PHP’s initialization file) was smaller than my WordPress import file; I got around this by importing one blogger at a time, making sure to include all current and former Idealware bloggers. The second issue was creating a folder for the images, which I asked our host and designer at Digital Loom to do for me.
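For reference, the PHP side of that upload limit usually comes down to two php.ini directives (the values below are illustrative, not the ones on Idealware’s server; Drupal can further cap uploads below whatever PHP allows):

```ini
; php.ini — both values must be at least as large as the import file
upload_max_filesize = 2M   ; per-file upload cap
post_max_size = 8M         ; total POST body cap; must be >= upload_max_filesize
```

Without server access, you can’t raise these, which is why splitting the import into smaller files is the practical workaround.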

Cleanup!

The final challenge was even stickier: the posts came across, but the URLs were in a different format than the old Blogger URLs. This was a problem for the articles as well. How many sites out there do you think link to Idealware content? For this, I begged for enough server access to write and run a PHP script that renamed the current URLs to their former names, a half-successful effort, as Drupal had dramatically renamed a bunch of them. The remainder we altered manually.

All told, it took about two hours of research and three or four hours of conversion (over a number of days), plus more for the clean-up, as I wasted a lot of time trying to come up with a pure SQL command to do the URL renaming, only to eventually determine that it couldn’t be done without some scripting. It was a fun project, though, and I’d call it a success.

I hope this helps you out if you ever find yourself faced with a similar challenge.