
Security and Privacy in a Web 2.0 World

This post originally appeared on the Idealware Blog in November of 2009.
A Tweet from Beth

Yes, we do Twitter requests!

To break down that tweet a bit, kanter is the well-known Beth Kanter of Beth’s blog. pearlbear is former Idealware blogger and current contributor Michelle Murrain, and Beth asked us, in the referenced blog post, to dive a bit into internet security and how it contrasts with internet privacy concerns. Michelle’s response offers excellent and concise definitions of security and privacy as they apply to the web, and then sums up with a key distinction: security is a set of tools for protecting systems and information. The sensitivity of that data (and the need for privacy) is a matter of policy. So the next question is: once you have your security systems and policies in place, what happens when the policies are breached?

Craft a Policy that Minimizes Violations

Social media is casual media. The Web 2.0 approach is to present a true face to the world, one that interacts with the public and allows for individuals, with individual tastes and opinions, to share organizational information online. So a strict rule book and mandated wording for your talking points are not going to work.

Your online constituents expect your staff to have a shared understanding of your organization’s mission and objectives. But they also expect the CEO, the Marketing Assistant and the volunteer Receptionists to have real names (and real pictures on their profiles); their own online voices; and interests they share that go beyond the corporate script. Straying too far from the message can be a problem, but so can clinging too closely to the prepared scripts. The tone that works is that of a human being sharing their commitment and excitement about the work that they (and you) do.

Expect that the message will reflect individual interpretations and biases. Manage the messaging to the key points, and make clear the areas that shouldn’t be discussed in public. Monitor the discussion, and proactively mentor (as opposed to chastising) staff who stray in ways that violate the policy, or seem capable of doing so.

The Case for Transparency

Transparency assumes that multiple voices are being heard, that honest opinions are being shared, and that organizations aren’t sweeping the negative issues under the virtual rug. Admittedly, it’s a scary idea that your staff, your constituents, and your clients should all be free to represent you. The best practice of corporate communications, for many years, was to run all messaging through Marketing/Communications experts and tightly control what was said. I see two big reasons for doing otherwise:

  • We no longer have a controlled media.

Controlled messaging worked when opening your own TV or Radio Station was prohibitively expensive. Today, YouTube, Yelp and Video Blogs are TV Stations. Twitter and Facebook Status are radio stations. The investment cost to speak your mind to a public audience has just about vanished.

  • We make more mistakes by under-communicating than we do by over-communicating.

Is the importance of hiding something worth the cost of looking like you have something to hide? At the peak of the dot com boom, I hired someone onto my staff at about $10k more (annually) than current staff in similar roles were making. An HR clerk accidentally sent the offer letter to my entire staff. The fallout was that I had meaningful talks about compensation with each of my staff; made them aware that they were getting market rate (or better) in a rapidly changing market, and that we were keeping pace on anniversary dates. Prior to the breach, a few of my staff had been wrongly convinced that they were underpaid in their positions. The incident only strengthened the trust between us.

The Good, the Bad, and the Messenger

Your blog should allow comments, and, short of spam, personal attacks and incivility, shouldn’t be censored. A few years ago, a former employee of my (former) org managed to register the .com extension of our domain name and put up a web site criticizing us. While the site didn’t get a lot of hits, he did manage to find other departed staff with axes to grind, and his online forum was about a 50-50 mix of people trashing us and people defending us. After about a month, he went in and deleted the half of the forum messages that spoke up for our organization, leaving the now one-sided, negative conversation intact. And that was the end of his forum; nobody ever posted there again.

There were some interesting lessons here for us. He had a lot of inside knowledge that he shared, with no concern for or allegiance to our policy. And he was motivated and well-resourced to use the web to attack us. But, in the end, we didn’t see any negative impact on our organization. The truth was, it was easy to separate his bias from his “inside scoops”, and hard to paint us in a very negative light, because the skeletons that he let out of our closet were a lot like anybody else’s.

What this suggests is that a message is weighed along with its messenger. Good and bad tweets and blog posts about your organization will be judged by the position and credibility of the tweeter or blogger.

Transparency and Constituent Data Breaches

Two years ago, a number of nonprofits were faced with a difficult decision when a popular hosted eCRM service was compromised, and account information for donors was stolen by one or more hackers. Thankfully, this wasn’t credit card information, but it included login details, and I’m sure that we all know people who use the same password for their online giving as they do for other web sites, such as, perhaps, their online banking. This was a serious breach, and there was a certain amount of disclosure from the nonprofits to their constituents that was mandated.
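An aside on why stolen login details matter so much: the damage from a database theft depends partly on how a service stores passwords. The sketch below is purely illustrative and says nothing about how the compromised vendor actually stored credentials; it shows the standard salted-hash approach, which limits what a thief can recover even when the account table leaks:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a salted hash. Only the salt and digest are stored,
    never the password itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the hash from the candidate password and compare in
    constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong guess", salt, digest)
```

If passwords are stored this way, an attacker who steals the table still has to brute-force each hash; if they are stored in plaintext, every reused password is immediately compromised on every other site.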

Strident voices in the community called for full disclosure, urging affected nonprofits to put a warning on the home page of their web sites. Many of the organizations settled for alerting every potentially compromised donor via phone and/or email, determining that their unaffected constituents might not understand how the breach happened or what the risks were, and would simply take a home page warning as a suggestion not to donate online.

To frame this as a black and white issue, demanding that it be treated with no discretion, is extreme. The seriousness of the threat that resulted from this particular breach was not simple to quantify or explain. So it boils down to a number of factors:

  • Scope: If all or most of your supporters are at risk, or the number at risk is in the six-figure range, it’s probably more responsible, in the name of protecting them, to broadcast the alert widely. If, as in the case above, those impacted are only the ones who donate online, then the number probably doesn’t come close to warranting broad disclosure, as even the strident voices pointed out.
  • Risk: Will your constituents understand that the notice is informational, and not an admission of guilt or irresponsibility in handling their sensitive data? Alternatively, if this becomes public knowledge, would your lack of transparency look like an admission of guilt? You should be comfortable with your decision, and able to explain it.
  • Consistency: Some nonprofits have more responsibility to model transparency than others. If the Sunlight Foundation were one of the organizations impacted, it’s a no-brainer. The Salvation Army? Transparency isn’t referenced on their “Positions” page.
  • Courtesy: Some constituencies are more savvy about this type of thing than others. If the affected constituents have all been notified, and they represent a small portion of the donor base, it’s questionable whether scaring your supporters in the name of openness is really warranted.

Since alternate exposure, in the press or community, is likely to occur, the priority is to have a consistent policy about how and when you broadcast information about security breaches. Denying in any public forum that something has happened would be irresponsible and unethical, and would most likely come right back at you. Not being able to explain why you chose not to publicize it on your website could also have damaging consequences. Erring on the side of alerting and protecting those impacted by security breaches is the better way to go, but the final choice has to weigh all of the risks and factors.

Conclusion

All of my examples assume you’re doing the right things. You have justifiable reasons for doing things that might be considered provocative. Your overall efforts are mission-focused. And the reasons for privacy regarding certain information are that it needs to be private (client medical records, for example); it supports your mission-based objectives by being private, and/or it respects the privacy of people close to the information.

No matter how well we protect our data, the walls are much thinner than they used to be. Any unfortunate tweet can “go viral”. We can’t put a lock on our information that will truly secure it. So it’s important to manage communications with an understanding that information will be shared. Protect your overall reputation, and don’t sweat the minor slips that reveal, mostly, that you’re not a paragon of perfection but a group of human beings, struggling to make a difference under the usual conditions.

SaaS and Security

This post was originally published on the Idealware Blog in May of 2009.

My esteemed colleague Michelle Murrain lobbed the first volley in our debate over whether it’s safer to host all of your data at home or to trust a third party with it. The debate is focused on Software as a Service (SaaS) as a computing option for small to mid-sized nonprofits with little internal IT expertise. This would be a lot more fun if Michelle were dead set against the SaaS concept, and if I were telling you to damn the torpedoes and go full speed ahead with it. But we’re all about rational analysis here at Idealware, so, while I’m a SaaS advocate and Michelle urges caution, there’s plenty of give and take on both sides.

Michelle makes a lot of sound points, focusing on the very apt one that a lack of organizational technology expertise will be just as risky a thing in an outsourced arrangement as it is in-house. But I only partially agree.

  • Security: Certainly, bad security procedures are bad security procedures, and that risk exists in both environments. But beyond the things that could be addressed by IT-informed policies, there are also the security precautions that require money to invest in and staff to support, like encryption and firewalls. I reject the argument that the data is safer on an unsecured, internal network than it is in a properly secured, PCI-Compliant, hosted environment. You’re not just paying the SaaS provider to manage the servers that you manage today; you’re paying them to do a more thorough and compliant job at it.
  • Backups: Many tiny nonprofits don’t have reliable backup in place; a suitable SaaS provider will have that covered. While you will also want them to provide local backups (either via scheduled download or regular shipment of DVDs), even without that, it’s conceivable that the hosted situation will provide you with better redundancy than your own efforts.
  • Data Access: Finally, data access is key, but I’ve seen many cases where vendor licensing restricts users from working with their own data on a locally installed server. Being able to access your data, report on it, back it up, and, if you choose, globally update it is the ground floor that you negotiate to for any data management system, be it hosted or not. To counter Michelle, resource-strapped orgs might be better off with a hosted system that comes with data management services than an internal one that requires advanced SQL training to work with.
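The local-backup point above can be sketched concretely. Assuming a vendor that offers some scheduled export mechanism (the function name, file layout, and CSV payload below are all hypothetical), a nightly job might save each download alongside a checksum, so you can verify the integrity of your copies independently of the vendor:

```python
import datetime
import hashlib
import pathlib

def save_nightly_export(data: bytes, backup_dir: str = "backups") -> pathlib.Path:
    """Write a dated copy of a SaaS data export, plus a SHA-256 checksum
    file that lets you verify the backup later without re-downloading."""
    folder = pathlib.Path(backup_dir)
    folder.mkdir(exist_ok=True)
    stamp = datetime.date.today().isoformat()
    path = folder / f"export-{stamp}.csv"
    path.write_bytes(data)
    # Record the checksum alongside the file for later integrity checks.
    digest = hashlib.sha256(data).hexdigest()
    (folder / f"export-{stamp}.sha256").write_text(digest)
    return path

# In practice `data` would come from the vendor's export API or a scheduled
# report download; here we use a stand-in payload.
saved = save_nightly_export(b"donor_id,amount\n1,50\n")
```

The point isn’t this particular script; it’s that “local backups” from a hosted system can be automated and verified with very little tooling, which is exactly the kind of redundancy a resource-strapped org should negotiate for.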

Where we might really not see eye to eye is in our perception of how “at risk” these small nonprofits are. I look at things like increasing governmental and industry regulation of internal security around credit cards and donor information as a time bomb for many small orgs, who might soon find themselves facing exorbitant fines or criminal charges for being your typical nonprofit: managing their infrastructure on a shoestring and, by necessity, skimping on some of the best practices. It’s simple: the more we invest in administration, the worse we look in our GuideStar ratings. In that scenario, outsourcing this expertise is a more affordable and reliable option than trying to staff for it, or, worse, hoping we don’t get caught.

But one point of Michelle’s that I absolutely agree with is that IT-starved nonprofits lack the internal expertise to properly assess hosting environments. In any outsourcing arrangement, the vendors have to be thoroughly vetted, with complete assurances about your access to data, their ability to protect it, and their plans for your data if their business goes under. Just as you wouldn’t delegate your credit card processing needs to some kid in a basement, you can’t trust your critical systems to some startup with no assurance of next year’s funding. So this is where you make the right investments, avail yourself of the type of information that Idealware provides, and hire a consultant.

To me, there are two types of risk: the type you take, and the type you foster by assuming that your current practices will suffice in an ever-changing world (more on this next week). Make no mistake, SaaS is a risky enterprise. But managing your own technology without tech-savvy staff on hand is worse than taking a risk – it’s setting yourself up for disaster. While there are numerous ways to mitigate that, none of them are dollar- or risk-free, and SaaS could prove to be a real bang-for-your-buck alternative in the right circumstances.