
Security and Privacy in a Web 2.0 World

This post originally appeared on the Idealware Blog in November of 2009.
A Tweet from Beth

Yes, we do Twitter requests!

To break down that tweet a bit, kanter is the well-known Beth Kanter of Beth’s blog. pearlbear is former Idealware blogger and current contributor Michelle Murrain, and Beth asked us, in the referenced blog post, to dive a bit into internet security and how it contrasts with internet privacy concerns. Michelle’s response offers excellent and concise definitions of security and privacy as they apply to the web, and then sums up with a key distinction: security is a set of tools for protecting systems and information. The sensitivity of that data (and the need for privacy) is a matter of policy. So the next question is: once you have your security systems and policies in place, what happens when the policies are breached?

Craft a Policy that Minimizes Violations

Social media is casual media. The Web 2.0 approach is to present a true face to the world, one that interacts with the public and allows for individuals, with individual tastes and opinions, to share organizational information online. So a strict rule book and mandated wording for your talking points are not going to work.

Your online constituents expect your staff to have a shared understanding of your organization’s mission and objectives. But they also expect the CEO, the Marketing Assistant, and the volunteer Receptionist to have real names (and real pictures on their profiles); their own online voices; and interests they share that go beyond the corporate script. Straying too far from the message can be a problem, but so can clinging too tightly to the prepared scripts. The tone that works is that of a human being sharing their commitment and excitement about the work that they (and you) do.

Expect that the message will reflect individual interpretations and biases. Manage the messaging to the key points, and make clear the areas that shouldn’t be discussed in public. Monitor the discussion, and proactively mentor (as opposed to chastising) staff who stray in ways that violate the policy, or seem capable of doing so.

The Case for Transparency

Transparency assumes that multiple voices are being heard; that honest opinions are being shared, and that organizations aren’t sweeping the negative issues under the virtual rug. Admittedly, it’s a scary idea that your staff, your constituents, and your clients should all be free to represent you. The best practice of corporate communications, for many years, was to run all messaging through Marketing/Communications experts and tightly control what was said. I see two big reasons for doing otherwise:

  • We no longer have a controlled media.

Controlled messaging worked when opening your own TV or radio station was prohibitively expensive. Today, YouTube, Yelp, and video blogs are TV stations. Twitter and Facebook status updates are radio stations. The investment cost to speak your mind to a public audience has just about vanished.

  • We make more mistakes by under-communicating than we do by over-communicating.

Is the importance of hiding something worth the cost of looking like you have something to hide? At the peak of the dot-com boom, I hired someone onto my staff at about $10k more (annually) than current staff in similar roles were making. An HR clerk accidentally sent the offer letter to my entire staff. The fallout was that I had meaningful talks about compensation with each of my staff; made them aware that they were being paid at market rates (or better) in a rapidly changing field, and that we were keeping pace on anniversary dates. Prior to the breach, a few of my staff had been wrongly convinced that they were underpaid in their positions. The incident only strengthened the trust between us.

The Good, the Bad, and the Messenger

Your blog should allow comments, and — short of spam, personal attacks and incivility — shouldn’t be censored. A few years ago, a former employee of my (former) org managed to register the .com extension of our domain name and put up a web site criticizing us. While the site didn’t get a lot of hits, he did manage to find other departed staff with axes to grind, and his online forum was about a 50-50 mix of people trashing us and others defending. After about a month, he went in and deleted the 50% of forum messages that spoke up for our organization, leaving the now one-sided, negative conversation intact. And that was the end of his forum; nobody ever posted there again.

There were some interesting lessons here for us. He had a lot of inside knowledge that he shared, with no concern for or allegiance to our policy. And he was motivated and well-resourced to use the web to attack us. But, in the end, we didn’t see any negative impact on our organization. The truth was, it was easy to separate his bias from his “inside scoops”, and hard to paint us in a very negative light, because the skeletons that he let out of our closet were a lot like anybody else’s.

What this suggests is that the messenger matters as much as the message. Good and bad tweets and blog posts about your organization will be weighed by the position and credibility of the tweeter or blogger.

Transparency and Constituent Data Breaches

Two years ago, a number of nonprofits were faced with a difficult decision when a popular hosted eCRM service was compromised, and account information for donors was stolen by one or more hackers. Thankfully, this wasn’t credit card information, but it included login details, and I’m sure that we all know people who use the same password for their online giving as they do for other web sites, such as, perhaps, their online banking. This was a serious breach, and there was a certain amount of disclosure from the nonprofits to their constituents that was mandated.

Strident voices in the community called for full disclosure, urging affected nonprofits to put a warning on the home page of their web sites. Many of the organizations settled for alerting, via phone and/or email, every donor whose data was potentially compromised, determining that their unaffected constituents might not be clear on how the breach happened or what the risks were, and would simply take the home page warning as a suggestion to not donate online.

To frame this as a black-and-white issue, demanding that it be treated with no discretion, is extreme. The seriousness of the threat that resulted from this particular breach was not a simple thing to quantify or explain. So it boils down to a number of factors:

  • Scope: If all or most of your supporters are at risk, or the number at risk is in the six-figure range, it’s probably more responsible, in the name of protecting them, to broadcast the alert widely. If, as in the case above, those impacted are only the ones who donate online, then that’s probably not close to the number that would fully warrant broad disclosure, as even the strident voices pointed out.
  • Risk: Will your constituents understand that the notice is informational, and not an admission of guilt or irresponsibility in handling their sensitive data? Alternatively, if this becomes public knowledge, would your lack of transparency look like an admission of guilt? You should be comfortable with your decision, and able to explain it.
  • Consistency: Some nonprofits have more responsibility to model transparency than others. If the Sunlight Foundation was one of the organizations impacted, it’s a no-brainer. Salvation Army? Transparency isn’t referenced on their “Positions” page.
  • Courtesy: Some constituencies are more savvy about this type of thing than others. If the affected constituents have all been notified, and they represent a small portion of the donor base, it’s questionable whether scaring your supporters in the name of openness is really warranted.

Since alternate exposure, in the press or community, is likely to occur, the priority is to have a consistent policy about how and when you broadcast information about security breaches. Denying in any public forum that something has happened would be irresponsible and unethical, and would most likely come right back at you. Not being able to explain why you chose not to publicize it on your website could also have damaging consequences. Erring on the side of alerting and protecting those impacted by security breaches is the better way to go, but the final choice has to weigh all of the risks and factors.

Conclusion

All of my examples assume you’re doing the right things. You have justifiable reasons for doing things that might be considered provocative. Your overall efforts are mission-focused. And the reasons for keeping certain information private are that it needs to be private (client medical records, for example); that its privacy supports your mission-based objectives; and/or that it respects the privacy of people close to the information.

No matter how well we protect our data, the walls are much thinner than they used to be. Any unfortunate tweet can “go viral”. We can’t put a lock on our information that will truly secure it. So it’s important to manage communications with an understanding that information will be shared. Protect your overall reputation, and don’t sweat the minor slips that reveal, mostly, that you’re not a paragon of perfection but a group of human beings struggling to make a difference under the usual conditions.

The ROI on Flexibility

This post originally appeared on the Idealware Blog in April of 2009.

Nonprofit social media maven Beth Kanter blogged recently about starting up a residency at a large foundation, and finding herself in a stark transition from a consultant’s home office to a corporate network. This sounds like a great opportunity for corporate culture shock. When your job is to download many of the latest tools and try new things on the web that might inform your strategy or make a good topic for your blog, encountering locked-down desktops and web filtering can be, well, “annoying” is probably way too soft a word. Beth reports that the IT team was ready for her, guessing that they’d be installing at least 72 things for her during her nine-month stay. My question to Beth was, “That’s great, but are they just as accommodating to their full-time staff, or is flexibility reserved for visiting nptech dignitaries?”

The typical corporate desktop computer is restricted by group policies and filtering software. Management, along with the techs, justifies these restrictions in all sorts of ways:

  • Standardized systems are easier and more cost-effective to manage.
  • Restricted systems are more secure.
  • Web filtering maximizes available bandwidth.

This is all correct. In fact, without standardization, automation, group policies that control what can and can’t be done on a PC, and some protection from malicious web sites, any company with 15 to 20 desktops or more is really unmanageable. The question is, why do so many companies take this ability to manage by controlling functionality to extremes?

Because, in many (if not most) cases, the restrictions put in place are far broader than is necessary to keep things manageable. Web filtering not only blocks pornography and spyware, but continues on to sports, entertainment, politics, and social networking. Group policies restrict users from changing their desktop colors or setting the system time. And using the standardization tools to intensively control computer usage means, most often, that IT works just as hard or harder managing the exceptions to the rules (like Beth’s 72, above) than it would dealing with the tasks that the automation simplifies in the first place.
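To make the overblocking concrete, here’s a minimal sketch of how category-based web filtering typically works. The category names and the site-to-category mappings below are invented for illustration, not taken from any real filtering product:

```python
# Hypothetical sketch of category-based web filtering.
# Categories and site mappings are invented for illustration.

BLOCKED_CATEGORIES = {"malware", "adult", "social-networking", "entertainment", "games"}

# An imaginary filter vendor's category database.
SITE_CATEGORIES = {
    "evil-spyware.example": "malware",
    "twitter.com": "social-networking",  # also a research and outreach tool
    "youtube.com": "entertainment",      # also hosts training videos
    "maps-mashup.example": "games",      # a mapping app, miscategorized
}

def is_blocked(host: str) -> bool:
    """Return True if the filter would block this host."""
    return SITE_CATEGORIES.get(host) in BLOCKED_CATEGORIES

# A broad category block stops the spyware site -- and the useful tools too.
for host in SITE_CATEGORIES:
    print(host, "blocked" if is_blocked(host) else "allowed")
```

The point of the sketch: a single coarse category flag blocks the miscategorized mapping app just as firmly as the spyware site. The filter has no notion of what work a site might actually support, which is exactly why exceptions pile up on IT’s desk.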

Restricting computer/internet use is driven by a management and/or IT assumption that the diverse, dynamic nature of computing is either a distraction or a problem. The opportunity to try something new is an opportunity to waste time or resources. By locking down the web and eliminating a user’s ability to install applications or even access settings, PCs can be engineered back down to the limited functionality of the office equipment that they replaced, such as typewriters, calculators, and mimeograph machines.

In this environment, technology is much more of a controlled, predictable tool. But what’s the cost of this predictability?

  • Technology is not fully appreciated, and computer literacy is limited in an environment where users can’t experiment.
  • Strategic opportunities that arise on the web are not noticed and factored into planning.
  • IT is placed in the role of organizational nanny, responsible for curtailing computer use, as opposed to enabling it.

Cash- and resource-strapped, mission-focused organizations need only look around to see the strategic opportunities inherent in the web. There are an astounding number of free, innovative tools for activism and research. The opportunities to monitor discussion of your organization and issues, and to meaningfully engage your constituents, are huge. And all of this is fairly useless if your staff are locked out of the web and discouraged from exploring it. Pioneers like Beth Kanter understand this. They seek out the new things and ask: how can this tool, this web site, this online community serve our sector’s goals to ease suffering and promote justice? More specifically, can you end hunger in a community with a widget? Or bring water to a parched village via Twitter? If our computing environment is geared to stifle innovation in the name of security, are we truly supporting technology?

As the lead technologist at my organization, I want to be an enabler. I want to see our attorneys use the power of the web to balance the scales when we go to court against far better-resourced corporate and government counsel. In this era of internet Davids taking down Goliaths, from the RIAA to the mainstream media, I don’t want my co-workers to miss out on any opportunities to be effective. So I need the flexibility and perspective to understand that security is not something that you maintain with a really big mallet, lest you stamp out innovation and strategy along with the latest malware. And, frankly, cleaning a case of the Conficker worm off of the desktop of an attorney who just took down a set of high-paid corporate attorneys, using data grabbed from some innovative mapping application that our web-filtering software would have mistakenly identified as a gaming site, is well worth the effort.

Flexibility has its own Return on Investment (ROI), particularly at nonprofits, where we generally have a lot more innovative thinking and opportunistic attitude than available budget. IT has to be an enabler, and every nonprofit CIO or IT Director has to understand that security comes at a cost, and that cost could be the mission-effectiveness of our organizations.