Tag Archives: disruption

The Future Of Technology

…is the name of the track that I am co-facilitating at NTEN’s Leading Change Summit. I’m a late addition, there to support Tracy Kronzak and Tanya Tarr. Unlike the popular Nonprofit Technology Conference, LCS (not to be confused with LSC, as the company I work for is commonly called, or LSC, my wife’s initials) is a smaller, more focused affair with three tracks: Impact Leadership, Digital Strategy, and The Future of Technology. The expectation is that attendees will pick a track and stick with it. Nine hours of interactive sessions on each topic will be followed by a day spent at the Idea Accelerator, a workshop designed to jump-start each attendee’s work in their areas. I’m flattered that they asked me to help out, and excited about what we can do to help resource and energize emerging nptech leaders at this event.

The future of technology is also something that I think about often (hey, I’m paid to!), both in terms of what’s coming and how we (LSC and the nonprofit sector) are going to adapt to it. Here are some of the ideas that I’m bringing to LCS this fall:

  • At a tactical level, no surprise, the future is in the cloud; it’s mobile; it’s software as a service and apps, not server rooms and applications.
  • The current gap between enterprise and personal software is going to go away, and “bring your own app” is going to be the computing norm.
  • Software evaluation will look more at interoperability, mobile, and user interface than advanced functionality.  In a world where staff are more independent in their software use, with less standardization, usability will trump sophistication.  We’ll expect less of our software, but we’ll expect to use it without any training.
  • We’ll expect the same access to information and ability to work with it from every location and every device. There will still be desktop computers, and they’ll have more sophisticated software, but there will be fewer people using them.
  • A big step is coming within a year or two, when mobile manufacturers solve the input problem. Today, it’s difficult to do serious content creation on mobile devices, due primarily to the clumsiness of the keyboards and the small screens. They will come up with something creative to address this.
  • IT staffing requirements will change.  And they’ll change dramatically.  But here’s what won’t happen: the percentage of technology labor won’t be reduced.  The type of work will change, and the distribution of tech responsibility will be spread out, but there will still be a high demand for technology expertise.
  • The lines between individual networks will fade. We’ll do business on shared platforms like Salesforce, Box, and {insert your favorite social media platform here}.  Sharing content with external partners and constituents will be far simpler. One network, pervasive computing, no more firewalls (well, not literally — security is still a huge thing that needs to be managed).

This all sounds good! Less IT controlling what you can and can’t do. Consumerization demystifying technology and making it more usable.  No more need to toss around acronyms like “VPN.”

Of course, long after this future arrives, many nonprofits will still be doing things the old-fashioned ways.  Adapting to and adopting these new technologies will require some changes in our organizational cultures.  If technology is going to become less of a specialty and more of a commodity, then technical competency and comfort using new tools need to be common attributes of every employee. Here are the stereotypes that must go away today:

  1. The technophobic executive. It is no longer allowable to say you are qualified to lead an organization or a department if you aren’t comfortable thinking about how technology supports your work.  It disqualifies you.
  2. The control freak techie.  They will fight the adoption of consumer technology with tooth and claw, and use the potential security risks to justify their approach. Well, yes, security is a real concern.  But the risk of data breaches has to be balanced against the lost business opportunities we face when we restrict all technology innovation. I blogged about that here.
  3. The paper-pushing staffer. All staff should have basic data management skills; enough to use a spreadsheet to analyze information and understand when the spreadsheet won’t work as well as a database would.
  4. Silos, big and small. The key benefit of our tech future is the ability to collaborate, both inside our company walls and out. So data needs to be public by default, secured only when necessary.  Policy and planning have to cross department lines.
  5. The “technology as savior” trope. Technology can’t solve your problems.  You can solve your problems, and technology can facilitate your solution. It needs to be understood that big technology implementations have to be preceded by business process analysis.  Otherwise, you’re simply automating bad or outdated processes.

I’m looking forward to the future, and I can’t wait to dive into these ideas and more about how we use tech to enhance our operations, collaborate with our community and constituents, and change the world for the better.   Does this all sound right to you? What have I got wrong, and what have I missed?

The Nonprofit Management Gap

I owe someone an apology. Last night, a nice woman that I’ve never met sent me an email relaying (not proposing) an idea that others had pitched. Colleagues of mine who serve in communications roles in the nonprofit sector were suggesting a talk on “Why CIOs/CTOs should be transitioned into Chief Digital and Data Officers”.  And, man, did that line get me going.

Now, I’m with them on a few points: Organizations that rely on public opinion and support to accomplish their mission, which includes the majority of nonprofits, need to hire marketers that get technology, particularly the web.  And those people need to be integrated into upper management, not reporting to the Development VP or COO.  It’s the exact same case I make for the lead technologist role.

Let’s look at a few of these acronyms and titles:

COO – Chief Operating Officer.  In most NPOs that have one, this role oversees operations while the CEO oversees strategy and advances the mission with the public.

CIO – Chief Information Officer.  CIOs are highly placed technologists whose core job is to align technology to mission-effectiveness.  In most cases, because we can’t afford large staffs, CIOs also manage the IT Department, but their main value lies in the business planning and collaboration that they foster in order to integrate technology.

Some companies hire CTOs: Chief Technology Officers.  This is in product-focused environments where, again, you need a highly placed technologist who can manage the communication and expectations between the product experts and the technical staff designing and developing the products for them.

IT Director – An IT Director is a middle manager who oversees technology planning, budgeting, staff and projects. In (rare) cases, they report to a CIO or CTO.  In the nonprofit world, they are often the lead technologists, but they report up to a COO or VP Admin, not the CEO.

CMO – Chief Marketing Officer.  This is a new role which, similar to CIO, elevates the person charged with constituent engagement to the executive level.

This is how many nonprofit CEOs think about technology:

[Public domain image]

Say you, at home, have a leaky faucet.  It’s wasting water and the drip is driving you crazy.  You can’t just tear out the sink — you need that.  So you hire a plumber.  Or, if you have the opportunity, you get your accidental te– I mean, acne-dented teenager to read up on it and fix the leak for you.  So now you have a plumber, and your sink is no longer dripping. Great!
Now you want to remodel your house.  You want to move the master bath downstairs and the kitchen to the east side.  That’s going to require planning. Risk assessment. Structural engineering. You could hire a contractor — someone with the knowledge and the skill to not only oversee plumbing changes, but project management, vendor coordination, and, most important, needs assessment. Someone who knows how to ask you what you want and then coordinate the effort so that that’s what you get.  So, what should you do?
Have the plumber do it.  He did a good job on the leak, right?
Every job that I’ve had since 1990 has, at the outset, been to fix the damage that a plumber did while they were charged with building a house.  Sometimes I’ve worked for people who got it, who saw that they needed my communication skills as much as or more than they needed my technical expertise.  At those jobs, I was on a peer level with the other department heads, not one lower.  Other times, they expected me to be just like the plumber that I replaced. They were surprised and annoyed when I tried to tell them that what they really needed was to work with me, not delegate to me.  At those jobs, I was mostly a highly functional pain in the ass.

Some of those jobs got bad, but here’s how bad it can get when management just doesn’t get technology.

So, back to my rant, here’s my question: why would we increase the strategic role of marketing at the expense of strategic technology integration?  Is that a conscious desire to move just as far backward as we’re moving forward?  Is this suggestion out of a frustration that people who manage technology aren’t exclusively supporting communications in our resource-strapped environments? In any case, it’s a sad day for the sector if we’re going to pitch turf wars instead of overall competence.  There is no question: we need high level technologists looking after our infrastructure, data strategy, and constituent engagement. But we can’t address critical needs by crippling other areas.

Hearts and Mobiles

This post was originally published on the Idealware Blog in March of 2010.

Are Microsoft and Apple using the mobile web to dictate how we use technology? And, if so, what does that mean for us?

Last week, John Herlihy, Google’s Chief of Sales, made a bold prediction:

“In three years time, desktops will be irrelevant.”

Herlihy’s argument was based on research indicating that, in Japan, more people now use smartphones for internet entertainment and research than desktops. It’s hard to dispute that the long-predicted “year of the smartphone” has arrived in the U.S., with iPhones, Blackberries and Android devices hitting record sales figures, and Apple’s “magical” iPad leading a slew of mini-computing devices out of the gate.

We’ve noted Apple’s belligerence about which applications they allow on their mobile platform: nothing gets in without passing a fairly restrictive and controversial screening process. It’s disturbing that big corporations like Playboy get a pass from a broad “no nudity” policy on iPhone apps that a swimwear store doesn’t. But it’s more disturbing that competing technology providers, like Google and Opera, can’t get their call routing and web browsing applications approved either. It’s Apple’s world, and iPhone owners have to live in it (or play dodgeball with each upgrade on their jailbroken devices). And now Microsoft has announced their intention to play the same game. Windows Mobile 7, their “from the ground up” rewrite of their mobile OS, will have an app store, and you will not be able to install applications from anywhere else.

iPhone adherents tell me that the consistency and stability of Apple’s tightly-controlled platform is better than the potentially messy open platforms. You might get a virus. Or you might see nudity. And your experience will vary dramatically from phone to phone, as the telcos modify the user interface and sub in their own applications for the standard ones. There are plenty of industry experts defending Apple’s policies.

What they don’t crow about is the fact that, on the Apple and Microsoft devices, you are largely locked into buying DRM-protected digital content from their own stores. They will make most of their smartphone profits on the media that they sell you (music, movies, ebooks), and they tightly control the information and data flow, as well as the devices you play their content on. How comfortable are you with letting the major software manufacturers control not only what software you can install on your systems, but what kind of media is available to them, as well?

The latest reports on the iPad are that, in addition to not supporting Adobe’s popular Flash format, it won’t work with Google’s Picasa image management software either. If you keep your photos with Google, you’d better quickly get them to an Apple-friendly storage service like Apple’s MobileMe or Flickr, and get ready to use iPhoto to manage them.

If your organization has invested heavily in a vendor or product that Apple and/or Microsoft are crossing off their list, you face a dilemma. Can you just ignore the people using their popular products? Should you immediately redesign your Flash-heavy website with something that you hope Apple will continue to support? If your cause is controversial, are you going to be locked out of a strategic mobile market for advocacy and development because the nature of your work can’t get past the company censors?

I’m nervous to see a major computing trend like mobile computing arise with such disregard for the open nature of the internet that the companies releasing these devices pioneered and grew up in. And I’m concerned that there will be repercussions to moving to a model where single vendors are competing to be one stop hardware, software and content providers. It’s not likely that Apple, Microsoft, Amazon, Google or anyone else is really qualified to determine what each of us want and don’t want to read, watch and listen to. And it’s frightening to think that the future of our media consumption might be tied to their idiosyncratic and/or profit-driven choices.

Get Ready For A Sea Change In Nonprofit Assessment Metrics

This post was originally published on the Idealware Blog in December of 2009.


Last week, GuideStar, Charity Navigator, and three other nonprofit assessment and reporting organizations made a huge announcement: the metrics that they track are about to change.  Instead of scoring organizations on an “overhead bad!” scale, they will scrap the traditional metrics and replace them with ones that measure an organization’s effectiveness.

The new metrics will assess:
  • Financial health and sustainability;
  • Accountability, governance and transparency; and
  • Outcomes.

This is very good news. That overhead metric has hamstrung serious efforts to do bold things and have higher impact. An assessment that is based solely on annualized budgetary efficiency precludes many options to make long-term investments in major strategies.  For most nonprofits, taking a year to staff up and prepare for a major initiative would generate a poor Charity Navigator score. A poor score that is prominently displayed to potential donors.

Assuming that these new metrics will be more tolerant of varying operational approaches and philosophies, justified by the outcomes, this will give organizations a chance to be recognized for their work, as opposed to their cost-cutting talents.  But it puts a burden on those same organizations to effectively represent that work.  I’ve blogged before (and will blog again) on our need to improve our outcome reporting and benchmark with our peers.  Now, there’s a very real danger that neglecting to represent your success stories with proper data will threaten your ability to muster financial support.  You don’t want to be great at what you do, but have no way to show it.

More to the point, the metrics that value social organizational effectiveness need to be developed by a broad community, not a small group or segment of that community. The move by Charity Navigator and their peers is bold, but it’s also complicated.  Nonprofit effectiveness is a subjective thing. When I worked for a workforce development agency, we had big questions about whether our mission was served by placing a client in a job, or if that wasn’t an outcome as much as an output, and the real metric was tied to the individual’s long-term sustainability and recovery from the conditions that had put them in poverty.

Certainly, a donor, a watchdog, a funder, a nonprofit executive and a nonprofit client are all going to value the work of a nonprofit differently. Whose interests will be represented in these valuations?

So here’s what’s clear to me:

– Developing standardized metrics, with broad input from the entire community, will benefit everyone.

– Determining what those metrics are and should be will require improvements in data management and reporting systems. It’s a bit of a chicken-and-egg problem, as collecting the data is a prerequisite for determining how to assess it, but standardizing the data will assist in developing the data systems.

– We have to share our outcomes and compare them in order to develop actual standards.  And there are real opportunities available to us if we do compare our methodologies and results.

This isn’t easy. It will require that NPOs who have never had the wherewithal to invest in technology systems to assess performance do so.  But, I maintain, if the world is going to start rating your effectiveness on more than the 990, that’s a threat that you need to turn into an opportunity.  You can’t afford not to.

And I look to my nptech community, including Idealware, NTEN, Techsoup, Aspiration and many others — the associations, formal, informal, incorporated or not, who advocate for and support technology in the nonprofit sector — to lead this effort.  We have the data systems expertise and the aligned missions to lead the project of defining shared outcome metrics.  We’re looking into having initial sessions on this topic at the 2010 Nonprofit Technology Conference.

As the world starts holding nonprofits up to higher standards, we need a common language that describes those standards.  It hasn’t been written yet.  Without it, we’ll escape the limited Form 990 assessments only to land on something that might equally fail to reflect our best efforts and outcomes.

Security and Privacy in a Web 2.0 World

This post originally appeared on the Idealware Blog in November of 2009.
[Image: a tweet from Beth Kanter]

Yes, we do Twitter requests!

To break down that tweet a bit, kanter is the well-known Beth Kanter of Beth’s blog. pearlbear is former Idealware blogger and current contributor Michelle Murrain, and Beth asked us, in the referenced blog post, to dive a bit into internet security and how it contrasts with internet privacy concerns. Michelle’s response offers excellent and concise definitions of security and privacy as they apply to the web, and then sums up with a key distinction: security is a set of tools for protecting systems and information. The sensitivity of that data (and the need for privacy) is a matter of policy. So the next question is, once you have your security systems and policies in place, what happens when the policies are breached?

Craft a Policy that Minimizes Violations

Social media is casual media. The Web 2.0 approach is to present a true face to the world, one that interacts with the public and allows for individuals, with individual tastes and opinions, to share organizational information online. So a strict rule book and mandated wording for your talking points are not going to work.

Your online constituents expect your staff to have a shared understanding of your organization’s mission and objectives. But they also expect the CEO, the Marketing Assistant and the volunteer Receptionists to have real names (and real pictures on their profiles); their own online voices; and interests they share that go beyond the corporate script. It’s not a matter of venturing too far out of the water — in fact, that could be as much of a problem as staying too close to the prepared scripts. But the tone that works is the one of a human being sharing their commitment and excitement about the work that they (and you) do.

Expect that the message will reflect individual interpretations and biases. Manage the messaging to the key points, and make clear the areas that shouldn’t be discussed in public. Monitor the discussion, and proactively mentor (as opposed to chastising) staff who stray in ways that violate the policy, or seem capable of doing so.

The Case for Transparency

Transparency assumes that multiple voices are being heard; that honest opinions are being shared, and that organizations aren’t sweeping the negative issues under the virtual rug. Admittedly, it’s a scary idea that your staff, your constituents, and your clients should all be free to represent you. The best practice of corporate communications, for many years, was to run all messaging through Marketing/Communications experts and tightly control what was said. I see two big reasons for doing otherwise:

  • We no longer have a controlled media.

Controlled messaging worked when opening your own TV or Radio Station was prohibitively expensive. Today, YouTube, Yelp and Video Blogs are TV Stations. Twitter and Facebook Status are radio stations. The investment cost to speak your mind to a public audience has just about vanished.

  • We make more mistakes by under-communicating than we do by over-communicating.

Is the importance of hiding something worth the cost of looking like you have something to hide? At the peak of the dot com boom, I hired someone onto my staff at about $10k more (annually) than current staff in similar roles were making. An HR clerk accidentally sent the offer letter to my entire staff. The fallout was that I had meaningful talks about compensation with each of my staff; made them aware that they were getting market (or better) in a rapidly changing market, and that we were keeping pace on anniversary dates. Prior to the breach, a few of my staff had been wrongly convinced that they were underpaid in their positions. The incident only strengthened the trust between us.

The Good, the Bad, and the Messenger

Your blog should allow comments, and — short of spam, personal attacks and incivility — shouldn’t be censored. A few years ago, a former employee of my (former) org managed to register the .com extension of our domain name and put up a web site criticizing us. While the site didn’t get a lot of hits, he did manage to find other departed staff with axes to grind, and his online forum was about a 50-50 mix of people trashing us and others defending. After about a month, he went in and deleted the 50% of forum messages that spoke up for our organization, leaving the now one-sided, negative conversation intact. And that was the end of his forum; nobody ever posted there again.

There were some interesting lessons here for us. He had a lot of inside knowledge that he shared, with no concern or allegiance to our policy. And he was motivated and well-resourced to use the web to attack us. But, in the end, we didn’t see any negative impact on our organization. The truth was, it was easy to separate his bias from his “inside scoops”, and hard to paint us in a very negative light, because the skeletons that he let out of our closet were a lot like anybody else’s.

What this proves is that the weight a message carries depends on the messenger. Good and bad tweets and blog posts about your organization will be weighed by the position and credibility of the tweeter or blogger.

Transparency and Constituent Data Breaches

Two years ago, a number of nonprofits were faced with a difficult decision when a popular hosted eCRM service was compromised, and account information for donors was stolen by one or more hackers. Thankfully, this wasn’t credit card information, but it included login details, and I’m sure that we all know people who use the same password for their online giving as they do for other web sites, such as, perhaps, their online banking. This was a serious breach, and there was a certain amount of disclosure from the nonprofits to their constituents that was mandated.

Strident voices in the community called for full disclosure, urging affected nonprofits to put a warning on the home page of their web sites. Many of the organizations settled for alerting every donor that was potentially compromised via phone and/or email, determining that their unaffected constituents might not be clear on how the breach happened or what the risks were, and would simply take the home page warning as a suggestion to not donate online.

To frame this as a black and white issue, demanding that it be treated with no discretion, is extreme. The seriousness and threat that resulted from this particular breach was not a simple thing to quantify or explain. So it boils down to a number of factors:

  • Scope: If all or most of your supporters are at risk, or the number at risk is in the six-figure range, it’s probably more responsible, in the name of protecting them, to broadcast the alert widely. If, as in the case above, those impacted are only the ones who donate online, then that’s probably not close to the number that would fully warrant broad disclosure, as even the strident voice pointed out.
  • Risk: Will your constituents understand that the notice is informational, and not an admission of guilt or irresponsibility in handling their sensitive data? Alternatively, if this becomes public knowledge, would your lack of transparency look like an admission of guilt? You should be comfortable with your decision, and able to explain it.
  • Consistency: Some nonprofits have more responsibility to model transparency than others. If the Sunlight Foundation was one of the organizations impacted, it’s a no-brainer. Salvation Army? Transparency isn’t referenced on their “Positions” page.
  • Courtesy: Some constituencies are more savvy about this type of thing than others. If the affected constituents have all been notified, and they represent a small portion of the donor base, it’s questionable whether scaring your supporters in the name of openness is really warranted.

Since alternate exposure, in the press or community, is likely to occur, the priority is to have a consistent policy about how and when you broadcast information about security breaches. Denying that something has happened in any public forum would be irresponsible and unethical, and would most likely come right back at you. Not being able to explain why you chose not to publicize it on your website could also have damaging consequences. Erring on the side of alerting and protecting those impacted by security breaches is the better way to go, but the final choice has to weigh all of the risks and factors.

Conclusion

All of my examples assume you’re doing the right things. You have justifiable reasons for doing things that might be considered provocative. Your overall efforts are mission-focused. And the reasons for privacy regarding certain information are that it needs to be private (client medical records, for example); it supports your mission-based objectives by being private, and/or it respects the privacy of people close to the information.

No matter how well we protect our data, the walls are much thinner than they used to be. Any unfortunate tweet can “go viral”. We can’t put a lock on our information that will truly secure it. So it’s important to manage communications with an understanding that information will be shared. Protect your overall reputation, and don’t sweat the minor slips that reveal, mostly, that you’re not a paragon of perfection, maybe, but a group of human beings, struggling to make a difference under the usual conditions.

Word or Wiki?

This post was originally published on the Idealware Blog in August of 2009.

An award-winning friend of mine at NTEN referred me to this article, by Jeremy Reimer, suggesting that Word, the ubiquitous Microsoft text manipulation application, has gone the way of the dinosaur.  The “boil it down” quote:

“Word was designed in a different era, for a very specific purpose. We don’t work that way anymore.”

Reimer’s primary reasoning is that Word was originally developed as a tool that prepares text for printing. Since we now do far more sharing online than on paper, formatting is less important. He also points out that Word files are unwieldy in size, due to the need to support so many advanced but not widely used features. He correctly points out that wikis save every edit, allowing for easy recovery and collaboration. Word’s difficult-to-read-and-use Track Changes feature is the closest equivalent.

Now, I might have a reputation here as a Microsoft basher, but, the truth is, Word holds a treasured spot on my Mac’s Dock. Attempts to unseat it by Apple’s Pages, Google Docs and Open Office have been short-lived and fruitless. But Reimer’s absolutely right — I use Word far more for compatibility’s sake than for the feature set.  There are times – particularly when I’m working on an article with an editor – when the granular Track Changes readout fits the bill better than a wiki’s revision history, because I’m interested in seeing every small grammatical correction.  And there are other times when the templates and automation bring specific convenience to a task, such as when I’m doing a formal memo or printing letterhead at work.  But, for the bulk of writing that I do now, which is intended for sharing on the web, wikis put Word to shame.

The biggest problem with Word (and its ilk) is that documents can only be jointly edited when that’s facilitated by desktop sharing tools, such as GoToMeeting or ReadyTalk, and now Skype. In most cases, collaboration with Word docs involves multiple copies of the same document being edited concurrently by different people on different computers.  This creates logistical problems when it comes time to merge edits.  It also results in multiple copies of the revised documents on multiple computers and in assorted email inboxes. And, don’t forget that Track Changes use results in larger documents that are more easily corrupted.

A wiki document is just a web page on a server that anyone who is authorized to do so can modify.  Multiple people can edit a wiki concurrently, or they can edit on their own schedules.  The better wiki platforms handle editing conflicts gracefully. Every revision is saved, allowing for an easy review of all changes.  Earlier versions are simple to revert back to.  This doesn’t have to be cloud computing — the wiki can live on a network server, just as most Word documents do.

But it’s more than just the collaborative edge.  Wikis are casual and easy.  Find the page, click “edit”, go to work.  Pagination isn’t an issue. Everything that you can do is usually in a toolbar above the text, and that’s everything that you’d want to do as well.

So when the goal is meeting notes, agendas, documentation, project planning or brainstorming, a wiki might be a far simpler way to meet the need than emailing a Word document around. Word can be dusted off for the printed reports and serious writing projects. In the information age, it appears that the wiki is mightier than the Word.

Next week I’ll follow up with more talk about wikis and how they can meet organizational needs.

The Silo Situation

This post originally appeared on the Idealware Blog in May of 2009.

The technology trend that defines this decade is the movement towards open, pervasive computing. The Internet is at our jobs, in our homes, on our phones, TVs, gaming devices. We email and message everyone from our partners to our clients to our vendors to our kids. For technology managers, the real challenges are less in deploying the systems and software than they are in managing the overlap, be it the security issues all of this openness engenders, or the limitations of our legacy systems that don’t interact well enough. But the toughest integration is not one between software or hardware systems, but, instead, the intersection of strategic computing and organizational culture.

There are two types of silos that I want to discuss: organizational silos, and siloed organizations.

An organizational silo, to be clear, is a group within an organization that acts independently of the rest of the organization, making their own decisions with little or no input from those outside of the group. This is not necessarily a bad thing; there are (although I can’t think of any) cases where giving a group that level of autonomy might serve a useful purpose. But, when the silo acts in an environment where their decisions impact others, they can create long-lived problems and rifts in critical relationships.

We all know that external decisions can disrupt our planning, be it a funder’s decision to revoke a grant that we anticipated or a legislature dropping funding for a critical program. So it’s all the more frustrating to have the rug pulled out from under us by people who are supposed to be on the same team. If you have an initiative underway to deploy a new email system, and HR lays off the organizational trainer, you’ve been victimized by a siloed decision. On the flip side, a fundraiser might undertake a big campaign, unaware that it will collide with a web site redesign that disables the functionality that they need to broadcast their appeal.

Silos thrive in organizations where the leadership is not good at management. Without a strong CEO and leadership team, departmental managers don’t naturally concern themselves with the needs of their peers. The expediency and simplicity of just calling the shots themselves is too appealing, particularly in environments where resources are thin and making overtures to others can result in those resources being gladly taken and never returned. In nonprofits, leaders are often more valued for their relationships and fundraising skills than their business management skills, making our sector more susceptible to this type of problem.

The most damaging result of operating in this environment is that, if you can’t successfully manage the silos in your organization, then you won’t be anything but a silo in the world at large.

We’ve witnessed a number of industries, from entertainment and newspapers to telephones and automobiles, as they allowed their culture to dictate their obsolescence. Instead of adapting their models to the changing needs of their constituents, they’ve clung to older models that aren’t relevant in the digital age, or appropriate for a global economy on a planet threatened by climate change. Since my focus is technology, I pay particular attention to the impacts that technological advancement, and the accompanying change in extra-organizational culture (e.g., the country, our constituents, the world) have on the work my organization does. Just in the past few years, we’ve seen some significant cultural changes that should be impacting nonprofit assumptions about how we use technology:

  • Increased regulation on the handling of data. We’re wrestling with the HIPAA laws governing handling of medical data and PCI standards for financial data. If we have not prioritized firewalls, encryption, and the proper data handling procedures, we’re more and more likely to be out of step with new laws. Even the 990 form we fill out now asks if we have a document retention plan.
  • Our donors are now quite used to telephone auto attendants, email, and the web. How many are now questioning why we use the dollars they donate to us to staff reception, hand write thank you notes, and send out paper newsletters and annual reports?
  • Our funders are seeing more available data on the things that interest them everywhere, so they expect more data from us. The days of putting out the success stories without any numbers to quantify them are over.

Are we making changes in response to these continually evolving expectations? Or are we still struggling with our internal expectations, while the world keeps on turning outside of our walls? We, as a sector, need to learn what these industrial giants refused to, before we, too, are having massive layoffs and closing our doors due to an inability to adapt our strategies to a rapidly evolving cultural climate. And getting there means paying more attention to how we manage our people and operations; showing the leadership to head into this millennium by mastering our internal culture and rolling with the external changes. Look inward, look outward, lead and adapt.

Technology and Risk: Are you Gathering Dust?

This post originally appeared on the Idealware Blog in April of 2009.

Last week I had the thrill of visiting a normally closed-to-the-public Science Building at UC Berkeley, and getting a tour of the lab where they examine interstellar space dust collected from the far side of Mars. NASA spent five or six years, using some of the best minds on the planet and $300,000,000, to develop the probe that went out past Mars to zip (at 400 miles a second) through comet tails and whatever else is out there, gathering dust. The most likely result of the project was that the probe would crash into an asteroid and drift out there until it wasted away. But it didn’t, and the scientists that I met on Saturday are now using these samples to learn things about our universe that are only speculative fiction today.

So, what does NASA know that we don’t about the benefits of taking risks?

In my world of technology management, it seems to be primarily about minimizing risk. We do multiple backups of critical data to different media; we lock down the internet traffic that can go in and out of our network; we build redundancy into all of our servers and systems, and we treat technology as something that will surely fail if we aren’t vigilant in our efforts to secure it. Most of our favorite adages are about avoiding risk: “If it ain’t broke, don’t fix it!” and “Nobody was ever fired for buying IB… er, Microsoft.”

On Monday, I’ll be presenting on my chapter of NTEN‘s Book “Managing Technology to Meet Your Mission” at the Nonprofit Technology Conference in San Francisco. My session, and chapter, is about mission-focused technology planning and the art of providing business-class systems on a nonprofit budget. That’s certainly about finding sustainable and dependable options, but my case is that nonprofits, in particular, need to identify the areas where they can send out those probes and gamble a bit. For many nonprofits, technology planning is a matter of figuring out which systems desperately need upgrading and living with a lot of systems and applications that are old and semi-functional. My case is that there’s a different approach: we should spend like a regular business on the critical systems, but be creative and take risks where we can afford to fail a bit, on the chance that we’ll get far more for less money than we would playing it “safe” with inadequate technology. It’s a tough sell, yes, but I frame it in my belief that, when your business is changing the world, your business plan has to be bold and creative. As I mention often, the web is, right now, a platform rife with opportunity. We will miss out on great chances to significantly advance our missions if we just treat it like another threat to our stability.

We need stable systems, and we often struggle with inadequate funding and the technical resources simply to maintain our computer systems. I say that, as hard as that is, we need to invest in exploration. It’s about maximizing potential at the same time as you minimize risk. And it’s all about the type of dust that you want to gather.

The ROI on Flexibility

This post originally appeared on the Idealware Blog in April of 2009.

Nonprofit social media maven Beth Kanter blogged recently about starting up a residency at a large foundation, and finding herself in a stark transition from a consultant’s home office to a corporate network. This sounds like a great opportunity for corporate culture shock. When your job is to download many of the latest tools and try new things on the web that might inform your strategy or make a good topic for your blog, encountering locked-down desktops and web filtering can be, well, annoying is probably way too soft a word. Beth reports that the IT team was ready for her, guessing that they’d be installing at least 72 things for her during her nine-month stay. My question to Beth was, “That’s great – but are they just as accommodating to their full-time staff, or is flexibility reserved for visiting nptech dignitaries?”

The typical corporate desktop computer is restricted by group policies and filtering software. Management, along with the techs, justifies these restrictions in all sorts of ways:

  • Standardized systems are easier, more cost-effective to manage.
  • Restricted systems are more secure.
  • Web filtering maximizes available bandwidth.

This is all correct. In fact, without standardization, automation, group policies that control what can and can’t be done on a PC, and some protection from malicious web sites, any company with 15 to 20 desktops or more is really unmanageable. The question is, why do so many companies take this ability to manage by controlling functionality to extremes?

Because, in many/most cases, the restrictions put in place are far broader than is necessary to keep things manageable. Web filtering not only blocks pornography and spyware, but continues on to sports, entertainment, politics, and social networking. Group policies restrict users from changing their desktop colors or setting the system time. And the end result of using these standardization tools to intensively control computer usage is, most often, that IT works as hard or harder managing the exceptions to the rules (like Beth’s 72, above) than it would dealing with the tasks that the automation simplifies in the first place.

Restricting computer and internet use is driven by a management and/or IT assumption that the diverse, dynamic nature of computing is either a distraction or a problem. The opportunity to try something new is an opportunity to waste time or resources. By locking down the web and eliminating a user’s ability to install applications or even access settings, PCs can be engineered back down to the limited functionality of the office equipment that they replaced, such as typewriters, calculators and mimeograph machines.

In this environment, technology is much more of a controlled, predictable tool. But what’s the cost of this predictability?

  • Technology is not fully appreciated, and computer literacy is limited in an environment where users can’t experiment.
  • Strategic opportunities that arise on the web are not noticed and factored into planning.
  • IT is placed in the role of organizational nanny, responsible for curtailing computer use, as opposed to enabling it.

Cash- and resource-strapped, mission-focused organizations only need look around to see the strategic opportunities inherent in the web. There are an astounding number of free, innovative tools for activism and research. Opportunities to monitor discussion of your organization and issues, and to meaningfully engage your constituents, are huge. And all of this is fairly useless if your staff are locked out of the web and discouraged from exploring it. Pioneers like Beth Kanter understand this. They seek out the new things and ask, how can this tool, this web site, this online community serve our sector’s goals to ease suffering and promote justice? More specifically, can you end hunger in a community with a widget? Or bring water to a parched village via Twitter? If our computing environment is geared to stifle innovation in the name of security, are we truly supporting technology?

As the lead technologist at my organization, I want to be an enabler. I want to see our attorneys use the power of the web to balance the scales when we go to court against far better resourced corporate and government counsel. In this era of internet Davids taking down Goliaths, from the RIAA to the mainstream media, I don’t want my co-workers to miss out on any opportunities to be effective. So I need the flexibility and perspective to understand that security is not something that you maintain with a really big mallet, lest you stamp out innovation and strategy along with the latest malware. And, frankly, cleaning a case of the Conficker worm off of the desktop of an attorney who just took down a set of high-paid corporate attorneys with data grabbed from some innovative mapping application that our web-filtering software would have mistakenly identified as a gaming site is well worth the effort.

Flexibility has its own Return on Investment (ROI), particularly at nonprofits, where we generally have a lot more innovative thinking and opportunistic attitude than available budget. IT has to be an enabler, and every nonprofit CIO or IT Director has to understand that security comes at a cost, and that cost could be the mission-effectiveness of our organizations.

More RSS Tools: Sharing Feeds

This post was first published on the Idealware Blog in April of 2009.

For my last follow-up to my RSS article, Using RSS Tools to Feed your Information Needs, I want to discuss OPML, the standard for RSS reader feed information, and talk a bit about why RSS, which is already quite useful, is about to become an even bigger deal. Last week, I discussed sharing research with Google Reader; before that, filtering RSS feeds with Yahoo! Pipes, and I started with a post about integrating content with websites.

Admitting that I might represent an extreme case, I subscribe to 96 feeds in Google Reader. I started with Google Reader last December – prior to that, I used a Mac RSS Reader called Vienna. Moving from Vienna to Google Reader might have been a chore, but it wasn’t, thanks to Outline Processor Markup Language (OPML). The short story on OPML is that it was developed as a standard format for outlining. While it is used in that capacity, it’s more commonly used as a format for collecting a list of RSS feeds, with last read pointers, that can then be processed by other feed-reading software. So, I exported all of my feeds from Vienna to a .opml file, then I imported that into Google Reader, and all of my feeds were instantly set up. If you run a WordPress blog, you can rapidly build your blogroll by importing an .opml file.
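Under the hood, an OPML file is just XML: a head with a title, and a body containing one outline element per feed, with the feed’s address in an xmlUrl attribute. As a rough sketch of what generating one could look like (the feed titles, URLs and filename below are hypothetical placeholders, not a real published feed list), a few lines of Python using only the standard library would do it:

```python
# A minimal sketch of writing an OPML feed list with Python's standard library.
# The feed titles and URLs are hypothetical examples, not real endpoints.
import xml.etree.ElementTree as ET

feeds = [
    ("Example Nptech Blog", "http://example.org/blog/feed"),
    ("Example News Feed", "http://example.com/news/rss"),
]

opml = ET.Element("opml", version="2.0")
head = ET.SubElement(opml, "head")
ET.SubElement(head, "title").text = "Recommended nptech feeds"

body = ET.SubElement(opml, "body")
for title, url in feeds:
    # Each feed becomes a single <outline> element; readers look for
    # type="rss" and the xmlUrl attribute when importing.
    ET.SubElement(body, "outline", type="rss", text=title, title=title, xmlUrl=url)

# Write feeds.opml, which a feed reader (or a WordPress blogroll import) can ingest.
ET.ElementTree(opml).write("feeds.opml", encoding="utf-8", xml_declaration=True)
```

Importing works the same way in reverse: the reader walks the outline elements and subscribes to whatever it finds in each xmlUrl.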

In addition to sharing feed information with applications, OPML can be used to share a group of feeds with a co-worker, friend or constituent. Say your org does advocacy on a particular issue, and you’ve collected a set of feeds that represent the best news and commentary on your issue. You could make the OPML file available on your web site for your constituents to incorporate in their readers.

At this point, you might be saying to yourself, “what are the odds that my constituents even know what a feed reader is? Wouldn’t making this available be more likely to confuse than help people?” As good a question as that is, here’s why I think that you won’t be asking it soon. RSS has seen quick and steady adoption as a standard web service. Four years ago, it was obscure; today every content management system and web portal supports it. It features prominently in the strategic plans of tech giants like Google, Microsoft and Yahoo!. But it’s not as well-known by the general computing public — RSS still has yet to become a real household concept, like search and email have. The game-changer is underway, though. Last month, the Seattle Post-Intelligencer, one of Seattle’s primary daily papers, ceased print publication. The San Francisco Chronicle announced last month that they are making one last-ditch effort, with a redesign and new printing presses, to stem the growing budget deficit that they face. Competition from TV and the web is driving newspapers out of business, and the hope that something will reverse this trend is thin.

As the internet becomes the primary source of news and opinion, RSS is a natural fit as the delivery medium. You can see that all of the Seattle PI sections are available as RSS feeds, and they have an option to customize the news and features that you see on your homepage. How long before they offer your customized paper as an OPML file, allowing you to instantly replicate your web experience in a reader?

In 1995, internet email was an arcane, technical concept. I figured out that I could send mail to an Internet address using my company’s MCI Mail account. My email address was 75 characters long. RSS may seem similarly oblique today, but it’s well on the road to being a mainstream method of internet information delivery. Your partners and constituents won’t just appreciate your support for it; they’ll start to expect it. I hope that my article and these follow-ups in the blog can serve as a good starting point for understanding what RSS can do and what you might do with it.