This post originally appeared on the Idealware Blog in April of 2009.
Last week I had the thrill of visiting a normally closed-to-the-public Science Building at UC Berkeley, and getting a tour of the lab where they examine interstellar space dust collected from the far side of Mars. NASA spent five or six years, using some of the best minds on the planet and $300,000,000, to develop the probe that went out past Mars to zip (at 400 miles a second) through comet tails and whatever else is out there, gathering dust. The most likely result of the project was that the probe would crash into an asteroid and drift out there until it wasted away. But it didn’t, and the scientists that I met on Saturday are now using these samples to learn things about our universe that are only speculative fiction today.
So, what does NASA know that we don’t about the benefits of taking risks?
In my world of technology management, the focus seems to be primarily on minimizing risk. We do multiple backups of critical data to different media; we lock down the internet traffic that can go in and out of our network; we build redundancy into all of our servers and systems; and we treat technology as something that will surely fail if we aren’t vigilant in our efforts to secure it. Most of our favorite adages are about avoiding risk: “If it ain’t broke, don’t fix it!” and “Nobody was ever fired for buying IB.. er, Microsoft.”
On Monday, I’ll be presenting on my chapter of NTEN’s book “Managing Technology to Meet Your Mission” at the Nonprofit Technology Conference in San Francisco. My session, and chapter, is about mission-focused technology planning and the art of providing business-class systems on a nonprofit budget. That’s certainly about finding sustainable and dependable options, but my case is that nonprofits, in particular, need to identify the areas where they can send out those probes and gamble a bit. For many nonprofits, technology planning is a matter of figuring out which systems desperately need upgrading and then living with a lot of systems and applications that are old and semi-functional. I argue for a different approach: spend like a regular business on the critical systems, but be creative and take risks where we can afford to fail a bit, on the chance that we’ll get far more for less money than we would by playing it “safe” with inadequate technology. It’s a tough sell, yes, but I frame it in my belief that, when your business is changing the world, your business plan has to be bold and creative. As I mention often, the web is, right now, a platform rife with opportunity. We will miss out on great chances to significantly advance our missions if we treat it as just another threat to our stability.
We need stable systems, and we often struggle to find the funding and technical resources simply to maintain our computer systems. I say that, as hard as that is, we need to invest in exploration. It’s about maximizing potential at the same time as you minimize risk. And it’s all about the type of dust that you want to gather.