Monthly Archives: October 2008

Not all penguins are Tech-savvy

There was an interesting and disturbing article in today’s San Francisco Chronicle. Mind you, it’s an election year; there are lots of these. But this one hit a few of the hot spots in my consciousness – comic strips and technology. Berke Breathed, author of Bloom County, Opus and the short-lived Outland comic strips, was interviewed regarding the end of Opus. This Sunday will herald the last appearance of his long-lived penguin, a mainstay in each of the three strips. Breathed has a number of reasons for retiring, but among them was the following interesting assertion regarding his readership, or lack thereof:

“…I strolled into a college campus after three years of doing my strip, no one had ever read it. In fact they hadn’t read anything, unless it was something from 25 years ago that their parents had given them the books of. So I already saw that the window was closing, that it was just a matter of a few years.”

His target audience of 20- to 30-year-olds, as far as he could tell, was completely disengaged from newspapers and, therefore, his work. But were those college students dutifully reading the paper ten years ago? Doubtful! Further, he threw out some numbers and predictions:

Breathed said his readership was 60 million to 70 million people in 1985, when Peanuts had a readership of 200 million to 300 million and Calvin and Hobbes, 200 million people. “That will never happen on the Web. Your readership drops to a couple thousand people – maybe, if you’re lucky, 10,000.”

As a big aficionado of newspaper strips, I find this very distressing, but I'm also a bit of a skeptic. I would suggest to Breathed that he is predicting the future based on a transitional phase. Newspapers, as is plain to see, are having a difficult time transitioning to the web-based information world. I grabbed this article from sfgate.com, the online version of my daily paper, but I only visit that site to find specific articles or manage my vacation holds. My idea of an online newspaper is my.yahoo.com, igoogle.com or netvibes.com. Each of these sites lets me group together all sorts of information that is fairly akin to what I read in the newspaper, including comic strips. I'm a techie and an early adopter, but trends show RSS adoption growing steadily, and RSS stands for Really Simple Syndication, a concept that a cartoonist should latch right onto. I can grab any strip from GoComics.com as an RSS feed.
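To make that concrete, here's a minimal sketch of what "subscribing to a strip" looks like in code, using the feedparser library; the feed URL is a hypothetical placeholder, not an actual GoComics address.

```python
# Minimal sketch: reading a comic strip's RSS feed with feedparser
# (install with: pip install feedparser). The feed URL below is a
# hypothetical placeholder for whatever address the syndicator publishes.
import feedparser

FEED_URL = "https://www.gocomics.com/example-strip/rss"  # placeholder

feed = feedparser.parse(FEED_URL)
print(feed.feed.get("title", "Untitled feed"))

# Print the five most recent strips with links back to the full entries.
for entry in feed.entries[:5]:
    print(entry.get("title"), "->", entry.get("link"))
```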

It is a different medium. It has the disadvantage that Breathed points out – only a fraction of the people who get his strips delivered in the paper they purchase will willingly subscribe. But how many of those people read them anyway? I've gotten Cathy in my paper for as long as I can remember, but I promise you, I never read it. For now, as we transition, his actual readership is probably down. But comic strips are far from down for the count. On the web, we can subscribe to — and only to — the ones we want to read, and brilliant strips that struggle for readership will stay in circulation. This is a big improvement for the medium. It's really too bad that Berkeley Breathed, one of our most talented practitioners, won't stick around for it.

Hacking my Exchange Data onto my New G1

I'm the proud owner of a new T-Mobile G1 – UPS delivered it yesterday. The G1 is the first phone to use Google's open source Android mobile operating system, and it rocks. This is the first true competitor to the iPhone, with a large touchscreen and a desktop-class web browser on a 3G network, plus WiFi, GPS and a flip-out, full QWERTY keyboard. The G1 is particularly compelling if you use GMail, GTalk and Google Calendar – the integration, particularly with GMail, is phenomenal. Email is pushed to the phone, and the application for reading it is on a par with the standard web client – it's insanely easy to archive, label and delete messages. This is much better than the GMail for Mobile app that runs on other phones. The other compelling thing about Android, which I'll blog more about at Idealware, is the open source OS and open programming environment. Android reeks with potential.

But, if what you're looking for is a cool phone, it's important to point out that this is brand new, and, as an early adopter, I'm paying some early adopter dues. If you aren't the pioneering type, you'll do much better with an iPhone. The Android environment is open, but the number of apps available is pretty slim, with some glaring holes. Missing on G1 Day 1 (which, officially, is today, October 22nd): a notepad or text editor; decent video playback; secure storage (for passwords and the like); and anything beyond very limited connectivity with Microsoft Exchange/Outlook. There's no desktop sync program for Android — you can mount the phone as USB storage and drag files to and from it, but the only synchronization available, so far, is the built-in sync with the Google apps (Mail, Calendar and Contacts) and a couple of brand new apps that can sync contacts with Exchange, given the right conditions.

My situation is this: I work in a Microsoft environment. We run Exchange 2007. I have an active extra-curricular professional life that lives primarily in GMail and Twitter. The G1 handles the latter beautifully (there are already three Twitter apps available, and the web site works great as well), and it handles GMail phenomenally. But what about my work email, calendar and contacts? Solutions should pop up eventually. Funambol is promising an ad-based service that will start with contact sync, then grow to include calendar and email. A Google ContactSync app is available at the Android Market (you can install it from your phone), but it requires Exchange 2007 with the Web Services extension enabled. We're not doing that at Earthjustice, and I made a vow not to ask my sysadmin to reconfigure the server for me (she's got enough to do!). Finally, Google does have a Calendar Sync app, but it only works on Windows; I'm on a Mac, and while I have VMWare Fusion and Windows installed, I only boot up Windows when I have to, which isn't often enough to keep the calendar up to date. So here's what I've done, which is immensely kludgy.

Email: I used an administrator-only feature to forward a copy of my mailbox to GMail. If you aren't, like me, an IT Director with admin rights to your Exchange server, you'll most likely have to buy your system administrator a healthy Amazon gift certificate and grovel a bit. On the GMail side, I created a filter that labels each message from work with "earthjustice" and set up my EJ email address as a valid address to send from, along with the "reply from the same address the message was sent to" default. Now all of my work mail arrives twice – once in Outlook, once in GMail. I am hesitant about replying in GMail, because the forwarding is one-way, and those replies won't land in my Outlook Sent folder. But I get all of my mail pushed, so I don't miss anything, and I can always jump to Outlook Web Access if I want to reply "in country".
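As a sanity check on that setup, here's a rough sketch of confirming that the forwarded messages are landing under the filter's label, using Python's standard imaplib. It assumes IMAP access is enabled on the GMail account; the credentials are placeholders.

```python
# Rough sketch: verify that forwarded work mail is arriving under the
# GMail label applied by the filter ("earthjustice" in my case).
# Assumes IMAP access is enabled in GMail settings; credentials are
# placeholders, not real values.
import imaplib

USER = "me@example.com"            # placeholder
PASSWORD = "placeholder-password"  # placeholder

imap = imaplib.IMAP4_SSL("imap.gmail.com")
imap.login(USER, PASSWORD)

# GMail exposes labels as IMAP folders, so the filter's label is selectable.
status, _ = imap.select('"earthjustice"', readonly=True)
if status == "OK":
    status, data = imap.search(None, "UNSEEN")
    unread = data[0].split() if status == "OK" else []
    print(len(unread), "unread forwarded messages under the label")

imap.logout()
```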

Calendar: this was a real kludge. Again, if I used Windows daily, I'd use Google Calendar Sync. But I use my MacBook at home and work and generally log onto Outlook over Citrix, which I can't install the sync on without installing it for the whole company. I worked out a complicated solution: publish my calendar in iCalendar format to iCal Exchange, a free server for storing calendars, then subscribe to it from Google Calendar. I then learned that either iCal Exchange is not sending the proper refresh headers to GCal, or GCal is inept at honoring them; I couldn't get it to recognize an update in three days, so I ditched that plan. But then I noticed that, when I received Outlook appointments at GMail, they came with "Add to GCal" options. Since my existing calendar had already been synced (via Google Calendar Sync on my Fusion WinXP desktop), I realized that I can just accept each new appointment twice to keep both calendars in sync. Again, kludgy, but suitable until something better comes along.
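If you want to diagnose that refresh problem for yourself, a quick sketch like the following shows which caching headers the publishing host actually returns for the .ics file; the URL is a hypothetical placeholder for wherever your calendar is published.

```python
# Quick diagnostic sketch: see which caching/refresh headers a published
# iCalendar URL returns, i.e. whether the host gives a subscriber like
# Google Calendar any hint that the file has changed. The URL below is a
# hypothetical placeholder for your own published calendar.
import urllib.request

ICS_URL = "https://icalexchange.example.com/public/me/work.ics"  # placeholder

req = urllib.request.Request(ICS_URL, method="HEAD")
with urllib.request.urlopen(req) as resp:
    for header in ("Last-Modified", "ETag", "Cache-Control", "Expires"):
        print(header + ":", resp.headers.get(header, "(not sent)"))
```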

Contacts: As mentioned above, there’s a contact sync app available, but it requires Exchange 2007 with web services enabled. I’m going to hold off. I have about 200 work contacts, and about 350 more personal/Nonprofit contacts, so my GMail contacts list is much larger than the one at work. I’m going to maintain them separately for the time being. So, no definitive answer here, but keep your eye on Funambol, who promise to have this going quickly.

It’s only a matter of time before someone licenses and resells Microsoft Activesync for Android, and other sync options will pop up like crazy. But, if you’re like me, and couldn’t wait for this phone, I hope there’s enough here to get you going. Please be sure to leave additional and better ideas in the comments.

Biting The Hand – Conclusion

This article was originally published on the Idealware Blog in October of 2008.

This is the final post in a three part series on Microsoft.  Be sure to read Part 1, on the history/state of the Windows operating system, and Part 2, on developing for the Microsoft platform.

Two More Stories – A Vicious Exchange

In late 2006, I moved an organization of about 500 people from Novell Groupwise to Microsoft Exchange 2007. After evaluating the needs, I bought the standard edition, which supported message storage up to 16GB (our Groupwise message base took up about 4GB). A few days after we completed the migration, which included transferring the Groupwise messages to Exchange, an error popped up in the Event Viewer saying that our message store was larger than the 16GB limit, and, sure enough, it was – who knew that Microsoft messages were so much larger than Groupwise messages?

The next day, Event Viewer reported that our message store was too large and that it would be dismounted at 5:00 am, meaning that email would be, essentially, disconnected. Huh? I connected remotely the next morning and remounted at about 5:10. I also scoured the Microsoft Knowledgebase, looking for a recommendation on this, and found nothing. I called up my vendor and ordered the Enterprise version of Exchange, which supports a much larger message store. A couple of days later, same thing. My new software hadn't arrived yet. The next day, the message changed, saying that our message store was too large and would be dismounted randomly! What!? This meant that the server could go down in the middle of the business day. The software arrived, and I tossed the box on my desk and scheduled to come in on Sunday (which happened to be New Year's Day, 2007) to do the upgrade. But when I opened the box, I discovered that my vendor had sent me Enterprise media, yes, but for Exchange Server 2003, the prior version. I was hosed.

Frantic, I went to Google instead of the Knowledgebase and searched. This yielded a blog entry explaining that, with Exchange Server 2007 Service Pack 2 (which I had applied as part of the initial installation), it was now legal to have message stores of up to 75GB. All I had to do was modify a registry entry on the server – problem solved. Wow, who woulda thunk? It certainly wasn't documented anywhere in the Microsoft Knowledgebase.

But here’s my question: What Machiavellian mind came up with the compliance enforcement routine that I experienced, and why was my business continuity threatened by code designed to stop me from doing something perfectly legal under the Service Pack 2 licensing?  This was sloppy, and this was cruel, and this was not supportive of the customer.

Cheap ERP

In early 2007, I hired a consultant to help with assessing and developing our strategic technology plan. This was at a social services agency, and one of our issues was that, since we hired our clients, having separate databases to track client progress and Human Resources/Payroll resulted in large amounts of duplicate data entry and difficult reporting. The consultant and I agreed that a merged HR/client management system would be ideal. So, at lunch one day, I nearly fell off my chair laughing when he suggested that we look at SAP. SAP, for those who don't know, is a database and development platform that large companies use to deploy highly customized and integrated business computing platforms. Commonly referred to as Enterprise Resource Planning (ERP) software, it's a good fit for businesses with the unique needs and ample budgets to support what is, at heart, an internally developed set of business applications. The reason I found this so entertaining was that, even if we could afford SAP, hiring the technical staff to develop it into something that worked for us would be way beyond our means. SAP developers make at least six figures a year, and we would have needed two or more to get anywhere in a reasonable amount of time. It's unrealistic for even a mid-sized nonprofit to look at that kind of investment in technology.

So Microsoft holds a unique position — like SAP or Oracle, they offer a class of integrated products that can run your business. Unlike SAP or Oracle, their products are pretty much what they are out of the box – you can customize and integrate them, at a cost, but you can't, for instance, extend Microsoft's Dynamics HR package into a client management system. But if you had both Dynamics and Social Solutions, which runs on Microsoft SQL Server, you'd have a lot more compatibility and integration capability than we had at our social services org, where our HR system was outsourced and proprietary and the client management software ran on FoxPro.

Bangs for the Buck

So this is where it leaves me – Microsoft is a large, bureaucratic mess of a company with so many developers on each product that one will be focusing on how to punish customers for non-compliance while another is working to keep those same customers compliant. Their product strategy is driven far less by customer demands than by marketing strategy. Their practices have been predatory, and, while that sort of thing seems to be easing, there's still a lot of it ingrained in their culture. When they are threatened — and they are threatened, by Google and the migration from the desktop to the cloud — they're more dangerous to their developers and customers, because they are willing to make decisions that better position them in the market at the cost of our investments.

But Microsoft offers a bargain to businesses that can't — and shouldn't — spend huge percentages of their budget on platform development. They do a lot out of the box, and they have a lot of products to do it with. Most of their mature products — Office, Exchange, SQL Server — are excellent. They're really good at what they do. The affordable alternative to commercial ERP systems like SAP and Oracle is open source, but open source platforms are still relatively immature. Building your web site on an open source CMS powered by PHP or Ruby on Rails might be a good, economical move that leaves you better off, in terms of ease of use and capabilities, than many expensive commercial options. But going open source for finance, HR and client tracking isn't really much of an option yet. The possibly viable alternatives to Microsoft are commercial outsourcers like NetSuite, but how well they'll meet your full needs depends on how complex those needs are – one-size-fits-all solutions tend to work better for small businesses than for medium to large ones.

Finally, it's all well and good to talk about adopting Microsoft software strictly on its merits, but, for many of us, the decision has far more to do with the critical, non-Microsoft applications we run that assume we're using Microsoft products. For many of us, alternatives like Linux as an operating system, OpenOffice or Google Apps for productivity, or PHP as a web scripting language are already nixed because our primary databases and web applications are built on SQL Server and ASP. At the law firm where I work, we aren't about to swap out Word for an alternative without the legal document-specific features that Microsoft richly incorporates into their product. But it leaves me, as the technology planner, in a bit of a pickle. Windows XP, Office 2003/2007, Exchange 2007, SQL Server 2005, and Windows Server 2003 are all powerful, reliable products that we use and benefit from, and the price we paid for them, through Techsoup and their charity licensing, is phenomenal. But as we look at moving to web-based computing, and we embark on custom development to meet information management and communication needs that are very specific to our organization, we're now faced with adopting Microsoft's more dynamic and, in my opinion, dangerous technologies.

This would all be different if I had more reason to trust the vendor to prioritize my organization’s stability and operating efficiency over their marketing goals.  Or, put differently, if their marketing philosophy was based less on trying to trump their competition and more on being a dependable, trustworthy vendor.  They’re the big dog, just about impossible to avoid, and they make a very compelling financial argument — at first take — for nonprofits.  But it’s a far more complicated price break than it seems at first glance.

Biting the Hand Part 2

This article was originally posted on the Idealware Blog in October of 2008.

This is part two of a three part rumination on Microsoft. Today I'm discussing their programming environment, as opposed to the open source alternatives that most nonprofits would be likely to adopt instead. Part one, on Windows, is here: http://www.idealware.org/blog/2008/10/biting-hand-that-bites-me-as-it-feeds.html

Imposing Standards

In the early days of personal computing, there were a number of platforms – IBM PC, Apple Macintosh, Amiga, Commodore, Leading Edge… but the first two were the primary ones getting any attention from businesses. The PC was geeky, with its limited command line interface; the Macintosh was cool, with its graphics. But two things put the PC on top. One was Dan Bricklin's VisiCalc, the first spreadsheet. A computer is a novelty if you have no use for it, and VisiCalc, like its modern equivalents today, was extremely useful. But the bigger reason the PC beat out the Mac so thoroughly was that the Mac had a strict set of rules that had to be followed in order to program for it, whereas anyone could do pretty much anything on a PC. If you knew Assembler, the programming language that spoke directly to the machine, you could start there and create whatever you wanted, with no one at IBM telling you which languages and libraries to use, or what you were allowed or not allowed to do. As Windows has matured and gained the bulk of the desktop operating system market, Microsoft has started emulating Apple, raising the standards and requirements for Windows programming in ways that make it far less appealing to developers.

Unlike the early days, when no one had much market share, Windows is now the standard  business platform, so there are a lot more reasons to play by whatever rules Microsoft might impose.  So, today, being a Windows programmer is a lot like being a Mac programmer.  If you’re going to have the compatibility and certification that is required, you’re going to follow guidelines, use the shared libraries, and probably program in the same tools as every other Windows programmer.  The benefit is standardization and uniformity, things that business computer users really appreciate.

Accordingly, the Microsoft platform, which used to run on pretty much all PCs, now faces competition from Linux and other Unix variants, and for much the same reasons that IBM beat out Apple in those early days. What appeals to Java, PHP, Rails and other open source developers is very much the same thing that brought developers to the PC in the first place, and Microsoft’s arguments for sticking to their platform are much like Apple’s – “it’s safer, it’s well-supported, it’s standardized, so a lot of the work is done for you”.  I would argue with each of these claims.

Is it Safer?

The formal programming environment is supposedly more secure, with compiled code and stricter encoding/encryption of data in their web services model. But it seems that the open source model, with a multitude of eyes on the code of the major apps, is quicker to find and fix security glitches. Microsoft defenders will argue that, because Microsoft lives in a commercial ecosystem with paid training and support, help is more widely available and will continue to be available, whereas open source support and training is primarily community-based and uncompensated. But my experience has been that finding forums, how-tos and code samples for PHP, Python and Rails has always been far easier than finding the equivalent for ASP and C#. In the open source world, all code is always available; in the MS world, you either buy it or you pay someone to teach you.

Is it Easier?

The bar for programming on Microsoft's platform is high. To create a basic web application on the Microsoft platform, or to extend an existing application that supports their web programming standards, you need to know, at a minimum, XML; a language such as Visual Basic or C#; and Active Server Pages (ASP). Modern scripting languages like Ruby on Rails and PHP are high level and relatively easy to pick up; Rails, in particular, supports a rapid application development model that can have a functional application built in minutes. These languages support REST, a simple (albeit less secure) way of transmitting data in web applications. Microsoft depends on SOAP, a more formal and complex method. A good piece comparing REST and SOAP is linked here.
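To illustrate the difference in ceremony, here's a small sketch of the same request made both ways; the endpoint URLs and the GetDonor operation are hypothetical, invented purely for illustration.

```python
# Illustrative sketch: a REST call is just an HTTP request against a URL,
# while a SOAP call wraps the operation in an XML envelope and POSTs it.
# Both endpoints (and the GetDonor operation) are hypothetical.
import urllib.request

# REST: the resource is addressed by the URL itself.
rest_url = "https://api.example.org/donors/1234"
with urllib.request.urlopen(rest_url) as resp:
    print(resp.read().decode("utf-8"))

# SOAP: the operation and its arguments travel inside an XML envelope.
soap_body = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetDonor xmlns="http://example.org/donorservice">
      <DonorId>1234</DonorId>
    </GetDonor>
  </soap:Body>
</soap:Envelope>"""

soap_request = urllib.request.Request(
    "https://api.example.org/DonorService.asmx",
    data=soap_body.encode("utf-8"),
    headers={
        "Content-Type": "text/xml; charset=utf-8",
        "SOAPAction": "http://example.org/donorservice/GetDonor",
    },
)
with urllib.request.urlopen(soap_request) as resp:
    print(resp.read().decode("utf-8"))
```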

Is it Standardized?

Well, this is where I really have the problem. MS controls their programming languages and environments. If you develop MS software, you do it in MS Visual Studio, and you likely code in a C variant or Visual Basic, using MS compilers. Your database is MS SQL Server. Visual Studio and SQL Server are powerful, mature products – I wouldn't knock them. But over the years Microsoft has blazed through a succession of programming standards faster than a cheetah could keep up with, revamping their languages rapidly, changing the recommended methods of connecting to data, and generally making the job of keeping up with their expectations unattainable. So while their platform uses standardized code and libraries so that developers can produce applications with standardized interfaces, the development tools themselves go through constant, dramatic revisions, far more disruptive than the steadier, better-roadmapped evolution of the open source competition.

Mixed Motives

The drivers for this rapid change are completely mixed up with their marketing goals. For example, MS jumped on the web bandwagon years ago with a product called Frontpage. Frontpage was a simplistic GUI tool for creating web pages, and it was somewhat infamous for generating web sites that were remarkably uniform in nature. It was eclipsed completely by what is now Adobe's Dreamweaver. If you try to buy Frontpage today, you'll have a hard time finding it, but it didn't go away; it was simply revised and rebranded. Frontpage is now called "Sharepoint Designer". It's the product that Microsoft recommends Sharepoint administrators and developers use to modify the Sharepoint interface. Mind you, most of your basic Sharepoint modifications can be made from within Sharepoint, and anything advanced can and should be done in Visual Studio. There's no reason to use this product, as most of what it does is easier to do elsewhere.

So it comes down to time, money and risk. The MS route is more complex and more expensive than the open source alternatives. The support and training is certified and industrialized. All of this costs money – the tools, the support, the code samples, and the developers, who generally make $80-150k a year. The platform development is driven by the market, which leads to a question about its sustainability in volatile times for the company. As concluded in part one, Microsoft knows that the bulk of their products will be made obsolete by a move to Software as a Service. The move from OS-based application development to web development has been rocky, and it's not close to finished.

Look for part 3 sometime next week, where I’ll tie this all up.

From Zero to Sixty: What type of Project Management tool is appropriate?

Here’s another recent Idealware entry (from 9/25/2008). Note that the Idealware post has a healthy comment stream.

It seems like every month or two, I happen across a forum thread about project management tools. What works? Can you do it with a wiki? Are they necessary at all? Often, there are a slew of recommendations (Basecamp, Central Desktop, MS Project) accompanied by some heartfelt pleas to stay away from all of them. All of these recommendations are correct, and incorrect.

Project software naysayers make a very apt point: Tools won’t plan a project for you. If you think that buying and setting up the tool is all that you need to do to successfully complete a complex project, you’re probably doomed to fail. So what are the things that will truly facilitate a project-oriented approach, regardless of tools?

  • Healthy communication. The team on the project has to be comfortably and consistently engaged in project status and decisions.
  • Accountability. Team members need to know what their roles are, what deliverables they're accountable for and when, and then deliver them.
  • Clarity, oversight and buy-in. Executives, boards and backers all have to be completely behind the project and the implementation team.

With that in place, Project Management tools can facilitate and streamline things, and the proper tools will be the ones that best address the complexity of the project, the make-up of the team, and the culture of the team and organization.

Traditional project management applications, exemplified by MS Project, tie your project schedule and resources together, applying resource percentages to timeline tasks. So, if your CEO is involved in promoting the plan and acting as a high-level sponsor, then she might be assigned as five percent of the project's total resources, and her five percent will be sub-allocated to the tasks she is assigned to. They track dependencies, and allow you to shift a whole schedule based on the delay of one piece of the plan. If task 37 is "order widget" and that order is delayed, then all actions that depend on deployment of the widget can be rescheduled with a drag-and-drop action. This is all very powerful, but there is a significant cost to defining the plan, initially inputting it, and then maintaining the information. There's a simple rule of thumb to apply: if your project requires this level of tracking, then it requires a full-time Project Manager to track it. If your budget doesn't support that, as is often the case, then you shouldn't even try to use a tool this complex. It will only waste your time.
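A toy sketch of that dependency logic, with made-up tasks and dates, shows what these tools are automating under the hood; nothing here reflects any particular product's internals.

```python
# Toy sketch of dependency-driven scheduling: each task has a duration in
# days and a list of tasks it depends on, and a delay to one task pushes
# out everything downstream of it. Tasks and dates are made up.
from datetime import date, timedelta

# Tasks are listed so that every task appears after its dependencies.
tasks = {
    "order widget":   {"duration": 5, "depends_on": []},
    "install widget": {"duration": 2, "depends_on": ["order widget"]},
    "train staff":    {"duration": 3, "depends_on": ["install widget"]},
}

def schedule(tasks, project_start, delays=None):
    """Return {task: (start, end)}, applying any per-task delays in days."""
    delays = delays or {}
    finish = {}
    plan = {}
    for name, info in tasks.items():
        start = max((finish[d] for d in info["depends_on"]), default=project_start)
        start += timedelta(days=delays.get(name, 0))
        end = start + timedelta(days=info["duration"])
        finish[name] = end
        plan[name] = (start, end)
    return plan

baseline = schedule(tasks, date(2008, 11, 3))
slipped = schedule(tasks, date(2008, 11, 3), delays={"order widget": 7})

# A seven-day slip in the widget order pushes every downstream finish date.
for name in tasks:
    print(name, "finishes", baseline[name][1], "->", slipped[name][1])
```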

Without a dedicated Project Manager, the goal is to find tools that will enhance communication; keep team members aware of deadlines and milestones; report clearly on project status; and provide graphical and summary reporting to stakeholders. If your team is spread out geographically, or comprised of people both inside and outside of your organization, such as consultants and vendors, all the better if the tool is web-based. A centralized plan, calendar, and contact list are a given. Online forums can be useful if your culture supports them. Most people aren't big on online discussions outside of email, so you shouldn't put up a forum if it won't be used by all members. The key is to provide a big-picture schedule that drills down to task lists, and to maintain a constant record of task status and potential impacts on the overall plan. Gantt charts allow you to note key dependencies – actions that must be completed before other actions can begin – and provide a visual reporting tool that is clear and readable for your constituents, from the project sponsors to the public. Basecamp, Central Desktop, and a slew of other web-based options provide these components.

If this is still overkill – the project isn't that complex, or the team is too small and constrained to learn and manage the tools – then scale down even further. Make good use of the task list and calendar functions that your email system provides, and put up a wiki to facilitate project-related communication.

What makes this topic so popular is that there is no such thing as a one-size-fits-all answer, and the quick answer ("Use Project") can be deadly for all but the most complex projects. Understand your goals, understand your team, and choose tools that support them.

Biting The Hand That Bites Me As It Feeds Me? Part 1

This article was first published on the Idealware Blog in October of 2008.

Like many of us, I've been using Microsoft products for a long, long time, and I appreciate many of them. Microsoft is a significant vendor, period, with their dominance in operating systems, productivity applications and, well, most everything else software-related. But they are a particularly compelling vendor in the nonprofit sector. I've often noted that, should Microsoft ever take over the world, they would at least be benevolent dictators. As evidence, we have the great work of the Bill and Melinda Gates Foundation and Microsoft's long-established support for charitable institutions, by way of affordable licensing for 501(c)(3)s and generous software donations, via Techsoup and otherwise.

But, hey, I wouldn't be heaping all of this praise on them if I didn't have a few criticisms, and these are criticisms developed through the eyes of a long-time technology strategist. In addition to using Microsoft software, I've beta-tested it, I've developed on the platform, and I've been deploying it since the early 1990s. I've run into my share of frustrations. Nearly 20 years into my relationship with this vendor, I still find myself ridiculously attracted to and repelled by their applications and platforms. So I want to spend a few blog posts talking about one IT Director's history with Microsoft, and some of the lessons learned and ongoing concerns I have with the software vendor that I've had the longest, largest relationship with in my career.

To start, I just want to let off a little steam about the flagship product, Windows. I’ll follow this up over the next few weeks discussing their development environment and general philosophy, before tying it up with some commentary about the choices we, as nonprofits, can make. There will be no bold conclusion here, or any strong recommendation to dive in or stay away from their products – there are compelling arguments for doing both. But my hope is that I can ruminate a little bit on the main challenges I see in dealing with them.

Early Predatory Business Practices

My IT career started around 1991 at a small San Francisco law firm. Vendor lock-in was big at law firms in the 90s, but the vendor was WordPerfect Corp., not Microsoft. Attorneys swore by WordPerfect for DOS, and bought anything else the company sold. So, when Novell, our networking software vendor, bought DR-DOS and released it as a free, feature-rich MS-DOS replacement for Novell customers, it made little sense not to use it. This proved to be my first — and early — introduction to Microsoft's established predatory practices. MS wrote code into a beta of Windows 3.1 which basically said, "If this isn't running on top of MS-DOS, warn the user that it might crash." This became known as the AARD code, and it wasn't removed from the actual Windows 3.1 release; it was just modified not to display.

Microsoft famously engaged in, or was widely assumed to be engaging in, numerous other unfair practices, from hiding code in Word and Excel that made them faster and more stable on Windows than competing applications, to stealing product ideas, to leveraging their desktop operating system dominance to put competing products out of business, the latter practice being one that they were found guilty of and fined for. The Consumer Federation of America has an excellent article summarizing and referencing all of the real and alleged abuses leading up to the ruling.

Today, Microsoft seems to be a bit less arrogant and more on the up and up. They've made conciliatory moves in the open source community as their dominance in the industry wanes a bit, though the waning is largely in the realm of new technology, such as Software as a Service (SaaS) offerings. On traditional servers and desktops, they're still everywhere.

Dragging the Legacy Behind Them

In the 80s and 90s, Microsoft displayed a brilliant talent for buying or building the right thing at the right time. I might have deployed a competing version of DOS, but I'm one of the very few. They hit the zeitgeist with Windows, which borrowed heavily from the Apple Macintosh (which, in turn, was inspired by work done at Xerox PARC), but the combination of a GUI on top of DOS — an OS that could run on any number of manufacturers' PCs — allowed it to soar to the top of the market very quickly.

Note that MacOS was a graphical OS, whereas Windows was a graphical application that ran on top of DOS, a character-based OS. And even when Microsoft claimed to let go of that legacy with the release of Windows NT (NT for "New Technology"), it wasn't fully severed. I found this out when I hired a consultant to help me with a crashed SCSI drive on a critical server. He brought the wrong replacement drive, but decided that there might be something else he could do. In order to test his theory, he renamed the "C:" drive on my Windows NT 4.0 server to "E:", after which the server could not boot. I couldn't help wondering why an operating system that had supposedly dropped its legacy roots was still crippled by as artificial a limitation as the boot drive requiring the label "C:".

Decades Later…

If this were all ancient history, it would be fine, but when we look at Vista, which was pitched to us as a revolutionary new version of Windows, much like Windows 95 and Windows NT before it, what we really see is a revamped graphical interface on top of a ton of code, some of it likely dating back at least 15 years. They've rewritten the graphics and they've modified the basic code to operate at 64 bits, but they've included all of the 32-bit code as well, to support all of the "legacy applications" that are still being developed. And, believe me, there's 16-bit (Windows 3.x-compatible) and even DOS-compatible code in there as well.

This isn't a Mac vs. Windows article, but there is one thing that Apple has done extremely right that Microsoft should really consider. When MacOS grew to version 9, they canned it. They took a healthy Unix variant (BSD), built a brand new GUI on it, and, while they included OS 9-compatible code in the initial release, it was isolated from the current code base – you had to boot specially into it. This allowed Apple to create a modern GUI that does everything Windows does, and more, on a faster, more stable platform. Early reports on Windows 7 — which Microsoft is rushing out in order to recover from the damage that Vista has done — are that it's pretty much Vista with a face lift. They're not letting go of the legacy, and, as a result, they're asking us to buy faster systems with much more memory in order to subsidize their rushed and sloppy development choices. It doesn't bode well.

But, while Microsoft is continuing down the same superhighway that the Windows code has traveled since the early days of DOS, they are developing their own competition online. The question is, have they decided that investing in actually fixing Windows is too little and too late a strategy in the face of cloud computing and SaaS? It’s an interesting question, because it implies that our choices for staying with the vendor will mean living with Windows, as it is, or trusting Microsoft as an outsourced provider of computing infrastructure. That’s a proposition that I find easier to take from vendors who haven’t tried to lock me out of competing products in the past.

Next up

But this is only the beginning of the discussion. I’ll continue next week on their history with developers and their general philosophy of software, and I’ll come back to this point in the conclusion. Part 2 is here.