Tag Archives: open source

How Easy Is It For You To Manage, Analyze And Present Data?

I ask because my articles are up, including my big piece from NTEN’s Collected Voices: Data-Informed Nonprofits on Architecting Healthy Data Management Systems. I’m happy to have this one available in a standalone, web-searchable format, because I think it’s a bit of a signature work. I consider data systems architecture to be my main talent; the most significant work that I’ve done in my career.

  • I integrated eleven databases at the law firm of Lillick & Charles in the late ’90s, using Outlook as a portal to the Intranet, CRM, documents and voicemail. We had single-entry of all client and matter data that was then, through SQL Server triggers, pushed to the other databases that shared the data. This is what I call the “holy grail” of data: entered once by the person who cares most about it, distributed to the systems that use it, and then easily accessible by staff. No misspelled names or redundant data entry chores.
  • In the early 2000’s, at Goodwill, I developed a retail data management system on open source (MySQL and PHP, primarily) that put drill-down reporting in a web browser, updated by 6:00 am every morning with the latest sales and production data.  We were able to use this data in ways that were revolutionary for a budget-challenged Goodwill, and we saw impressive financial results.
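The trigger-based, single-entry pattern described above can be sketched in miniature. This hypothetical example uses Python’s built-in SQLite rather than the SQL Server of the original system, and the table and column names are invented for illustration:

```python
import sqlite3

# "Enter once, push everywhere": a record entered in the master table
# is copied by a trigger to a downstream system that shares the data.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE clients (client_id INTEGER PRIMARY KEY, name TEXT);  -- master
CREATE TABLE crm_contacts (client_id INTEGER, name TEXT);         -- downstream system

CREATE TRIGGER push_client AFTER INSERT ON clients
BEGIN
    INSERT INTO crm_contacts (client_id, name)
    VALUES (NEW.client_id, NEW.name);
END;
""")

# One entry by the person who cares most about the data...
conn.execute("INSERT INTO clients (name) VALUES ('Acme Corp')")

# ...and the downstream copy appears automatically.
row = conn.execute("SELECT name FROM crm_contacts").fetchone()
print(row[0])  # → Acme Corp
```

A production version would fan out to many systems and handle updates and deletes as well, but the core idea is the same: the database, not the staff, does the redundant entry.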

The article lays out the approach I’m taking at Legal Services Corporation to integrate all of our grantee data into a “data portal”, built on Salesforce and Box. It’s written with the challenges that nonprofits face front and center: how to do this on a budget, and how to do it without a team of developers on staff.

At a time when, more and more, our funding depends on our ability to demonstrate our effectiveness, we need the data to be reliable, available and presentable.  This is my primer on how you get there from the IT viewpoint.

I also put up four articles from Idealware. These are all older (2007 to 2009), but they’re still pretty relevant, although some of you might debate me on the RSS article:

This leaves only one significant piece of my nptech writing missing on the blog, and that’s my chapter in NTEN’s “Managing Technology To Meet Your Mission” book about Strategic Planning. Sorry, you gotta buy that one. However, a Powerpoint that I based on my chapter is here.

The Increasing Price We Pay For The Free Internet

Picture: Rhadaway.

This is a follow-up on my previous post, A Tale Of Two (Or Three) Facebook Challengers. A key point in that post was that we need to be customers, not commodities.  In the cases of Facebook, Google and the vast majority of free web resources, the business model is to provide a content platform for the public and fund the business via advertising.  In this model, simply, our content is the commodity.  The customer is the advertiser.  And the driving decisions regarding product features relate more to how many advertisers they can bring on and retain than how they can meet the public’s wants and needs.

It’s a delicate balance.  They need to make it compelling for us to participate and provide the members and content that the advertisers can mine and market.  But since we aren’t the ones signing the checks, they aren’t accountable to us, and, as we’ve seen with Facebook, ethical considerations about how they use our data are often afterthoughts.  We’ve seen it over and over, and again this week when they backed off on a real names policy that many of their users considered threatening to their well-being.  One can’t help but wonder, given the timing of their statement, how much new competitor Ello’s surge in popularity had to do with the retraction. After all, this is where a lot of the people who were offended by the real names policy went.  And they don’t want to lose users, or all of their advertisers will start working on Ello to make the Facebook deal.

Free Software is at the Heart of the Internet

Freeware has been around since the ’80s, much of it available via Bulletin Boards and online services like CompuServe and AOL. It’s important to make some distinctions here. There are several variants of freeware, and it’s really only the most recent addition that’s contributing to this ethically-challenged business model:

  • Freeware is software that someone creates and gives away, with no license restrictions or expectation of payment. The only business model that this supports is when the author has other products that they sell, and the freeware applications act as loss leaders to introduce their paid products.
  • Donationware is much like Freeware, but the author requests a donation. Donationware authors don’t get rich from it, but they’re usually just capitalizing on a hobby.
  • Freemium is software that you can download for free and use, but the feature set is limited unless you purchase a license.
  • Open Source is software that is free to download and use, as well as modify to better meet your needs. It is subject to a license that mostly ensures that, if you modify the code, you will share your modifications freely. The business model is usually based on providing training and support for the applications.
  • Adware is free or inexpensive software that comes with advertising. The author makes money by charging the advertisers, not necessarily the users.

Much of the Internet runs on open source: Linux, Apache, OpenSSL, etc. Early adopters (like me) were lured by the free software. In 1989, I was paying $20 an hour to download Novell networking utilities from CompuServe when I learned that I could get a command line internet account for $20 a month and download them from Novell’s FTP site. And, once I had that account, I found lots more software to download in addition to those networking drivers.

Adware Ascendant

Adware is now the prevalent option for free software and web-based services, and it’s certainly the model for 99% of the social media sites out there.  The expectation that software, web-based and otherwise, will be free originated with the freeware, open source and donationware authors. But the companies built on adware are not motivated by showing off what they’ve made or supporting a community.  Any company funded by venture capital is planning on making money off of their product.  Amazon taught the business community that this can be a long game, and there might be a long wait for a payoff, but the payoff is the goal.

Ello Doesn’t Stand A Chance

So Ello comes along and makes the same points that I’m making. Their revenue plan is to go to a freemium model, where basic social networking is free, but some features will cost money, presumably business features and, maybe, mobile apps. The problem is that the pricing has to be reasonable and, to many, any price is unreasonable, because they like being subsidized by the ad revenue. The expectation is that social media networks are free.  For a social network to replace something as established as Facebook, they will need to offer incentives, not disincentives, and, sadly, the vast majority of Facebook users aren’t going to leave unless they are severely inconvenienced by Facebook, regardless of how superior or more ethical the competition is.

So I don’t know where this is going to take us, but I’m tired of getting things for free. I think we should simply outlaw Adware and return to the simple capitalist economy that our founders conceived of: the one where people pay each other money for products and services. Exchanging dollars for goods is one abstraction layer away from bartering. It’s not as complex and creepy as funding your business by selling the personal information about your users to third parties. On the Internet, freedom’s just another word for something else to lose.

Drupal 101: More on Modules

This post originally appeared on the Idealware Blog in October of 2009.
Last week, I kicked off this series on setting up a basic web site with Drupal, the popular open source Content Management System. This week we’re going to take a closer look at Modules, the Drupal add-ons that can extend your web site’s functionality. One of the great things about Drupal is that it is a popular application with a large developer community working with and around it. So there are about a thousand modules that you can use to extend Drupal, covering everything from document management to payment processing. The good news: there’s probably one that supports the functionality that you want to add to your web site. Bad news: needle in a haystack?

A potentially easier way to add extra functionality to Drupal is to download a customized version, such as CiviCRM or Open Atrium. We’ll discuss those options later in the Drupal 101 series.

Core Modules

Drupal comes with a number of built-in modules that you can optionally enable. Some are obviously useful, others not so much. Here are some notes on the ones that you might not initially know that you need:

Primary content types like blog, forum and book offer different modules for user input. They can be combined, or you can pick one for a simple site. Since the differences between, say, a blog (an individual journal that people can comment on) and a forum (topical posts that people can reply to) are less distinct than they are in other CMS’s, you might want to pick one or two primary content types and then supplement them with more distinctive ones, such as polls or profiles.

Enabling contact allows your users to send private messages to each other on the site, as well as allowing you to set up site-wide contact forms.

OpenID allows your users more flexibility and control as to how they log into your site. I can’t see a good reason not to enable this on a public site. Since more and more people have profiles on social networking sites and Google, tools like Facebook Connect or Google Friend Connect should be considered as well.

By default, Drupal asks new users for a name and email, but not much else. With the Profiles module, you can create custom fields and allow your users to share information much as they would on a social network.

Taxonomy is also recommended, and I’ll talk more about that next week.

Throttle should be used on any high-traffic site to improve performance.

Use Trigger if you want to set up alerting and automation on your site.

Add-on modules, must haves:

CCK (Content Construction Kit)

More than some CMS’s, Drupal is a content-centric system. It doesn’t simply manage content; the web interface is structured around the content it manages: content types, content metadata (taxonomies), content sources (RSS feeds). Out of the virtual box, Drupal has content types like blog entries, pages and stories. Each content type has a data entry form associated with it. So, if you create a number of stories and you want to read them all, you can browse to the page “story” and they’ll all be listed there. CCK helps you create additional content types and use a fairly robust form-builder to customize the screens.

Views

The Views module lets you customize the appearance and functionality of many of Drupal’s standard screens, and to add your own. Unlike CCK, which is limited to the default layout of content types, Views lets you seriously customize the interface. One easy reason to install Views is in order to take advantage of the Calendar view, which gives you not only a full page, graphical calendar to add events to and display, but also sidebar calendar widgets and upcoming event lists.

Here’s a tip: setting up the calendar view is reasonably tedious. The best write-up explaining it (for Drupal 6) is here: http://drupal.org/node/326061. Drupal’s documentation is okay, but this is step-by-step. It does miss one step, though, which is to add the “Event Date – From date” and “Event Date – To date” to the Fields listing (with friendlier titles, like “From” and “To”). Otherwise, calendar items show on the day they were submitted instead of the day that they are occurring.


There’s a good case to be made that these two modules should be folded into Drupal’s base package, because, in addition to providing very powerful customization features to the core product, there are a whole slew of additional modules that require their presence. If you plan to install a number of modules and/or customize your site, these are pretty much pre-requisites, so just grab and install them.


WYSIWYG Editors

What-You-See-Is-What-You-Get (WYSIWYG), or Rich Text Editor (RTE), modules transform Drupal’s default data input boxes into flexible editors with Word-like toolbars. The WYSIWYG module lets you install the editor of your choice and work with multiple RTE packages, strategically assigning them to different fields and content types. I’ve done well with FCKEditor (recently rebranded CKEditor, thank you!). Most RTE editors are very configurable, but note that, in addition to installing the modules, you need to install the editors themselves, so follow the instructions carefully.

Organic Groups

If you’re building a community site, with hopes of having lots of interactive, social features, Organic Groups gives you the flexibility to not only create all sorts of groups and affiliations on your own, but let your users create their own groups as well, much like Facebook does. For an interactive site, this is essential.


Many modules are available for either integrating with Authorize.net or Paypal, or setting up your own e-commerce site. The aptly named e-Commerce module and Ubercart are among the better known and supported options.

Drupal fans: what modules do you recommend? Which do you install first? Leave your recommendations in the comments.

Next week, we’ll talk about menus, blocks and taxonomies: Drupal 101: Navigation.

Drupal 101

I’ve been doing a lot of work with the open source content management system Drupal lately, and thought I’d share some thoughts on how to get a new site up and running. Drupal, you might recall, got high ratings in Idealware’s March ’09 report comparing open source content management systems. Despite its popularity, there are some detractors who make good points, but I find Drupal to be flexible, powerful and customizable enough to meet a lot of my web development needs.

While you can put together a very sophisticated online community and/or website with it, you can also use it for pretty simple things. For example, the nptech aggregator at nptech.info uses Drupal’s excellent RSS aggregation functions extensively, and not much else. No blog, no forums. But after installing and trying standalone RSS aggregators like Gregarius, I found that Drupal was just as good an aggregator and, if desired, much, much more. Similarly, when co-workers were looking for a site to share documents with optional commenting (to replace an FTP repository), Drupal was a good choice to support a simple task without locking out growth possibilities.
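At its core, what an aggregator like Drupal’s does is pull item titles and links out of RSS feeds on a schedule and store them for display. As a toy illustration (the feed below is made up, and a real aggregator would fetch many feeds over HTTP), the parsing step looks something like this:

```python
import xml.etree.ElementTree as ET

# A hypothetical RSS 2.0 feed, inlined to keep the sketch self-contained.
rss = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>nptech feeds</title>
  <item><title>First post</title><link>http://example.org/1</link></item>
  <item><title>Second post</title><link>http://example.org/2</link></item>
</channel></rss>"""

# Parse the feed and collect (title, link) pairs for each item.
root = ET.fromstring(rss)
items = [(i.findtext("title"), i.findtext("link")) for i in root.iter("item")]

for title, link in items:
    print(f"{title}: {link}")
```

Drupal wraps this basic loop with scheduling, de-duplication, categorization and display, which is where most of the real value lives.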


Installing Drupal can be a three click process or a unix command line nightmare, depending on your circumstances. These days, there are simple options. If you are using a web host, check to see if your site management console is the popular CPanel, and, if so, if it includes the Fantastico utility. Fantastico offers automated installs for many popular open source CMSes, blogs and utilities.

Absent Fantastico, your host might have something similar, or you can download the Drupal source and follow the instructions. Required skills include the ability to modify text files, change file and folder permissions, and create a MySQL database. At a minimum, FTP access to your server, or a good, web-based file manager, will be required.

If you’re installing on your own server, be aware that you’ll need PHP, MySQL and a decent web server, such as Apache, installed (these are generally installed by default on Linux, but not on Windows). If you use Linux, consumer-focused variants like Ubuntu and Fedora will have current versions of these applications, properly configured. More conservative Linux distributions, like Red Hat Enterprise, sometimes suffer from their cautious approach by including software versions that are obsolete. I’m a big fan of CentOS, the free version of Red Hat Enterprise, but I’m frustrated that it comes with an older, insecure version of PHP and only very annoying ways to remedy that.

Up and Running

Once installed, Drupal advises you to configure and customize your web site. There are some key decisions to be made, and the success of the configuration process will be better assured if you have a solid idea as to what your web site is going to be used for. With that clearly defined, you can configure the functionality, metadata, site structure, and look and feel of your web site.

  1. Install and enable Modules. Which of the core modules (the ones included in the Drupal package) need to be enabled, and what additional modules are required in order to build your site? This is the first place I go.
  2. Define the site Taxonomy. While you can build a site without a taxonomy, you should only do so for a simple site. A well structured taxonomy helps you make your site navigable; enhances searching; and provides a great tool for pyramid-style content management, with broad topics on one level and the ability to refine and dig deeper intuitively built into the site.
  3. Structure your site with Blocks. You can define blocks, assign them to regions on a page (such as the sidebars or header) and restrict them to certain pages. On the theory that a good web site navigates the user through the site intelligently, based on what they click, the ability to dynamically highlight different content on different pages is one of Drupal’s real strengths.
  4. Theme your web site. Don’t settle for the default themes — there are hundreds (or thousands) to choose from. Go to the Drupal Theme Garden and find one that meets your needs, then tweak it. You can do a lot with a good theme and the built-in theme design tools, or, if you’re a web developer, you can modify your theme’s PHP and CSS to create something completely unique. Just be sure that you followed the installation suggestions as to where to store themes and modules so that they won’t get overwritten by an upgrade.

This just brushes the surface, so I’ll do some deeper dives into Drupal configuration over the next few weeks.

Evaluating Wikis

This post originally appeared on the Idealware Blog in August of 2009.

I’m following up on my post suggesting that Wikis should be grabbing a portion of the market from word processors. Wikis are convenient collaborative editing platforms that remove a lot of the legacy awkwardness that traditional editing software brings to writing for the web.  Gone are useless print formatting functions like pagination and margins; huge file sizes; and the need to email around multiple versions of the same document.

There are a lot of use cases for Wikis:

  • We can all thank Wikipedia for bringing the excellent crowd-sourced knowledgebase functionality to broad attention.  Closer to home we can see great use of this at the We Are Media Wiki, where NTEN and friends share best practices around social media and nonprofits.
  • Collaborative authoring is another natural use, illustrated beautifully by the Floss Manuals project.
  • Project Management and Development are regularly handled by Wikis, such as the Fedora Project
  • Wikis make great directories for other media, such as Project Gutenberg‘s catalogue of free E-Books.
  • A growing trend is use of a Wiki as a company Intranet.

Almost any popular Wiki software will support the basic functionality of providing user-editable web pages with some formatting capability and a method (such as “CamelCase”) to signify text that should be a link. But Wikis have been exploding with additional functionality that ramps up their suitability for all sorts of tasks:

  • The Floss Manuals team wrote extensions for the Open Source TWiki platform that track who is working on which section of a book and send out updates.
  • TWiki, along with Confluence, SocialText and other platforms, includes (either natively or via an optional plugin) tabular data — spreadsheet-like pages for tracking lists and numeric information. This can really beef up the value of a Wiki as an Intranet or Project Management application.
  • TWiki and others include built-in form generators, allowing you to better track information and interact with Wiki users.
  • And, of course, the more advanced Wikis are building in social networking features.  Most Wikis support RSS, allowing you to subscribe to page revisions. But newer platforms are adding status updates and Twitter-like functionality.
Before choosing a Wiki platform, ask yourself some key questions:
  • Do you need granular security? Advanced Wikis have full-blown user and group-based security and authentication features, much like a standard CMS.
  • Should the data be stored in a database? It might be useful or even critical for integration with other systems.
  • Does it belong on a local server, or in the cloud? There are plenty of great hosted Wikis, like PBWorks (formerly PBWiki) and WikiSpaces, in addition to all of the Wikis that you can download and install on your own Server.  There are even personal Wikis like TiddlyWiki and ZuluPad.  I use a Wiki on my Android phone called WikiNotes for my note-keeping.
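As an aside on the “CamelCase” convention mentioned above: it works because a regular expression can pick link candidates out of plain text. A minimal, hypothetical sketch (real wiki engines use more elaborate patterns and handle escaping):

```python
import re

# Two or more capitalized word-parts run together, e.g. "FrontPage".
CAMEL_CASE = re.compile(r"\b[A-Z][a-z]+(?:[A-Z][a-z]+)+\b")

def find_wiki_links(text):
    """Return the CamelCase words in a passage of wiki text."""
    return CAMEL_CASE.findall(text)

print(find_wiki_links("See the FrontPage and our MeetingNotes for details."))
# → ['FrontPage', 'MeetingNotes']
```

The engine then renders each match as a link to the page of that name, creating the page on first visit — which is what makes wiki linking so frictionless.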

Are you already using a Wiki? You might be. Google Docs, with its revision history feature, may look more like a Word processor, but it’s a Wiki at heart.

Small Footprints, Robotic and Otherwise

Here’s my 11/7/2008 Idealware post, originally published at http://www.idealware.org/blog/2008/11/small-footprints-robotic-and-otherwise.html

As the proud owner of a T-Mobile G1, the first phone out running Google’s Android Mobile Operating System (OS), I wanted to post a bit about the state of the Mobile OS market. I’ve been using a smartphone since about 1999, when I picked up a proprietary Sprint phone that could sync with my Outlook Contacts and Calendar. We’ve come a long way, but we still have a long way to go before the handheld devices in our pockets overcome the compromises and kludges that govern their functionality. My personal experience/expertise is with Palm Treos, Windows Mobile, and now Android; but I have enough exposure to Blackberries and the iPhone to speak reasonably about them. My focus is a bit broader than “which is the best phone?” I’m intrigued by which is the best handheld computing platform, and what that means to cash-strapped orgs who are wrestling with what and how they should be investing in them.

I wrote earlier on establishing Smartphone policies in your org.  The short advice there was that the key Smartphone application is email, and you should restrict your users to phones that offer the easiest, most stable integration with your office email system.  That’s still true.  But other considerations include, how compatible are these phones with other business applications, such as Salesforce or our donor database? How easy/difficult are they to use and support? How expensive are they?  What proprietary, marketing concerns on the part of the vendors will impact our use of them?

The big players in the Smartphone OS field are, in somewhat random order:

  • Palm: PalmOS
  • Nokia: Symbian*
  • RIM: Blackberry OS
  • Microsoft: Windows Mobile
  • Apple: iPhone
  • Google: Android

Palm is the granddaddy of Mobile OSes, and it shows. The interface is functional and there are a lot of apps to support it, but there isn’t much recent development for the platform. Palm has been working on a major, ground-up rewrite for about two years, code-named Nova, but it has yet to come to light, and there’s a serious question now as to whether they’ve taken too long. Whatever they come up with would have to be pretty compelling to grab the attention of customers and developers in light of Apple and Google’s offerings.

  • App Support: C (lots, but not much new; Treos do Activesync)
  • Ease of Use: C (functional, but not modern interface)
  • Cost: C (Not sure if there’s much more than Palm Treo’s available, $200-200 w/new contract)

Nokia’s Symbian platform is notable for being powerful and open source. It’s more popular outside of the US; I’m not sure if there are any Symbian smartphones offered directly from US carriers, which makes them pretty expensive. They do support Activesync, the Microsoft Exchange connector, and have a mature set of applications available.

  • App Support: B (Activesync, lots of apps, but missing some business apps, like Salesforce)
  • Ease of Use: B (Strong interface, great multimedia)
  • Cost: D (Through the roof in the US, where contracts don’t subsidize the expense).

The Blackberry was the first OS to do push email, and it gained a lot of market and product loyalty as a result.  But, to get there, they put up their own server that subscribes to your email system and then forwards the mail to your phone.  This was great before Microsoft and Google gave us opportunities to set up direct connections to the servers.  Now it’s a kludge, offering more opportunities for things to break.  They do, however, have a solid OS with strong business support – they are either on top or second to Microsoft (with Apple charging up behind them) in terms of number of business apps available for the platform.  So they’re not going anywhere, they’re widely available, and a good choice if email isn’t your primary smartphone application.

  • App Support: A- (lots of everything except Activesync)
  • Ease of Use: B (Solid OS that they keep improving)
  • Cost: B (Range of models at decent prices)

Windows Mobile has broad third party support and powerful administrative functions. It comes with Activesync, of course. There are tons of smartphones running it, more than any other OS. But the user interface, in this writer’s opinion (which I know isn’t all that pro-Microsoft, but I swear I’m objective), is miserable. With Windows Mobile (WinMo) 5, they made a move to emulate the Windows Desktop OS, with a Start Menu and Programs folder. This requires an excessive amount of work to navigate. If you use more than the eight apps (or fewer, depending on model/carrier), you have your work cut out for you to run that ninth app. And the notification system treats every event — no matter how trivial — as something you need to be interrupted for and acknowledge. It’s hard to imagine how Microsoft is going to compete with this clunker, and you have to wonder how the millions they spend on UI research allowed them to go this route.

  • App Support: A (tons of apps out there)
  • Ease of Use: D (the most clunky mobile OS.  Period.)
  • Cost: A (The variety of phones means you get a range of prices and hardware choices)

Apple’s iPhone represents a leap in UI design that instantly placed it on top of the pack.  Best smartphone ever, right out of the first box.  Apple clearly read the research they commissioned, unlike Microsoft, and thought about how one would interact with a small, restricted device in ways that make it capable and expansive.  The large, sensitive touch screen with multi-touch capabilities rocks.  The web browser is almost as good as the one you use on your desktop (and this is important – web browsers on the four systems above are all very disappointing – only Apple and Google get this right).  The iPhone really shines, of course, as a multimedia device.  It’s a full-fledged iPod and it plays videos as well as a handheld device could.  As a business phone, it’s adequate, not ideal.  While it supports Activesync and has great email and voicemail clients, it lacks a physical keyboard and cut+paste — features that all of their competitors provide (although the keyboard varies by phone model).  So if you do a lot of writing on your phone (as I do), this is a weak point on the iPhone.

  • App Support: A (it’s still pretty new, but development has been fast and furious)
  • Ease of Use: A- (Awesome, actually, except for text processing)
  • Cost: B (since they dropped it to $199).

Android is Google’s volley into the market, and it stands in a class with Apple that is far above the rest of the pack. The user interface is remarkably functional and geared toward making all of the standard things simple to do, even with one hand. The desktop is highly customizable, allowing you to put the things you use most just a touch away. This phone is in a class with the iPhone, but each has made a few design choices that balance the two out. The iPhone makes better use of the touch screen, with multi-touch features that Google left out. But the iPhone has a far less customizable interface. And, of course, the first Android phone has a full keyboard and (limited) cut and paste. It is, however, brand new, and while I’ll discuss the future below, right now the third party app market is nascent. Today, this phone is best suited for early adopters.

  • App Support: C (it will be A in a year or so)
  • Ease of Use: A
  • Cost: A (G1’s are selling for as low as $150w/new plan)

The big question, if you’re investing in a platform, is where are these all going?  Smartphone operating systems are more plentiful and competitive than the desktop variety, where Windows is still the big winner with Apple and the Unix/Linux variants pushing to get in.  But the six systems listed above are all widely deployed.  Palm and Nokia have the least penetration and press these days, but they’re far from knocked out.  Nokia could make a big push to get Symbian into the market and Palm’s Nova could prove to be really compelling — at one point, Palm was king of these devices.  Today, the interesting battle is between the other four, Microsoft, RIM, Apple and Google.  Of these four, all but Android are commercial OSes; Android is fully open source.  RIM and Apple are hardware/software manufacturers, building their own devices and not licensing their OSes to others.  Windows Mobile and Android are available for any hardware manufacturer to deploy.  This suggests two things about the future:

Proprietary hardware/software combos have a tenuous lead.  RIM and Apple are at the top of the market right now.  Clearly, being able to design your OS and hardware in tandem makes for smoother devices and more reliability.  But this edge will wane as hardware standards develop (and they are developing).  At that point, the variety of phones sporting Windows and Google might overwhelm the proprietary vendors.  Apple is big now, but this strategy has always kept them in a niche in the PC market.  They dominate in the MP3 player world, but they got that right and made a killing before anyone could catch up; that edge doesn’t seem to be as strong in the mobile market.

Open Source development won’t be tied to the manufacturer’s profit margin. Android’s status as open source is a wild card (Nokia is Open Source, too, so some of this applies).  Apple and Microsoft have already alienated developers with some of their restrictive policies.  If Android gets wide adoption, which seems likely (Sprint, Motorola, HTC and T-Mobile are all part of Google’s Open Handset alliance, and both AT&T and Verizon are contemplating Android phones), the lack of restrictions on the platform and the Android market (Google’s Android software store, integrated with the OS) could grab a significant percentage of the developer’s market.  I’ve been pleased to see how quickly apps have been appearing in the first few weeks of the G1’s availability.

If I were Microsoft, I’d consider isolating the WinMo development team from the rest of the campus. Trying to leverage our familiarity with their desktop software has resulted in a really poor UI, but their email/groupware integration is excellent. They need to dramatically rethink what a smartphone is — it does a lot of the same things that a computer does, but it isn’t a laptop. Apple should be wondering whether their “develop your app and we’ll decide whether you can distribute it when you’re finished” approach can stand up to the Android threat. They need to review their restrictive policies. RIM has to fight for relevance as the customer loyalty they built up with their early email superiority fades — did you notice that Palm and RIM are the only names in our list that don’t have huge additional businesses to leverage? And we, the smartphone users, need to consider whether supporting Android — which has lived up to a lot of its promise, so far — isn’t a better horse for us to back, because it’s open and extendable without the oversight of any particular vendor.

* I have to own up that I’m least familiar with Symbian; a lot of my analysis here is a best guess, based on what I do know.

Should Non-profits Seed Software Development?

There were a ton of interesting side topics that came up at the Salesforce Non-Profit Roadmap event, but a few hit on some related themes that have long interested me, and they can be summed up in two basic, but meaty, questions:

1. Why isn’t there more collaboration between non-profits and open source software developers?

2. Should non-profits seed software development?

You’d think that open source and mission-focused organizations would be a natural fit, given that both share some common ethics around openness, collaboration, sharing and charity, and, let’s face it, both have challenging revenue models that often depend on the charity of others. And I think that’s the rub — simpatico they may be, but non-profits need partners to satisfy their needs, not share them. So when Microsoft, Salesforce, Cisco or some other high-powered tech company throws a significant bone (and these companies are very supportive), they can take it without putting their sustainability at risk. And I like to think that their charity is returned in more ways than the obvious support of our missions: non-profits can take risks and do some creative things that profit-oriented companies shouldn’t. When it became strikingly clear to me that Salesforce had data management goals way beyond CRM (the evening that Marc Benioff told me that he was very interested in Goodwill’s inventory management challenges), it pretty quickly occurred to me that there would be a mutually beneficial opportunity if Goodwill wanted to pilot some of Salesforce’s development in that new territory.

The Roadmap session was stimulating on a number of levels – if I weren’t about to get extremely busy on my own sustainment pursuits, I could probably blog non-stop on it. One of the fun things was systematically determining exactly how non-profits differ in our software needs from the software-consuming world at large. There are clear needs for fund development, case management, grant reporting/management, and advocacy that aren’t germane to the standard business world. And the general market for non-profit specific software has some limitations, as I often mention. At Goodwill, I searched high and low for a Workforce Development case management system that sat on an open platform. It doesn’t, to my knowledge, exist – every option out there limits the client’s ability to integrate data from and to other systems, and most of them have severely limited reporting capabilities. Ironically, one of the worst offenders is the system that Goodwill International commissioned and sold to the members.

If the time hasn’t come, then it’s about to – non-profits can no longer afford to lock up their data in inflexible systems. Business management is not about silos. Success lies in your ability to learn from the data you collect, and inter-relate data between disparate systems. It’s not about how many clients you served. It’s about the cost of serving each of those clients and the effectiveness of your methods. You need systems that talk to each other and affordable ways to correlate data. So if the existing vendors don’t value this — or, worse, have built their business models on keeping you locked into their platforms by limiting your access to the data — then you need alternatives. And since Microsoft will discount their own software, but won’t fund other vendors, you need to consider if you shouldn’t be putting aside some of your hard-earned donations toward funding that development.

Mapping NP Salesforce

Day one of the Salesforce Roadmap session was a well-crafted, but fairly standard run at typical strategic planning. Hosted by Aspiration’s ever-able Gunner (who I seem to run into everywhere lately), we had a group of about 40 people: five or six from Salesforce/Salesforce Foundation, five to six NP staff, and an assortment of Salesforce consultants. While I’m a consultant these days, I maintain a bit of a staff perspective, as my primary experience with Salesforce was to roll it out for SF Goodwill. The day consisted of breaking up into small teams and hammering out what works for our sector, what doesn’t, what could be done, and building all of this into a set of possible roadmaps that would address non-profit needs. The most striking thing about the outcome was that we had six groups design those roadmaps, and we largely all came up with the exact same things.

So, what are they?

Templates. In 2005, Salesforce developed a template for non-profits that everyone admits was pretty lame. Most of the consultants advised against using it. In 2006, Tucker MacLean, at the time a Fellow with the Foundation, redesigned it into something far more substantial – but still problematic, because non-profits are far too diverse in their structure and needs to fit a single template. The template in place transforms Salesforce into a donation management application. But I would argue that deploying Salesforce strictly as a fund development tool is short-sighted, and possibly disadvantageous when there are so many choices for software that was developed for that purpose, not twisted to it. The reason to deploy Salesforce is because it can handle the fund development and do so much more.

So, roadmap 1 is to move away from the one-size-fits-all template to something far more modular.

Road map 2 is around the community, or eco-system that supports the non-profit Salesforce adopters. And I think this is where the most meaningful changes can occur. This is about shared development — should NP Salesforce have an Appexchange of its own, one that acts more like Sourceforge? Can the consultant community adopt standards for how we deploy, and can Salesforce support us in any innovative ways? And can best practice, case studies, and non-profit specific training and documentation be collected in one place?

Third was the product itself, which I really don’t think non-profits can or should influence all that heavily. I don’t believe that our platform issues are unique. But we do want to see new things like document management and Google Apps integration; we would really appreciate a customer portal, stronger ties to CMSs and web sites, and stronger integration with our external applications.

What interests me is the dual need for this very open, malleable platform and the dire need non-profits have for out-of-the-box functionality. Currently, Salesforce is a very worthwhile investment, but it’s not a light investment for a tech- and cash-strapped organization. The integrators working with it are frustrated by how much programming they have to do to support some very basic functionality.

But it says worlds that Salesforce is approaching this by inviting the community to advise them. This somewhat techy gathering will be followed up by a survey for the non-profit users at large. Ask yourself, how often does a large, corporate software company ask you directly to give input into their development? Or, if they do, do you think they actually listen? Once again, Salesforce is modeling an approach to doing business that has far more in common with the open source world than the for-profit. More on this later.

Rails Wrap-up

So, I came to this Rails conference looking for a few things. It’s not over, but I think I’ve got a good sense what I’ll walk away with tomorrow.

I started to learn a bit about Rails while considering joining a software start-up (in the non-profit space). I spent a month hammering away with a few O’Reilly books and a sample project, then got pulled away by real world concerns like starting up my new career fast so my family won’t starve. I got far enough to get the concepts and philosophy, master the innovative database management layer (ActiveRecord), and start an app that I plan to finish and publish as part of Techcafeteria someday. Along the way, I loved the rapid development features and recognized Rails as a bit of a conceptual leap in programming/scripting, one that values the efficiency of following conventions over hand-coding everything. Being oriented toward finding the fastest paths to the best results, I was also intrigued by how Rails builds Ajax functionality into the code (I just never bothered to get beyond the basics of Javascript, preferring server-side programming, a bias I now regret). But I also grew concerned about the platform’s speed and scalability, concerns that my friends at Social Source Commons (SSC) would second, I suspect.

So, the four areas that the conference could have helped me with, and how it did:

  1. Learning more of the scripting language. Not so much — maybe a referral to the book I’m missing that will glide me right over that hump.
  2. Ajax intro – pretty good. I attended a few sessions on Prototype and Scriptaculous that gave me a far better handle on how they work.
  3. Ruby Scaling — an awesome session on the proxy cache and other options out there to speed up Rails, with pointers to what bottlenecks it. This was likely the most valuable thing, and I’ll be contacting Gunner to offer to take a look at the SSC platform and see if we can apply some of what I learned.
  4. Where it’s going, as I reported on yesterday. Among web scripting languages, PHP and ASP/.NET are the kings today. My prediction is that Ruby on Rails will eclipse them, and gain broad adoption among web 2.0 developers and corporations looking for in-house app development tools. The main limitation – performance – is being addressed and will be fixed, no question.

The benefit of having a functional application roughly 60 seconds after you think of a name for it is phenomenal, and the developers are completely geared toward continuing to make it the out of the box solution for speedy delivery of standards-based, current tech web applications.

Instant Open API with Rails 2.0

Day 2 at the Ruby on Rails conference – after the Keynote.

My main focus is on technology trends that allow us all to make better use of the vast amounts of information that we store in myriad locations and formats across diverse systems. The new standards for database manipulation (SQL), data interchange (XML) and data delivery (RSS) are huge developments in an industry that has traditionally offered hundreds of different ways of managing, exporting and delivering data, none of which worked particularly well — if at all — with anybody else’s method. The technology industry has tried to address this with one-size-fits-all options — Oracle, SAP and the like, offering Enterprise Resource Planning (ERP) systems that aim to be all things to all people. But these are expensive options that require a stable of high-paid programmers on hand to develop for them. I strongly advocate that we don’t need to have all of our software on one platform, but that all data management systems have to support standardized methods of exchanging information. I boil it all down to this:

It’s your data. Data systems should not restrict you from doing what you want to do with your data, and they should offer powerful and easy methods of accessing the data. You can google the world for free. You shouldn’t have to pay to access your own donor information in meaningful ways.

How can the software developers do this? By including open Application Programming Interfaces (APIs) that support web standards.
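To make that concrete, here’s a quick sketch of what consuming a standards-based XML feed looks like in Ruby, using the standard library’s REXML parser. The donor feed and field names here are invented for illustration:

```ruby
# A hypothetical donor feed; any system exposing data over an open,
# XML-based API could produce something like this.
require 'rexml/document'

xml = <<XML
<donors>
  <donor><name>Jane Smith</name><total>250</total></donor>
  <donor><name>Lee Wong</name><total>400</total></donor>
</donors>
XML

doc = REXML::Document.new(xml)

# Pull every donation total out of the feed and sum them.
totals = doc.elements.to_a('//donor/total').map { |e| e.text.to_i }
puts "Total raised: #{totals.inject(0) { |sum, t| sum + t }}"
```

Any vendor that exposes your data this way lets you do your own analysis in a dozen lines, rather than paying for a custom report.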

So what does this have to do with Ruby on Rails? At the Keynote this morning, David Heinemeier Hansson showed us the improvements coming up in Ruby on Rails 2.0. And he started with a real world example: an address book. Bear with me.

  1. He created the project (one line entered at a command prompt).
  2. He created the database (another line).
  3. He used Rails’ scaffolding feature to create some preliminary HTML and code for working with his address book (one more line).
  4. He added a couple of people to the address book.

At this point, with a line or so of code, he was able to produce HTML, XML, RSS and CSV outputs of his data. The new scaffolding in 2.0 automatically builds the API. I could get a lot more geeky about the myriad ways that Ruby on Rails basically ensures that your application will be open out of the box, but I think that says it well.
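For the curious, the heart of that instant API is the respond_to block that the 2.0 scaffold generates in each controller. This is a from-memory approximation with an illustrative model name, not the exact generated code:

```ruby
# Sketch of a Rails 2.0 scaffold-generated controller action (approximate).
class PeopleController < ApplicationController
  # GET /people and GET /people.xml hit the same action;
  # the requested format picks the output.
  def index
    @people = Person.find(:all)
    respond_to do |format|
      format.html                            # renders index.html.erb
      format.xml { render :xml => @people }  # the same data, as XML
    end
  end
end
```

Adding another format line is all it takes to serve the same records as RSS or CSV, which is what makes the "instant API" claim more than marketing.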

Think of what this means to the average small business or non-profit:

  • You need a database to track, say, web site members, and you want to further integrate that with your CRM system. With Rails, you can, very quickly, create a database; generate (via scaffolding) the input forms; and easily export all data to CSV or XML, either of which can be imported into a decent CRM.
  • You want to offer newsfeeds on your web site. Create the simple database in Rails. Generate the basic input forms. Give access to the forms to the news editors. Export the news to RSS files on your web server.
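The export half of that first scenario can be sketched in plain Ruby with nothing but the standard library; the member records and field names below are made up for the example:

```ruby
# Plain-Ruby sketch: export the same records as CSV and as XML.
# (Member data and fields are invented for illustration.)
require 'csv'

members = [
  { 'name' => 'Jane Smith', 'email' => 'jane@example.org' },
  { 'name' => 'Lee Wong',   'email' => 'lee@example.org'  },
]
fields = ['name', 'email']

# CSV, ready for import into a decent CRM
csv = CSV.generate do |out|
  out << fields
  members.each { |m| out << fields.map { |f| m[f] } }
end

# Minimal XML, built by hand to keep the example dependency-free
xml = "<members>\n" +
      members.map { |m|
        '  <member>' + fields.map { |f| "<#{f}>#{m[f]}</#{f}>" }.join + '</member>'
      }.join("\n") +
      "\n</members>"

puts csv
puts xml
```

In Rails itself, of course, the scaffolding hands you both formats without writing even this much.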

This is powerful stuff, and, as I said, an instant API, meaning that it can meet all sorts of data management needs, and even act as an intermediary between incompatible systems. I still have some reservations about Rails as a full-fledged application-development environment, mostly because its performance is slow, and, while the keynote mentioned some things that will address speed in 2.0, notably a smart method of combining and compressing CSS and Javascript code, I didn’t hear anything that dramatically addresses that problem. But, as a platform, it’s great to see how it makes actively including data management standards a native output of any project, as opposed to something that the developer must decide whether or not to do. And, as a tool, it might have a real home as a mediator in our data integration disputes.