September 25

The End Of NPTech (.INFO)

After eight years, I’ve decided to shutter the nptech.info website, which will also disable the @nptechinfo twitter feed that was derived from it.  Obviously, Twitter, Facebook and Google Plus have made RSS aggregation sites like nptech.info obsolete. Further, as Google ranks links from aggregators lower and lower on the optimization scale, it seems like I might be doing more harm than good by aggregating all of the nptech blogs there. It will be better for all if I spend my efforts promoting good posts on social media, rather than automatically populating a ghost town.

Long-time Techcafeterians will recall that NPTECH.INFO used to be a pretty cool thing. The history is as follows:

Around 2004, when RSS first started getting adopted on the web, a very cool site called Del.icio.us popped up.  Delicious was a social bookmarking site, where you could save links with keywords and descriptions, and your friends (as well as the rest of the Delicious userbase) could see what you were sharing. Smart people like Marnie Webb and Marshall Kirkpatrick agreed that they would tag articles of interest to their peers with the label “nptech”. Hence, the origin of the term. They let about 50 friends know and they all fired up their newsreaders (I believe that Bloglines was state of the art back then — Google Reader was just a glimmer in some 20%er’s eye).

Understand, sharing information by keyword (#hashtag) is what we are all doing all of the time now.  But in 2005, it was a new idea, and Marnie’s group was among the first to see the potential.

I picked up on this trend in 2005.  At lunch one day, Marnie and I agreed that a web site was the next step for our experiment in information referral.  So I installed Drupal and registered the domain and have kept it running (which takes minimal effort) ever since.  It got pretty useless by about 2009, but around that time I started feeding the links to the @nptechinfo Twitter account, and it had a following as well.

Yesterday, I received an email asking me to take down an article that included a link to a web site.  It was an odd request — it seemed like a very 2001, “what is this world wide web thing?” sort of request: “You don’t have permission to link to our site”.  Further digging revealed that these were far from net neophytes; they were SEO experts who understood that a click on the link from my aggregator was being misinterpreted by Google as a potential type of link fraud, thus impairing their SEO.  I instantly realized that this could be negatively impacting all of my sources — and most of my sources are my friends in the nptech community.

There is probably some way that I could counter Google’s assumption about the aggregator.  But the site averages fewer than three visitors a day. So, nptech.info is gone, but the community referring nptech information is gigantic and global.  It’s no longer an experiment, it’s a movement.  And it will long outlive its origins.

February 23

NPO Evaluation, IE6, Still Waters for Wave

[Oops! Forgot to publish this Idealware post from late January...]

Here are a few updates on topics I’ve posted about in the last few months:

Nonprofit Assessment

The announcement that GuideStar, Charity Navigator and others would be moving away from the 990 form as their primary source for assessing nonprofit performance raised a lot of interesting questions, such as “How will assessments of outcomes be standardized in a way that is not too subjective?” and “What will be required of nonprofits in order to make those assessments?” We’ll have a chance to get some preliminary answers to those questions on February 4th, when NTEN will sponsor a phone-in panel discussion with representatives of GuideStar and Charity Navigator, as well as members of the nonprofit community. The panel will be hosted by Sean Stannard-Stockton of Tactical Philanthropy, and will include:

I’ll be participating as well. You can learn more and register for the free event with NTEN.

The Half-Life of Internet Explorer 6

It’s been quite a few weeks as far as headlines go, with a humanitarian crisis in Haiti; a dramatic election in Massachusetts; a trial to determine if California’s gay marriage-banning proposition is, in fact, discriminatory; high-profile shakeups in late night television; and word of the Snuggie, version 2 all competing for our attention. An additional, fascinating story is unfolding with Google’s announcement that they might pull their business out of China in light of a massive cybercrime against critics of the Chinese regime that, from all appearances, was either performed or sanctioned by the Chinese government. There’s been a lot of speculation about Google’s motives for such a dramatic move, and I fall in the camp that says, whatever their motives, it’s refreshing to see a gigantic U.S. corporation factor ethics into a business decision, even if it’s unclear exactly what the complete motivations are.

As my colleague Steve Backman fully explains here, there’s been some fallout from this story for Microsoft. First, like Google and Yahoo!, Microsoft operates a search engine in China and submits to the Chinese government’s censorship filters. They’ve kept mum about their feelings on the cyber-attack. Google’s analysis of that attack reveals that Gmail accounts were hacked and other breaches occurred via security holes in Internet Explorer, versions six and up, that allow a hacker to upload programs and take control of a user’s PC. As this information came to light, France and Germany both issued advisories to their citizens that switching to a browser other than Internet Explorer would be prudent. In response, Microsoft has issued a statement recommending that everyone upgrade from Internet Explorer version 6 to version 8, the current release. What Microsoft doesn’t mention is that the security flaw exists in versions seven and eight as well as six, so upgrading won’t protect you from the threat, although they just released a patch that hopefully will.

So, while their reasoning is suspect, it’s nice to see that Microsoft has finally joined the campaign to retire this old, insecure browser that is incompatible with web standards.

Google Wave: Still Waters

I have kept Google Wave open in a tab in my browser since the day my account was opened, subscribed to about 15 waves, some of them quite well populated. I haven’t seen an update to any of these waves since January 12th, and really only one wave has gotten any updates at all in the past month. I can’t give away the invites I have to offer. The conclusion I’m drawing is that, if Google doesn’t do something to make the Wave experience more compelling, it’s going to go the way of a Simply Red B-side and fade from memory. As I’ve said, there is real potential here for something that puts telecommunication, document creation and data mining on a converged platform, and that would be new. But, in its current state, it’s a difficult-to-use substitute for a sophisticated Wiki. And, while Google was hyping this, Confluence released a new version of their excellent (free for nonprofits) enterprise Wiki that can incorporate (like Wave) Google gadgets. That makes me want to pack up my surfboard.

Category: idealware, nptech, Open APIs, strategy, techcafeteria
February 21

Why Google Buzz Should Be Your Blog

[Image: Buzzcafeteria]
Now, you might think that’s a crazy idea, but I think Buzz is about 80% of the way there. Last week, in my Google’s Creepy Profiles post, I made a suggestion (that someone at Google has hopefully already thought of) that it wouldn’t take much to turn a Profile into a full-fledged biography/lifestreaming site.  Just add some user-configurable tabs that can contain HTML or RSS-fed content, and some capability to customize the style of the profile.  Since I wrote that, I’ve been using Buzz quite a bit and I’ve really been appreciating the potential it has to deepen conversations around web-published materials.

I think some of my appreciation for Buzz comes from frustration with Google’s previous, half-hearted attempts to make Google Reader more social. If you use Reader heavily, then you know that you can share items via a custom, personal page and the “People You Follow” tab in Reader. You also know that you can comment on items and read others’ comments in the “Comments View”.  But it’s far from convenient to work with either of these sharing methods.  Once you link your Reader shared items to Buzz, though, you aren’t using Reader’s awkward interface to communicate; you’re using Buzz.  And Buzz, for all of Google’s launch-time snafus, is an easy-to-use and powerful communications tool, merging some of the best things about Twitter and Facebook.

So, how is Buzz suitable for a blog?

  • It’s a rich editing environment with simple textile formatting and media embedding, just like a blog.
  • Commenting — way built-in.
  • RSS-capable – you can subscribe to anyone’s Buzz feed.
  • Your Google Profile makes for a decent public Blog homepage, with an “About the Author”, links and contact pages.
  • It’s pre-formatted for mobile viewing

What’s missing?

  • Better formatting options.  The textile commands available are minimal
  • XML-RPC remote publishing
  • Plug-ins for the Google Homepage
  • As mentioned, more customization and site-building tools for the Google Homepage.

Why is it compelling?

  • Because your blog posts are directly inserted into a social networking platform.  No need to post a link, hope people will follow it, and then deal with whatever commenting system your blog uses for responses.
  • Your blog’s community grows easily, again fueled by the integrated social network.
  • Managing comments – no longer a chore!

This is the inverse of adding Google or Facebook’s Friend Connect features to your blog.  It’s adding your blog to a social network, with far deeper integration than Twitter and Facebook currently provide. Once Google releases the promised API, much of what’s missing will start to become available.  At that point, I’ll have to think about whether I want to move this island of a blog to the mainland, where it will get a lot more traffic.  I’ll definitely be evaluating that possibility.

Category: nptech, Open APIs, rss, strategy, techcafeteria
December 23

Get Ready For A Sea Change In Nonprofit Assessment Metrics

[Image: watchdogs.png]

Last week, GuideStar, Charity Navigator, and three other nonprofit assessment and reporting organizations made a huge announcement: the metrics that they track are about to change.  Instead of scoring organizations on an “overhead bad!” scale, they will scrap the traditional metrics and replace them with ones that measure an organization’s effectiveness.

The new metrics will assess:

  • Financial health and sustainability;
  • Accountability, governance and transparency; and
  • Outcomes.

This is very good news. That overhead metric has hamstrung serious efforts to do bold things and have higher impact. An assessment based solely on annualized budgetary efficiency precludes many options to make long-term investments in major strategies.  For most nonprofits, taking a year to staff up and prepare for a major initiative would generate a poor Charity Navigator score, a score that is prominently displayed to potential donors.

Assuming that these new metrics will be more tolerant of varying operational approaches and philosophies, justified by the outcomes, this will give organizations a chance to be recognized for their work, as opposed to their cost-cutting talents.  But it puts a burden on those same organizations to effectively represent that work.  I’ve blogged before (and will blog again) on our need to improve our outcome reporting and benchmark with our peers.  Now, there’s a very real danger that neglecting to represent your success stories with proper data will threaten your ability to muster financial support.  You don’t want to be great at what you do, but have no way to show it.

More to the point, the metrics that value social organizational effectiveness need to be developed by a broad community, not a small group or segment of that community. The move by Charity Navigator and their peers is bold, but it’s also complicated.  Nonprofit effectiveness is a subjective thing. When I worked for a workforce development agency, we had big questions about whether our mission was served by placing a client in a job, or if that wasn’t an outcome as much as an output, and the real metric was tied to the individual’s long-term sustainability and recovery from the conditions that had put them in poverty.

Certainly, a donor, a watchdog, a funder, a nonprofit executive and a nonprofit client are all going to value the work of a nonprofit differently. Whose interests will be represented in these valuations?

So here’s what’s clear to me:

  • Developing standardized metrics, with broad input from the entire community, will benefit everyone.
  • Determining what those metrics are and should be will require improvements in data management and reporting systems. It’s a bit of a chicken-and-egg problem, as collecting the data is a precondition to determining how to assess it, but standardizing the data will assist in developing the data systems.
  • We have to share our outcomes and compare them in order to develop actual standards.  And there are real opportunities available to us if we do compare our methodologies and results.
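To make the standardized-metrics point concrete, here’s a minimal sketch, in Python, of why a shared format matters for comparison. The field names, organizations and numbers are entirely hypothetical, not part of any actual or proposed standard:

```python
# Hypothetical outcome records, as a shared data standard might define them.
records = [
    {"org": "Org A", "program": "job_training", "clients_served": 120, "positive_outcomes": 84},
    {"org": "Org B", "program": "job_training", "clients_served": 200, "positive_outcomes": 150},
]

def outcome_rate(record):
    """An outcome rate is only comparable when every org reports the same fields."""
    return record["positive_outcomes"] / record["clients_served"]

# Because the fields are standardized, cross-organization comparison is trivial.
for r in records:
    print(f'{r["org"]}: {outcome_rate(r):.0%}')
```

The point isn’t the arithmetic; it’s that none of this works until the sector agrees on what “clients served” and “positive outcome” mean and how they are recorded.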

This isn’t easy. It will require that NPOs who have never had the wherewithal to invest in technology systems to assess performance do so.  But, I maintain, if the world is going to start rating your effectiveness on more than the 990, that’s a threat that you need to turn into an opportunity.  You can’t afford not to.

And I look to my nptech community, including Idealware, NTEN, Techsoup, Aspiration and many others — the associations, formal, informal, incorporated or not, who advocate for and support technology in the nonprofit sector — to lead this effort.  We have the data systems expertise and the aligned missions to lead the project of defining shared outcome metrics.  We’re looking into having initial sessions on this topic at the 2010 Nonprofit Technology Conference.

As the world starts holding nonprofits up to higher standards, we need a common language that describes those standards.  It hasn’t been written yet.  Without it, we’ll escape the limited Form 990 assessments only to land on something that might equally fail to reflect our best efforts and outcomes.

Category: idealware, Management, Open APIs, strategy, techcafeteria
November 18

Why Geeks (like Me) Promote Transparency

[Image: Mizukurage.jpg. Public domain image by Takada.]

Last week, I shared a lengthy piece that could be summed up as:

“in a world where everyone can broadcast anything, there is no privacy, so transparency is your best defense.”

(Mind you, we’d be dropping a number of nuanced points to do that!)

Transparency, it turns out, has been a bit of a meme in nonprofit blogging circles lately. I was particularly excited by this post by Marnie Webb, one of the many CEOs at the uber-resource provider and support organization Techsoup Global.

Marnie makes a series of points:

  • Meaningful shared data, like the Miles Per Gallon ratings on new car stickers or the calorie counts on food packaging, helps us make better choices;
  • But not all data is as easy to interpret;
  • Nonprofits have continually been challenged to quantify the conditions that their missions address;
  • Shared knowledge and metrics will facilitate far better dialog and solutions than our individual efforts have;
  • The web is a great vehicle for sharing, analyzing and reporting on data;
  • Therefore, the nonprofit sector should start defining and adopting common data formats that support shared analysis and reporting.

I’ve made the case before for shared outcomes reporting, which is a big piece of this. Sharing and transparency aren’t traditional approaches to our work. Historically, we’ve siloed our efforts, even to the point where membership-based organizations are guarded about sharing with other members.

The reason that technologists like Marnie and I end up jumping on this bandwagon is that the tech industry has modeled the dysfunction of a siloed approach better than most. Early computing was an exercise in cognitive dissonance. If you regularly used Lotus 1-2-3, WordPerfect and dBase (three of the most popular business applications circa 1989) on your MS-DOS PC, then hitting “/”, F7 or “.”, respectively, was what you needed to know in order to close those applications. For most of my career, I stuck with PCs for home use because I needed compatibility with work, and the Mac operating system, prior to OSX, just couldn’t easily provide that.

The tech industry has slowly and painfully progressed towards a model that competes on the sales and services level, but cooperates on the platform side. Applications, across manufacturers and computing platforms, function with similar menus and command sequences. Data formats are more commonly shared. Options are available for saving in popular, often competitive formats (as in Word’s “Save As” offering WordPerfect and Lotus formats). The underlying protocols that fuel modern operating systems and applications are far more standardized. Windows, Linux and MacOS all use the same technologies to manage users and directories, network systems and communicate with the world. Microsoft, Google, Apple and others in the software world are embracing open standards and interoperability. This makes me, the customer, much less of an innocent bystander who is constantly sniped at by their competitive strategies.

So how does this translate to our social service, advocacy and educational organizations? Far too often, we frame cooperation as the antithesis of competition. That’s a common but crippling mistake. The two can and do coexist in almost every corner of our lives. We need to adopt a “rising tide” philosophy that values the work that we can all do together over the work that we do alone, and have some faith that the sustainable model is an open, collaborative one. That means looking at each opportunity to collaborate from the perspective of how it will enhance our ability to accomplish our public-serving goals, and trusting that this won’t result in the similarly-focused NGO down the street siphoning off our grants or constituents.

As Marnie is proposing, we need to start discussing and developing data standards that will enable us to interoperate on the level where we can articulate and quantify the needs that our mission-focused organizations address. By jointly assessing and learning from the wealth of information that we, as a community of practice collect, we can be far more effective. We need to use that data to determine our key strategies and best practices. And we have to understand that, as long as we’re treating information as competitive data; as long as we’re keeping it close to our vests and looking at our peers as strictly competitors, the fallout of this cold war is landing on the people that we’re trying to serve. We owe it to them to be better stewards of the information that lifts them out of their disadvantaged conditions.

Category: idealware, Management, nptech, Open APIs, strategy, techcafeteria
September 16

Swept Up in a Google Wave

[Image: mailbox.jpg. Photo by Mrjoro.]

Last week, I shared my impressions of Google Wave, which takes current web 2.0/Internet staple technologies like email, messaging, document collaboration, widgets/gadgets and extranets and mashes them up into an open communications standard that, if it lives up to Google’s aspirations, will supersede email.  There is little doubt in my mind that this is how the web will evolve.  We’ve gone from:

  • The Yahoo! Directory model – a bunch of static web sites that can be cataloged and explored like chapters in a book, to
  • The Google needle/haystack approach – the web as a repository of data that can be mined with a proper query, to
  • Web 2.0, a referral-based model that mixes human opinion and interaction into the navigation system.

For many of us, we no longer browse, and we search less than we used to, because the data that we’re looking for is either coming to us through readers and portals where we subscribe to it, or it’s being referred to us by our friends and co-workers on social networks.  Much of what we refer to each other is content that we have created. The web is as much an application as it is a library now.

Google Wave might well be “Web 3.0“, the step that breaks down the location-based structure of web data and replaces it completely with a social structure.  Data isn’t stored as much as it is shared.  You don’t browse to sites; you share, enhance, append, create and communicate about web content in individual waves.  Servers are sources, not destinations in the new paradigm.

Looking at Wave in light of Google’s mission and strategy supports this idea. Google wants to catalog, and make accessible, all of the world’s information. Wave has a data mining and reporting feature called “robots”. Robots are database agents that lurk in a wave, monitoring all activity, and then pop in as warranted when certain terms or actions trigger their response.  The example I saw was of a nurse reporting in the wave that they’re going to give patient “John Doe” a peanut butter sandwich.  The robot has access to Doe’s medical record, is aware of a peanut allergy, and pops in with a warning. Powerful stuff! But the underlying data source for Doe’s medical record was Google Health. For many, health information is too valuable and easily abused to be trusted to Google, Yahoo!, or any online provider. The Wave security module that I saw hid some data from Wave participants, but was based upon the time that the person joined the Wave, not ongoing record-level permissions.
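To illustrate what a robot like that is doing under the hood, here’s a sketch of the trigger logic in plain Python. This is not the actual Wave robot API, and the patient record and allergen table are purely hypothetical:

```python
# Hypothetical patient data that a robot might have access to.
patient_allergies = {"John Doe": {"peanuts"}}

# Hypothetical mapping of foods mentioned in a wave to known allergens.
FOOD_ALLERGENS = {"peanut butter": "peanuts", "shrimp": "shellfish"}

def robot_check(patient, message):
    """Scan a new wave message; return a warning if it mentions a food
    the named patient is allergic to, or None if there's nothing to flag."""
    for food, allergen in FOOD_ALLERGENS.items():
        if food in message.lower() and allergen in patient_allergies.get(patient, set()):
            return f"Warning: {patient} is allergic to {allergen}."
    return None

print(robot_check("John Doe", "Giving John Doe a peanut butter sandwich"))
```

The real power (and the real risk) is in where `patient_allergies` lives, which is exactly the Google Health question raised above.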

This doesn’t invalidate the use of Wave, by any means — a wave that is housed on the Doctor’s office server, and restricted to Doctor, Nurse and patient could enable those benefits securely. But as the easily recognizable lines between cloud computing and private applications; email and online community; shared documents and public records continue to blur, we need to be careful, and make sure that the learning curve that accompanies these web evolutions is tended to. After all, the worst public/private mistakes on the internet have generally involved someone “replying to all” when they didn’t mean to. If it’s that easy to forget who you’re talking to in an email, how are we going to consciously track what we’re revealing to whom in a wave, particularly when that wave has automatons popping data into the conversation as well?

The Wave-as-internet-evolution idea supports a favored notion: data wants to be free. Open data advocates (like myself) are looking for interfaces that enable that access, and Wave’s combination of creation and communication, facilitated by simple but powerful data mining agents, is a powerful frontend.  If it truly winds up as easy as email, which is, after all, the application that enticed our grandparents to use the net, then it has culture-changing potential.  It will need to bring the users along for that ride, though, and it will be interesting to see how that goes.

——–

A few more interesting Google Wave stories popped up while I was drafting this one. Mashable’s Google Wave: 5 Ways It Could Change the Web gives some concrete examples to some of the ideas I floated last week; and, for those of you lucky enough to have access to Wave, here’s a tutorial on how to build a robot.

Beta Google Wave accounts can be requested at the Wave website.  They will be handing out a lot more of them at the end of September, and they are taking requests to add them to any Google Domains (although the timeframe for granting the requests is still a long one).

Category: email, idealware, nptech, Open APIs, strategy, techcafeteria
September 8

Is Google Wave a Tidal Wave?

[Image: “The Great Wave off Kanagawa” by Katsushika Hokusai (1760-1849).]

Google is on a fishing expedition to see if we’re willing to take web-surfing to a whole new level.  My colleague Steve Backman introduced us to Google Wave a few months ago. I attended a developer’s preview at Techsoup Headquarters last week, and I have some additional thoughts to share.

Google’s introduction of Wave is nothing if not ambitious.  As opposed to saying “We have a new web mashup tool” or “We’ve taken multimedia email to a new level”, they’re pitching Wave as nothing less than the successor to email.  My question, after seeing the demo, is “Is that an outrageous claim, or a way too modest one?”.

The early version of Google Wave I saw looked a lot like Gmail, with a folder list on the left and a “wave” list next to it. Unlike Gmail, a third pane to the right included an area where you can compose waves, so Wave has three columns to Gmail’s two.

A wave is a collaborative document that can be updated by numerous people in real-time.  This means that, if we’re both working in the same wave, you can see what I’m typing, letter by letter, as I can see what you add. This makes Twitter seem like the new snail mail. It’s a pretty powerful step for collaborative technology. But it’s also quite a cultural change for those of us who appreciate computer-based communications for the incorporated spell-check and the ability to edit and finalize drafted messages before we send them.

Waves can include text, photos, film clips, forms, and any active content that could go into a Google Gadget. If you check out iGoogle, Google’s personal portal page, you can see the wide assortment of gadgets that are available and imagine how you would use them — or things like them — in a collaborative document. News feeds, polls, games, utilities, and the list goes on.

You share waves with any other wave users that you choose to share with.  User-level security is being written into the platform, so that you can share waves as read-only or only share certain content in waves with particular people.

Given these two tidbits, it occurred to me that each wave was far more like a little Extranet than an email message. This is why I think Google’s being kind of coy when they call it an email killer – it’s a SharePoint killer.  It’s possibly a Drupal (or fill in your favorite CMS here) killer.  It’s certainly an evolution of Google Apps, with pretty much all of that functionality rolled into a model that, instead of saying “I have a document, spreadsheet or website to share”, says “I want to share, and, once we’re sharing, we can share websites, spreadsheets, documents and whatever”.  Put another way, Google Apps is an information management tool with some collaborative and communication features.  Google Wave is a communications platform with a rich set of information management tools. It’s Google Docs inverted.

So, Google Wave has the potential to be very disruptive technology, as long as people:

  • Adopt it;
  • Feel comfortable with it; and
  • Trust Google.

Next week, I’ll spend a little time on the gotchas – please add your thoughts and concerns in the comments.

Category: email, idealware, nptech, Open APIs, strategy, techcafeteria
March 10

Both Sides Now

Say you sign up for some great Web 2.0 service that allows you to bookmark web sites, annotate them, categorize them and share them. And, over a period of two or three years, you amass about 1500 links on the site with great details, cross-referencing — about a thesis paper’s worth of work. Then, one day, you log on to find the web site unavailable. News trickles out that they had a server crash. Finally, a painfully honest blog post by the site’s founder makes clear that the server crashed, the data was lost, and there were no backups. So much for your thesis, huh? Is the lesson, then, that the cloud is no place to store your work?

Well, consider this. Say you start up a Web 2.0 business that allows people to bookmark, share, categorize and annotate links on your site. And, over the years, you amass thousands of users, some solid funding, advertising revenue — things are great. Then, one day, the server crashes. You’re a talented programmer and designer, but system administration just wasn’t your strong suit. So you write a painful blog entry, letting your users know the extent of the disaster, and that the lesson you’ve learned is that you should have put your servers in the cloud.

My recent posts have advocated cloud computing, be it using web-based services like Gmail, or looking for infrastructure outsourcers who will provide you with virtualized desktops. And I’ve gotten some healthily skeptical comments, as cloud computing is new, and not without its risks, as made plain by the true story of the Magnolia bookmarking application, which recently went down in flames as described above. The lessons that I walk away with from Magnolia’s experience are:

  • You can run your own servers or outsource them, but you need assurances that they are properly maintained, backed up and supported. Cloud computing can be far more secure and affordable than local servers. But “the cloud”, in this case, should be a company with established technical resources, not some three person operation in a small office. Don’t be shy about requesting staffing information, resumes, and details about any potential off-site vendor’s infrastructure.
  • You need local backups, no matter where your actual infrastructure lives. If you use Salesforce or Google, export your data nightly to a local data store in a usable format. Salesforce lets you export to Excel; Google supports numerous formats. Gmail now supports an Offline mode that stores your mail on the computer you access it from. If you go with a vendor who provides virtual desktop access (as I recommend here), get regular snapshots of the virtual machines. If this isn’t an over the air transfer, make sure that your vendors will provide DVDs of your data or other suitable medium.
  • Don’t sign any contract that doesn’t give you full control over how you can access and manipulate your data, again, regardless of where that data resides. A lot of vendors try to protect themselves by adding contract language prohibiting mass updates and user access, even on locally-installed applications. But their need to simplify support should not come at the expense of your having complete control over how you use your information.
  • Focus on the data. Don’t bend on these requirements: Your data is fully accessible; It’s robustly backed up; and, in the case of any disaster, it’s recoverable.
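As a rough illustration of the nightly-export advice above, here’s a minimal Python sketch. The function, directory name, and sample data are all hypothetical; a real job would pull `rows` from your vendor’s export mechanism (Salesforce’s Excel export, Google’s various formats, etc.) and run on a schedule:

```python
import csv
import datetime
import pathlib

def snapshot_export(rows, backup_dir="backups"):
    """Write a date-stamped CSV snapshot of exported cloud data to local disk.
    `rows` is a list of dicts standing in for whatever your vendor's export returns."""
    path = pathlib.Path(backup_dir)
    path.mkdir(exist_ok=True)
    stamp = datetime.date.today().isoformat()
    outfile = path / f"export-{stamp}.csv"
    with open(outfile, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)
    return outfile

# Run this nightly (cron, Task Scheduler) against your vendor's real export.
backup_file = snapshot_export([{"donor": "Jane Example", "amount": "50"}])
```

The specifics don’t matter; what matters is that a usable copy of your data lands on storage you control, every night, in a format you can open without the vendor.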

Technology is a set of tools used to manage your critical information. Where that technology is housed is more of a feature set and financial choice than anything else. The most convenient and affordable place for your data to reside might well be in the cloud, but make sure that it’s the type of cloud that your data won’t fall through.

August 27

Ubiquitous Blogging

Mozilla.org just released one of the most exciting Firefox add-ons to come down the pike – Ubiquity. This is very alpha – the user interface will definitely mature, so what’s there now is best suited for geeks like me who have always liked command shells and already do things like use the Mac’s Spotlight as their calculator (if you type 2 + 2 in Spotlight, it will tell you it equals 4).

Ubiquity is best described as a macro language for the web, or a personal mashup engine. You assign a hotkey (such as Alt-space or Option-space) and a box comes up into which you can enter Ubiquity commands. I’m not going to tell you all about them – just watch the video:

Ubiquity for Firefox from Aza Raskin on Vimeo.

At this point, Ubiquity’s functionality pretty much requires a Google account – the email, calendar, maps and contacts integration is all with Google’s offerings. I expect that to change rapidly, as developing custom commands for Ubiquity requires only very basic programming.

The use cases that are immediately apparent include adding maps and multimedia content to emails and blog entries (I use Scribefire – this assumes that you compose your blog in your browser); having a lot of info available without having to tab away from the web page you’re on; and making some complex web tasks far more efficient. Mozilla is ambitious, though – they see Ubiquity as the ultimate personal web assistant, one that will someday let you issue a command to book a trip; issue another to set up a multi-party meeting; and, who knows? Vacuum the house and feed the fish. Aza discusses that vision here.

Try Ubiquity out. Install it from here. Let me know what you think, and what use cases you envision for it.

Category: nptech, Open APIs, strategy, techcafeteria
November 17

Shlock and Oh! Facebook’s social dysfunction

I am not a luddite. In fact, I’m a big advocate of most of the concepts of social networking, and a long-time participant. But, about a month ago, a persistent friend roped me into joining Facebook, which, as you no doubt realize, is about the trendiest web site on Earth right now, basking in more than its fair share of memespace. Man, am I hating it.

Facebook is decidedly social. You fill out your profile, connect to your friends, and, from that point on, every time that you or a friend do anything on Facebook, the rest of your community knows about it, as a constantly updating scroll of alerts keeps you up to date. I know that Scott won a Disney trivia quiz, that Holly is now friends with Heather, and that Michelle has been experimenting with Trac, my favorite source code repository software. That’s a lot more info than LinkedIn tells me about my associates when I log on there. I also know, or have good reason to suspect, that a co-worker of mine broke up with his partner recently, because he updated his profile to note that he’s single. That was more info than I really wanted to know…

Most of what can be done on Facebook involves using the custom apps that programmers and pseudo-programmers (like me) can easily develop for the platform. The problem is that the majority of these apps are astoundingly trite. There are hundreds of apps to let you poke your friends and compare your pop culture acumen. But there’s little of substance. I know that what drew the bulk of my friends to this platform was the promise of using it as a mission-marketing and fundraising tool for our non-profit orgs. There are plenty of apps that support that, but I have a hard time seeing this as a very effective tool for it, unless donating to something meaningful makes people feel a bit better about themselves after six or seven hours of online tickling, poking, and otherwise engaging in remarkably trivial pursuits.

Social networking takes a lot of forms on the net, from the little “people who bought this also bought that” notes on Amazon, to the web-based communities around games and mobile devices, to the whole-hog social networks. The latest educated speculation is that Google and Yahoo will start adding social networking features to their email platforms, and that Firefox 3 will act as an aggregator, pulling data from multiple social sites into the browser interface. If nothing else, this tells me that I can choose to join Facebook or Myspace today, but next year the challenge will be opting out.

Slam the blogosphere if you want, but the social interaction there starts with someone writing something they care about. And if you read a blog entry that speaks to you, you can engage in a focused conversation via the comments, or, as I’ve done a few times in the past, in a roundtable discussion among related blogs. Something about the trivial level of automated discourse on Facebook almost knocks out the potential for meaningful interchanges, and when something more real pops up — like someone changing their profile to reflect a very real change in their life and who they are — it’s awkward to see it scroll by, sandwiched between the latest Flixster movie showdown and the news that some friend of yours is bored with their commute. This moves the level of discourse between my friends and me about three steps closer to spam. The Facebook brand of social networking is far too dominated by the fact that, even for an internet junkie like me, the majority of things that I can do on Facebook are not that interesting, meaningful or real.

Category: Miscellany, nptech, Open APIs, techcafeteria