Tag Archives: open data

Making Your Website More Useful For More People

This post was originally published on the LSC Technology Blog in January of 2014. LSC is Legal Services Corporation, my employer.

At LSC, we’ve been taking a critical look at our web site to see if we can make it more useful by factoring in all of the ways that people might want to view or use our information. In these days of big data and small screens, we realize that we have to be much more attentive to the ways we present data than we have been in the past.

After identifying the different visitors who frequently use our site, we took a closer look at their needs and at how we could improve our delivery of information to them. For example, visitors to LSC’s web site could be:

  • reporters or Hill staffers looking for a quick cut and paste of data on the site that is hard to get out of a linked PDF;
  • members of the general public looking for data to pull into a spreadsheet, who would also be disappointed to find that data locked in a PDF;
  • visually or physically impaired visitors who can’t view web content that isn’t compliant with the standards their specialized software requires;
  • visitors accessing the site on a mobile device that doesn’t display Flash or video and can’t display a PDF.

The PDF Problem

Adobe has done great things with the Portable Document Format, opening it up as a public standard and continually improving its functionality. But PDF is not an optimal format for web-based content: PDFs require additional software to view, and making them compatible with accessibility standards requires a solid understanding of how they need to be prepared. Our goal is to ensure that content is delivered optimally, in a format that anyone and everyone visiting our site can easily access.

In the past, we’ve relied heavily on publishing web content via PDF, and we now have a backlog of documents that aren’t as widely usable as we would like. Our plan is to immediately make two changes:

  1. Use PDF sparingly and thoughtfully as we move forward: offer PDFs as optional downloads for content that is also displayed in HTML, or as appropriate downloads for white papers and legal reports that aren’t the types of things users will want to quote or edit, and design those PDFs to be compatible with the Section 508 standards for web accessibility.
  2. Determine which of our existing PDFs need to be republished in more accessible formats and convert them. We don’t have the resources to fix everything, but we have good statistical data from Google Analytics telling us which PDFs our visitors look at, and a good idea of how to prioritize this content (a rough sketch of that triage follows below).
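
For anyone doing a similar triage, here is a rough sketch, in Python, of the kind of ranking we have in mind. It assumes a CSV export of pageview data with columns named "page_path" and "pageviews"; those names are placeholders, so adjust them to match whatever your analytics tool actually produces.

```python
import csv
from collections import Counter

def top_pdfs(report_path, limit=25):
    """Rank PDF paths by pageviews from a CSV export of analytics data."""
    counts = Counter()
    with open(report_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            path = row["page_path"].strip().lower()
            if path.endswith(".pdf"):
                # Some exports format numbers with commas; strip them first.
                counts[path] += int(row["pageviews"].replace(",", ""))
    return counts.most_common(limit)

if __name__ == "__main__":
    for path, views in top_pdfs("analytics_export.csv"):
        print(f"{views:>8}  {path}")
```

The output is simply the most-viewed PDFs, which is where conversion effort pays off first.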

Open Web

As a nonprofit that allocates federal funds, we have a responsibility to make data available to the public. But a commitment to open data means more than just making the data available; it needs to be available in formats that people can easily use. Data stored in an HTML table can be copied and pasted into Excel. Data in PDF and image formats can’t be, at least not easily. As David Ottewell recently tweeted, a PDF of a spreadsheet is not a spreadsheet. This dovetails with our broader effort to make data available in formats that can be manipulated.
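
To make that point concrete, here is a minimal sketch of what “easy to use” looks like in practice: the pandas library can read an HTML table straight off a web page and hand it to you as spreadsheet-ready data. The URL below is a placeholder, and the sketch assumes the page actually publishes its data as an HTML table rather than a PDF.

```python
# Requires pandas plus an HTML parser such as lxml.
import pandas as pd

# Placeholder URL; any page that publishes data in an HTML <table> will do.
url = "https://example.org/grant-awards.html"

tables = pd.read_html(url)        # one DataFrame per <table> on the page
awards = tables[0]                # grab the first table
awards.to_csv("grant-awards.csv", index=False)   # now it's spreadsheet-ready
print(awards.head())
```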

Wild, Wild Web

It is also important that our web site deliver the same user experience on a smartphone or tablet as it does in a desktop or laptop browser. This wasn’t high on our radar in 2011, when we redesigned our website in the Drupal content management system. At the time, we developed a mobile site as a separate, pared-down copy of our main site.

Looking Ahead

A modest revamp of LSC.GOV is planned for the second half of 2014 to improve the site’s navigation and its responsiveness on multiple devices (i.e., one site that alters its navigational elements and appearance to suit the screen it’s displayed on). We also won’t forget the visitors who don’t have smartphones, and how best to make information available to them.

Having a website that anticipates the diverse needs of our online visitors is our goal. What’s yours? What are your current challenges?

Hacking For Justice

This post was originally published on the LSC Technology Blog in May of 2013. Note that “LSC” is Legal Services Corporation, my current employer, and “TIG” stands for “Technology Initiative Grants”.

Welcome to the new LSC Technology blog, hosted here on the TIG site and written by TIG and Information Technology staff. To kick this off, I wanted to report on a fun, exciting, and long-overdue initiative we’re undertaking: making our non-confidential data available to hackers. Let me be clear here, for those of you who have any bad associations with the word, that a “hacker” is not a computer criminal or spy. The term has been misused to connote such things, but the original and current definition of a hacker is simply someone who likes to take things apart and rebuild them better, or take things apart and make new things out of them. More recently, hacking and hackers have been tied to the community of civic-minded web application developers who want to take publicly available data and make it accessible and relevant to their communities. That’s the group of hackers we’re discussing here.

Hackers hold hackathons, extended sessions where they get together to collaborate on projects. At the first LSC Tech Summit, United States Chief Technology Officer Todd Park addressed the group and urged us to model the behavior of the Department of Health and Human Services by holding hackathons and letting developers build the rich demographic applications that tell our story.

June 1st is the National Day of Civic Hacking. Across the United States, hackathons will be held in cities and towns, and the attendees will show up with their laptops, connect to the wifi, and create map mashups using tools like Google Maps and a collection of public data sets. The About section of the website describes it like this:

“The event will bring together citizens, software developers, and entrepreneurs from all over the nation to collaboratively create, build, and invent new solutions using publicly-released data, code and technology to solve challenges relevant to our neighborhoods, our cities, our states and our country.”

We’re busy analyzing our data sets, many of which are already available via our web site, but not in the most flexible formats. We’re also working with friends and partners like ProBono.Net to identify more legal aid data, on the assumption that the richer the data set, the more inspiring it will be for the hackers to work with. And I’m looking into other ways to make this information available, such as submitting it to the U.S. open data repository at Data.Gov. A big tip of our hat is due to Kate Bladow, who alerted me to the National Day of Civic Hacking in the first place, knowing how great it would be if we could have our data sets ready in time.
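
As an illustration of what “flexible formats” means for map mashups, here is a small, hypothetical Python sketch. The file names and columns (“name”, “latitude”, “longitude”) are invented for this example; it converts a CSV of office locations into GeoJSON, a format that most mapping tools can consume directly.

```python
import csv
import json

def csv_to_geojson(csv_path, geojson_path):
    """Convert a CSV of point locations into a GeoJSON FeatureCollection."""
    features = []
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            features.append({
                "type": "Feature",
                "geometry": {
                    "type": "Point",
                    # GeoJSON coordinates are [longitude, latitude].
                    "coordinates": [float(row["longitude"]), float(row["latitude"])],
                },
                "properties": {"name": row["name"]},
            })
    with open(geojson_path, "w", encoding="utf-8") as out:
        json.dump({"type": "FeatureCollection", "features": features}, out, indent=2)

# Hypothetical input and output files.
csv_to_geojson("grantee_offices.csv", "grantee_offices.geojson")
```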

Two questions for you:

  1. What kind of mapping mashups would you like to see done with LSC and related data? We can’t tell the developers what to do, but we should be able to tell them what people would love to see, and hopefully inspire them.
  2. Are you a developer? Whether you’re a C++ maven or just somebody who figured out how to save a Google Map, you might enjoy and benefit from participating in the hackathon.  Do consider it.  I’ll be attending the Baltimore day on June 1st. See you there?

Hearts and Mobiles

This post was originally published on the Idealware Blog in March of 2010.

Are Microsoft and Apple using the mobile web to dictate how we use technology? And, if so, what does that mean for us?

Last week, John Herlihy, Google’s Chief of Sales, made a bold prediction:

“In three years time, desktops will be irrelevant.”

Herlihy’s argument was based on research indicating that, in Japan, more people now use smartphones than desktops for internet entertainment and research. It’s hard to dispute that the long-predicted “year of the smartphone” has arrived in the U.S., with iPhones, Blackberries and Android devices hitting record sales figures, and Apple’s “magical” iPad leading a slew of mini-computing devices out of the gate.

We’ve noted Apple’s belligerence in refusing applications on their mobile platform that don’t pass a fairly restrictive and controversial screening process. It’s disturbing that big corporations like Playboy get a pass from a broad “no nudity” policy on iPhone apps that a swimwear store doesn’t. But it’s more disturbing that competing technology providers, like Google and Opera, can’t get their call routing and web browsing applications approved either. It’s Apple’s world, and iPhone owners have to live in it (or play dodgeball with each upgrade on their jailbroken devices). And now Microsoft has announced their intention to play the same game. Windows Phone 7, their “from the ground up” rewrite of their mobile OS, will have an app store, and you will not be able to install applications from anywhere else.

iPhone adherents tell me that the consistency and stability of Apple’s tightly-controlled platform beat the potential messiness of open platforms. You might get a virus. Or you might see nudity. And your experience will vary dramatically from phone to phone, as the telcos modify the user interface and sub in their own applications for the standard ones. There are plenty of industry experts defending Apple’s policies.

What they don’t crow about is the fact that, on Apple and Microsoft devices, you are largely locked into DRM-only options for multimedia, bought from their own digital content stores. They will make most of their smartphone profits on the media that they sell you (music, movies, ebooks), and they tightly control the information and data flow, as well as the devices you play their content on. How comfortable are you with letting the major software manufacturers control not only what software you can install on your systems, but what kind of media is available to them as well?

The latest reports on the iPad are that, in addition to its lack of support for Adobe’s popular Flash format, Google’s Picasa image management software won’t work on it either. If you keep your photos with Google, you’d better quickly move them to an Apple-friendly storage service like Apple’s MobileMe or Flickr, and get ready to use iPhoto to manage them.

If your organization has invested heavily in a vendor or product that Apple and/or Microsoft are crossing off their lists, you face a dilemma. Can you just ignore the people using their popular products? Should you immediately redesign your Flash-heavy website with something that you hope Apple will continue to support? If your cause is controversial, are you going to be locked out of a strategic mobile market for advocacy and development because the nature of your work can’t get past the company censors?

I’m nervous to see a major trend like mobile computing arise with such disregard for the open nature of the internet that the companies releasing these devices pioneered and grew up in. And I’m concerned that there will be repercussions to moving to a model where single vendors compete to be one-stop hardware, software and content providers. It’s not likely that Apple, Microsoft, Amazon, Google or anyone else is really qualified to determine what each of us does and doesn’t want to read, watch and listen to. And it’s frightening to think that the future of our media consumption might be tied to their idiosyncratic and/or profit-driven choices.

Swept Up in a Google Wave

This article was originally published on the Idealware Blog in September of 2009.

Photo by Mrjoro.

Last week, I shared my impressions of Google Wave, which takes current web 2.0/Internet staple technologies like email, messaging, document collaboration, widgets/gadgets and extranets and mashes them up into an open communications standard that, if it lives up to Google’s aspirations, will supersede email.  There is little doubt in my mind that this is how the web will evolve.  We’ve gone from:

  • The Yahoo! Directory model – a bunch of static web sites that can be cataloged and explored like chapters in a book, to
  • The Google needle/haystack approach – the web as a repository of data that can be mined with a proper query, to
  • Web 2.0, a referral-based model that mixes human opinion and interaction into the navigation system.

Many of us no longer browse, and we search less than we used to, because the data that we’re looking for either comes to us through readers and portals where we subscribe to it, or is referred to us by our friends and co-workers on social networks. Much of what we refer to each other is content that we have created. The web is as much an application as it is a library now.

Google Wave might well be “Web 3.0”, the step that breaks down the location-based structure of web data and replaces it completely with a social structure. Data isn’t stored as much as it is shared. You don’t browse to sites; you share, enhance, append, create and communicate about web content in individual waves. Servers are sources, not destinations in the new paradigm.

Looking at Wave in light of Google’s mission and strategy supports this idea. Google wants to catalog, and make accessible, all of the world’s information. Wave has a data mining and reporting feature called “robots”. Robots are database agents that lurk in a wave, monitoring all activity, and then pop in as warranted when certain terms or actions trigger a response. The example I saw was of a nurse reporting in the wave that they’re going to give patient “John Doe” a peanut butter sandwich. The robot has access to Doe’s medical record, is aware of a peanut allergy, and pops in with a warning. Powerful stuff! But the underlying data source for Doe’s medical record was Google Health. For many, health information is too valuable and too easily abused to be trusted to Google, Yahoo!, or any online provider. The Wave security module that I saw hid some data from Wave participants, but was based upon the time that the person joined the Wave, not on ongoing record-level permissions.
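
To make the robot idea more concrete, here is a generic sketch of the pattern in Python. It is not the actual Google Wave robots API; it simply illustrates an agent that watches each new message and injects a warning when a trigger condition is met, using an invented allergy lookup in place of the medical record.

```python
# Generic illustration of the "robot" pattern described above -- not the
# actual Google Wave robots API. An agent inspects each new message and
# replies when a trigger condition is met.

# Hypothetical allergy data; in the example above this came from the
# patient's medical record.
ALLERGIES = {"john doe": {"peanut"}}

def robot_check(patient, message):
    """Return a warning string if the message mentions a known allergen."""
    text = message.lower()
    for allergen in ALLERGIES.get(patient.lower(), set()):
        if allergen in text:
            return f"WARNING: {patient} has a {allergen} allergy."
    return None

# The agent "pops in" when the nurse posts to the wave:
reply = robot_check("John Doe", "Giving the patient a peanut butter sandwich.")
if reply:
    print(reply)   # -> WARNING: John Doe has a peanut allergy.
```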

This doesn’t invalidate the use of Wave, by any means — a wave that is housed on the Doctor’s office server, and restricted to Doctor, Nurse and patient could enable those benefits securely. But as the easily recognizable lines between cloud computing and private applications; email and online community; shared documents and public records continue to blur, we need to be careful, and make sure that the learning curve that accompanies these web evolutions is tended to. After all, the worst public/private mistakes on the internet have generally involved someone “replying to all” when they didn’t mean to. If it’s that easy to forget who you’re talking to in an email, how are we going to consciously track what we’re revealing to whom in a wave, particularly when that wave has automatons popping data into the conversation as well?

The idea of Wave as internet evolution supports a favored notion: data wants to be free. Open data advocates (like myself) are looking for interfaces that enable that access, and Wave’s combination of creation and communication, facilitated by simple but powerful data mining agents, is a powerful front end. If it truly winds up being as easy as email, which is, after all, the application that enticed our grandparents to use the net, then it has culture-changing potential. It will need to bring the users along for that ride, though, and it will be interesting to see how that goes.

——–

A few more interesting Google Wave stories popped up while I was drafting this one. Mashable’s Google Wave: 5 Ways It Could Change the Web gives concrete examples of some of the ideas I floated last week; and, for those of you lucky enough to have access to Wave, here’s a tutorial on how to build a robot.

Beta Google Wave accounts can be requested at the Wave website.  They will be handing out a lot more of them at the end of September, and they are taking requests to add them to any Google Domains (although the timeframe for granting the requests is still a long one).

Seven Questions For Peter Campbell On Open APIs

This interview was conducted by Holly Ross and first published on the NTEN Blog in July of 2006.

What’s an Application Programming Interface (API)?

APIs are the code in any application that allows for the customization and migration of information in and out of the program’s data store. The API allows your application to interface with other systems, in the same manner that a door or data line allows your home to interact with the world around it. APIs were originally developed in the telecom industry, as the need arose for computer applications that integrated with telephone systems. The concept quickly expanded as a method for companies to merge information in their major systems, such as Finance, Human Resources, and Constituent Relationship Management (CRM). Common examples of API use include importing and exporting data in and out of donor databases, and merging data from multiple sources via the web, such as gas prices overlaid on Google Maps. Sites that use Google or Yahoo!’s API to merge data are commonly called “web mashups.”
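
As a simple illustration of what calling a web API looks like, the Python sketch below fetches JSON from a placeholder endpoint (the URL, and the fields it returns, are invented for this example) and prints the pieces a mashup would hand to a mapping layer.

```python
import json
import urllib.request

# Placeholder endpoint, standing in for any web API that returns JSON.
url = "https://api.example.org/v1/gas-prices?state=CA"

with urllib.request.urlopen(url) as response:
    stations = json.load(response)   # assume the API returns a JSON list

# A mashup would hand these coordinates and prices to a mapping layer.
for station in stations:
    print(station["name"], station["price"], station["lat"], station["lon"])
```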

Why would a nonprofit use an Open API?

If you want to do a mailing and your constituents’ addresses lie in multiple systems (donor database, Outlook, Excel, Access), then an API could be used to quickly merge them into one address list. As grant reporting requirements become more stringent, funders want to know what percentage of labor goes into direct service versus overhead, what portion of supply expense is put directly to mission-related use, and what percentage of volunteer time was put to field versus office work. Generating this report requires integrating data from multiple data sources. An API can help automate that task.
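
For a sense of what that address merge might look like in code, here is a small Python sketch. The file names and column headings are invented for this example; it assumes each system’s addresses have already been exported to CSV with matching columns.

```python
import csv

# Hypothetical exports from the donor database, Outlook, and Excel,
# each saved as CSV with the columns "name", "street", "city", "zip".
SOURCES = ["donors.csv", "outlook_contacts.csv", "spreadsheet_list.csv"]

def merged_address_list(paths):
    """Combine address exports and drop duplicates."""
    seen = set()
    merged = []
    for path in paths:
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                # Dedupe on a normalized name + street key.
                key = (row["name"].strip().lower(), row["street"].strip().lower())
                if key not in seen:
                    seen.add(key)
                    merged.append(row)
    return merged

for contact in merged_address_list(SOURCES):
    print(contact["name"], contact["street"], contact["city"], contact["zip"])
```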

What is an Open API, as opposed to a closed one?

An Open API is one that does not restrict you. It gives you full access to your data and the application interface to support your customized needs. A closed API restricts your ability to work with your data. The difference between open and closed APIs is one of degrees, not either/or. The less an application allows you to do with your data, the more closed it is.

Are Open APIs features of Open Source Applications?

Not necessarily. Open Source software comes with code and a license to modify it. An API is an intermediary set of rules that allows you to do customization and integration even if the source code is closed, as with most commercial applications.

Why are Open APIs controversial within the software industry?

Customers often want software that will not lock them out as organizational needs grow and change. Software selection has to be tied to strategic planning, and products need to be adaptable to unforeseen needs. This doesn’t rule out purchasing Microsoft or other (relatively) closed systems, as there can be strategic and economic advantages in standardizing on a vendor. But you need to do so in full awareness of how that software platform will limit your integration and reporting. There are commercial products, such as Salesforce.com, that have wide open APIs, because Salesforce operates on the philosophy that they should not restrict their customers from using their own data as they choose.

How does a nonprofit use APIs if it doesn’t have technical staff with API skills?

There are many programming resources in the nonprofit technology community who will develop low-cost or free applications that work with the API (again, see Salesforce as an example – the AppExchange is a rich collection of free and low-cost add-ons, with many targeted at our community). Look at the work being done with CivicSpace/CiviCRM to support APIs and integration. Even if the actual use of the API is not an in-house function, the existence of an API for a product or web service is still critical.

Should nonprofits advocate for the availability of Open APIs among software firms that serve the nonprofit sector?

As funders and constituents demand more accountability from nonprofits, and as nonprofits want to better operate our businesses, it’s more important than ever to have commercial applications that are open to integration with APIs. In the nonprofit community, standardizing on one platform and developing it (the Enterprise Resource Planning (ERP) approach) is often far too ambitious – we lack the funding and resources to make that huge an IT investment. So key to our ability to operate in the information age is our ability to integrate data between our various applications. It’s important that nonprofit leaders encourage software firms that serve the nonprofit sector to make Open APIs available.

Peter Campbell is the Director of Information Technology at Goodwill Industries of San Francisco.