Needed: an open data standard for volunteer opportunities

I was chatting today with my friend Sameer about the challenges and opportunities in volunteer management software and had a bit of a realization: it’s crazy that we don’t have an open data standard for volunteer opportunities. With one, organizations could publish a machine-readable list of volunteer opportunities on their websites and have them picked up and syndicated by services like VolunteerMatch and Idealist that specialize in aggregating and curating volunteer opportunities.

I’m thinking of something like RSS (or, even better, Atom), which provides a simple, open standard for publishing information about articles on websites so that they can easily be picked up, remixed, and syndicated to reach a far larger audience.

Let’s call it “VSS” (Volunteer Syndication Standard). I haven’t thought about this deeply, and I’m no expert on designing protocols like this, but I would start by seriously examining Atom, the most modern RSS-like standard for publishing articles. I’d also look at hAtom for inspiration on how to embed machine-readable data directly into a standard webpage. EDIT: Probably also .ics (the iCalendar standard for event syndication), because volunteer opportunities often, but not always, resemble events.
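Purely as a strawman, a VSS feed might look like an ordinary Atom feed extended with a few volunteer-specific elements. Everything below, from the vss: namespace URI to the element names, is invented for illustration; a real vocabulary would have to be hashed out with the stakeholders involved:

```xml
<!-- Strawman only: the vss: namespace and all vss:* element names are invented. -->
<feed xmlns="http://www.w3.org/2005/Atom"
      xmlns:vss="http://example.org/vss/0.1">
  <title>Example Food Bank: Volunteer Opportunities</title>
  <updated>2012-01-15T09:00:00Z</updated>
  <entry>
    <title>Saturday warehouse sorting shift</title>
    <id>tag:example.org,2012:opportunity/42</id>
    <updated>2012-01-10T12:00:00Z</updated>
    <summary>Help sort and shelve donated food at our warehouse.</summary>
    <vss:location>123 Example St, Seattle, WA</vss:location>
    <vss:start>2012-01-21T09:00:00-08:00</vss:start>
    <vss:end>2012-01-21T13:00:00-08:00</vss:end>
    <vss:skills>None required; training provided on site</vss:skills>
    <vss:contact>volunteer@example.org</vss:contact>
  </entry>
</feed>
```

The appeal of building on Atom is that aggregators already know how to poll, deduplicate (via the entry id), and merge feeds like this; only the vss:* extensions would be new.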

This isn’t something one could design right by gazing at one’s own navel, so I’m not even going to try on my own. But I’d definitely want to include folks like:

  • Organizations that publish lots of volunteer opportunities
  • Organizations that aggregate and curate volunteer opportunities or recruit volunteers for many organizations
  • Makers of volunteer management software (or other tools that let groups publish volunteer opportunities online; this could include major CMS platforms, for example)

I think that a standard like this, if sufficiently widely adopted, could unlock a huge amount of innovation in how organizations (and intermediaries) recruit volunteers, especially if it was coupled with another set of standards for intermediaries to use to push data about volunteers directly into groups’ volunteer management databases.

WSDOT traffic data: missing in action

With the recent start of tolling on SR-520 here in Seattle, the public’s attention is suddenly on traffic volumes on 520 and I-90.  So, this morning, I went over to the WSDOT website to see if I could find a simple listing of traffic volumes for the past few weeks.  Nothing, just a few random numbers sprinkled in their press releases.

Obviously, WSDOT is collecting this data.  It’s ridiculous that it’s not being published in formats that would make it easy to read and analyze.  What a huge open government data fail.

ALEC model corporate legislation dump = a huge open data opportunity for automated corruption detection

I was just skimming through “ALEC Exposed,” a fantastic and disturbing document dump from the Center for Media and Democracy, which for the first time shows us over 800 pieces of “model legislation” drafted by and for corporations through the American Legislative Exchange Council (ALEC).  Hooray for whistleblowers!  And hooray for CMD for doing some great analysis and publishing to add context and show this stuff for the corruption of democracy that it is.

But I can’t help but think there’s a huge missed opportunity here to take it to the next level.  Imagine if all of these “model bills” were available via a machine-queryable API.  It would then be pretty easy to write a web-based system that would continuously monitor the APIs of state legislatures for new bills being introduced that had significant textual concordance with the ALEC model legislation.  Automated corporate influence detection!
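The core of such a monitor is just text-similarity detection. A toy sketch, using word-shingle Jaccard similarity (a common plagiarism-detection technique; a production system would need more robust matching, and the real legislative APIs are assumed, not shown):

```python
import re

def shingles(text, n=5):
    """Lower-cased word n-grams ("shingles") extracted from a bill's text."""
    words = re.findall(r"[a-z]+", text.lower())
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def concordance(text_a, text_b, n=5):
    """Jaccard similarity of the two texts' shingle sets.

    Values near 1.0 mean the texts share long verbatim passages --
    exactly the signature of a state bill copied from model legislation.
    """
    a, b = shingles(text_a, n), shingles(text_b, n)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)
```

A monitoring service would run `concordance(model_bill, new_state_bill)` for each newly introduced bill against the ALEC corpus and flag any pair above some threshold for human review.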

Seattle’s open data website: unfulfilled potential

Seattle’s open data website formally launched nearly two months ago, with a flurry of press releases, an initial batch of “60 datasets” and the promise of more.  I thought I’d take a quick look and see how the platform and the community are evolving.  Unfortunately, its potential is still mostly unfulfilled.  I’m bummed.

About the only positive thing I can find to say at this point is that the city wisely chose not to waste a bunch of staff time rolling its own data website platform, but outsourced the actual web-app problem to local startup Socrata.  Socrata’s app is not terribly beautiful and quite frankly a bit clunky in many places, but it has a pretty decent feature set, and the city is reportedly only paying about $1000/month for hosting services.  That’s a pretty good deal for the functionality Socrata provides, and it frees the city from maintaining yet another web app.  Plus, it gives the City some hope of future feature/functionality upgrades as the Socrata codebase continues to mature.

That’s about the end of the praise I can dish out, though.  So far, the site is mostly disappointing and already looks stagnant, as if it were simply intended to tick off a checkbox on someone’s list of “Gov 2.0” initiatives, rather than serve as the opening salvo in a genuine, well-resourced commitment to opening the trove of public data that, as the Mayor reminded us, we’ve already paid for.

I hope that’s not the case, and I know that the city has a long list of priorities and a big budget crisis on its hands.  So in that spirit, I’ll offer some thoughts about how I think the site can leap beyond its initial stumbles.

1) Get more worthwhile data up there. The launch publicity claimed 60+ datasets, and the site now claims over 140.  Unfortunately, it turns out that there are actually only 10 unique datasets on the site.  The rest of the “140” are just filtered views and maps of data contained in a couple of those initial 10 datasets.  I know that a number of city agencies are working on prepping more data for release, but 10 datasets in two months, most of which were already available online, just doesn’t seem like a very robust launch.  Worse, the 10 “real” datasets are almost impossible to find, buried in the much larger, noisier list of filtered views and maps.  I’m looking forward to participating in a meeting with the City’s GIS team next week to offer some suggestions for their efforts to open their data.  But much, much more is needed.

2) Make sure the data that you post is actually usable.  At first, I was excited to see that two of the initial launch datasets are the lists of active building permits and land use permits.  Unfortunately, neither of these datasets includes a date field, so it’s impossible for anyone to build an app that, say, alerts users whenever a new permit is issued in their neighborhood.
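To make the point concrete, here’s a toy sketch of the neighborhood-alert feature that a date field makes trivial and its absence makes impossible (the field names and permit numbers are invented, not the city’s actual schema):

```python
from datetime import date

# Hypothetical permit records; "issued" is exactly the field the
# published datasets are missing.
permits = [
    {"permit_no": "6234567", "address": "123 Main St", "issued": date(2011, 3, 1)},
    {"permit_no": "6234890", "address": "456 Pine St", "issued": date(2011, 3, 15)},
]

def new_permits_since(records, last_checked):
    """Return permits issued after the user's last check.

    This one-line filter is only possible because each record
    carries an issue date; without it there's no way to tell
    a new permit from one that's been listed for months.
    """
    return [r for r in records if r["issued"] > last_checked]

recent = new_permits_since(permits, date(2011, 3, 10))
```

An alerting app would simply poll the dataset, run this filter against the timestamp of its last check, and notify users about whatever comes back.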

3) Open up the process. There’s nobody identified as the project manager of the site.  No way to leave comments on datasets, or on the site, except for an email address.  No roadmap.  No space for conversation.  How about identifying someone as the “project manager” for the site, so we know who to contact?  Maybe even a little blog about the evolution of the site, a place for feedback and conversation?  The ability to leave a comment on a dataset, so the dataset owners can understand how it can be improved?  Open data isn’t just about the data; it’s about opening a conversation.  There is a “Suggested Datasets” tab on the site, but it’s got zero content, and I can’t tell if it’s supposed to be a way to request additional datasets or what.  At a minimum it needs some explanation and some less confusing language.

4) Get out there and promote it.  The site’s published stats show only a tiny number of views on most datasets.  This is partially because most of the datasets are garbage right now, and I suspect partially because of some obvious UI problems with Socrata.  But a quick Google search turns up almost zero coverage of or conversation about the site, besides the few bits of launch-day PR that the city churned out.  Get out there and start a public conversation about this, folks!

I have to confess, I’m pretty discouraged by the site so far.  It appears to be stumbling badly out of the gate, both in style and in substance.  It’s hard to see how the City is in a position to launch an “Apps for Seattle” contest this summer with such a limited set of low-quality data, and zero visibility or conversation about the process.  The City is going to need to put a bit more effort and focus behind this, and to do it a lot more transparently, if it wants to build a thriving open civic data ecosystem here in Seattle.