DAMS, CMSes, and APIs — oh my!

Like Melinda, I’m a bit of a newbie to DH&Lib, but from the opposite direction. Having done software development in and around DH for a couple of years, I’d like to work with library collections, but I’m finding it hard to wrap my head around the different systems libraries use to house digitized material.

For example, the DPLA provides API access for discovering library collections, but once you navigate to interesting materials, you find yourself at an institution’s own web presence, whether a DAMS or a CMS. If you’re trying to use the material you find there in a software package (say, to load metadata and facsimiles into a crowdsourced transcription tool), you may be able to guess what kind of system you’re dealing with based on the URL, but then what?
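To make the "guess from the URL" step concrete, here is a minimal sketch of the kind of heuristic I have in mind. The path fragments below are common conventions (CONTENTdm's `/cdm/`, DSpace's `/handle/`, Omeka's `/items/show/`, Islandora's `/islandora/`), but they are heuristics, not guarantees, and the function name is my own invention:

```python
def guess_system(url):
    """Heuristically guess a repository system from URL path clues.

    These fragments are common defaults for each platform, but sites
    can rewrite their URLs, so treat the answer as a hint only.
    """
    clues = {
        "/cdm/": "CONTENTdm",
        "/handle/": "DSpace",
        "/items/show/": "Omeka",
        "/islandora/": "Islandora",
    }
    lowered = url.lower()
    for fragment, system in clues.items():
        if fragment in lowered:
            return system
    return "unknown"


# Example: a typical CONTENTdm-style reference URL
print(guess_system("https://digital.example.org/cdm/ref/collection/p123/id/45"))
```

A real tool would also need to fall back on page-level clues (HTML generator tags, stylesheet paths, and so on) for sites that customize their URLs, which is exactly the sort of lore a session like this could collect.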

I’d like to propose a session showing off different library CMS and DAM systems and how they can be used as the kinds of platforms Tim Sherratt discusses. I’m no expert (I can show off Omeka’s API, but that’s it), but I’d be happy to lead the discussion if we have enough interest from other participants.
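As a taste of the Omeka side of that demo: Omeka Classic exposes a REST API under `/api/`, with item records available as JSON from the `items` resource. A minimal sketch, assuming a site with the API enabled (the site URL and helper names here are illustrative):

```python
import json
import urllib.request


def omeka_items_url(base_url, page=1, per_page=50):
    """Build the URL for one page of items from an Omeka Classic REST API."""
    return f"{base_url.rstrip('/')}/api/items?page={page}&per_page={per_page}"


def fetch_omeka_items(base_url, page=1):
    """Fetch one page of item records (a JSON array) from an Omeka site."""
    with urllib.request.urlopen(omeka_items_url(base_url, page)) as resp:
        return json.load(resp)
```

Each record in the returned array carries the item's element texts and file links, which is enough to feed metadata and facsimiles into a transcription tool.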

Categories: Session Proposals, Session: Teach

About Ben Brumfield

Ben Brumfield is an independent software developer in Austin, Texas. In 2005, he began developing one of the first web-based manuscript transcription systems. Released as the open-source tool FromThePage, it has since been used by libraries, museums, and universities to transcribe literary drafts, military diaries, herpetology field notes, and punk rock fanzines. Ben has been covering crowdsourced transcription technologies on his blog since 2007. In 2008, he attended the first THATCamp at George Mason University's CHNM. Inspired by the experience, he co-organized the first regional THATCamp, THATCamp Austin, in 2009. Conversations at THATCamp AHA 2012 led him to leave his position in industry and become a full-time Digital Humanities software engineer.

One Response to DAMS, CMSes, and APIs — oh my!

  1. Ben,

    I’d be happy to help with this one. For probably 80% of systems, I can identify the system within about 30 seconds of visiting it, and I can share the visual clues that tip me off. More broadly, however, I think this speaks to a different need (for DPLA, or for statewide and regional collaboratives): a directory of collections with system information. I recently did a project for the Oregon State Library in which I compiled a lot of that information, as I have done in the past for Texas. The spreadsheet with the Oregon data is on my website (www.dcplumer.com/wp-content/uploads/2013/07/Inventory-Oregon-DigitalCollections-final.xlsx), but to be truly useful the data needs to be machine-actionable. How might this work?
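[Editorial sketch of what a machine-actionable directory entry might look like: each collection gets a structured record naming the institution, the hosting system, and any API endpoints. All field names and values below are hypothetical, not drawn from the Oregon spreadsheet:]

```python
import json

# Hypothetical record for one collection in a machine-actionable directory.
record = {
    "institution": "Example State Library",
    "collection": "Example Digital Collection",
    "url": "https://digital.example.org/",
    "system": "CONTENTdm",
    "metadata_formats": ["oai_dc"],
}

# Serializing to JSON makes the directory consumable by other tools.
print(json.dumps(record, indent=2))
```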

