Like Melinda, I’m a bit of a newbie to DH&Lib, but from the opposite direction. Having done software development within and around DH for a couple of years, I’d like to work with library collections, but am finding it hard to wrap my head around the different systems libraries use to house digitized material.
For example, the DPLA provides API access for discovering library collections, but once you navigate to interesting materials, you find yourself at an institution's web presence, typically a DAMS or CMS. If you're trying to use the material you find there in a software package (say, to load metadata and facsimiles into a crowdsourced transcription tool), you may be able to guess what kind of system you're dealing with from the URL, but then what?
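As a rough illustration of that guessing game, here is a heuristic sketch that maps common default URL path conventions to the software behind them. The patterns below are typical defaults for CONTENTdm, DSpace, Islandora, and Omeka, but any site can rewrite its URLs, so a match is only a hint, not an identification:

```python
import re

# Common default URL path conventions for a few library systems.
# These are heuristics: sites can and do customize their URLs.
PATTERNS = [
    (r"/cdm/(ref|singleitem)/", "CONTENTdm"),
    (r"/handle/\d+/\d+", "DSpace"),
    (r"/islandora/object/", "Islandora"),
    (r"/items/show/\d+", "Omeka"),
]

def guess_system(url):
    """Return a best-guess system name for a collection URL, or 'unknown'."""
    for pattern, system in PATTERNS:
        if re.search(pattern, url):
            return system
    return "unknown"

print(guess_system("http://digital.example.edu/cdm/ref/collection/p123/id/45"))
# -> CONTENTdm
```

The hostnames here are placeholders; a real tool would also want to fall back on page-level clues (generator meta tags, telltale CSS paths) when the URL has been rewritten.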
I’d like to propose a session showing off different library CMS and DAM systems and how they can be used as the kinds of platforms Tim Sherratt discusses. I’m no expert (I can show off Omeka’s API, but that’s it), but I’d be happy to lead the discussion if we have enough interest from other participants.
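For a taste of what showing off Omeka’s API might involve: Omeka 2.x exposes a REST endpoint at /api/items that returns JSON item records with an element_texts array. The sketch below parses a hardcoded sample record rather than calling a live site, and the sample values are illustrative, not real data:

```python
# Minimal sketch of reading an item record shaped like the Omeka 2.x
# REST API's /api/items response. SAMPLE_ITEM is an illustrative
# stand-in for one element of that JSON array, not data from a live site.
SAMPLE_ITEM = {
    "id": 1,
    "url": "http://example.org/api/items/1",
    "element_texts": [
        {"element": {"name": "Title"}, "text": "Sample letter, 1862"},
        {"element": {"name": "Creator"}, "text": "Unknown"},
    ],
}

def element_text(item, name):
    """Return the first element text with the given element name, or None."""
    for et in item.get("element_texts", []):
        if et["element"]["name"] == name:
            return et["text"]
    return None

print(element_text(SAMPLE_ITEM, "Title"))
```

Against a real installation you would fetch http://your-omeka-site/api/items and loop over the returned list with the same helper.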
Ben,
I’d be happy to help with this one. For probably 80% of systems, I can identify the software within about thirty seconds of visiting a site, and I can share the visual clues that tip me off. More broadly, though, I think this speaks to a different need (for DPLA, or for statewide and regional collaboratives): a directory of collections that includes system information. I recently did a project for the Oregon State Library in which I compiled a lot of that information, as I have done in the past for Texas. The spreadsheet with the Oregon data is on my website (www.dcplumer.com/wp-content/uploads/2013/07/Inventory-Oregon-DigitalCollections-final.xlsx), but to be truly useful the data needs to be machine-actionable. How might this work?
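One possibility is to recast each spreadsheet row as a JSON record. Here is a minimal sketch of what one entry in such a directory might look like; every institution, collection, and URL below is a hypothetical placeholder, though the CONTENTdm Web API path and OAI-PMH harvesting are real conventions:

```python
import json

# One hypothetical directory record: roughly the columns a collections
# inventory spreadsheet captures, plus explicit system and API fields
# so software can act on it. All values are made-up placeholders.
record = {
    "institution": "Example County Historical Society",
    "collection": "Pioneer Photographs",
    "url": "http://digital.example.org/cdm/landingpage/collection/pioneer",
    "system": {"name": "CONTENTdm", "version": None},
    "api": {
        "type": "CONTENTdm Web API",
        "endpoint": "http://digital.example.org/dmwebservices/",
    },
    "harvestable_via": ["OAI-PMH"],
}

print(json.dumps(record, indent=2))
```

A file of records like this could be generated straight from the existing spreadsheet, and a tool like a transcription platform could then look up the system and API endpoint instead of guessing from the URL.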
Danielle