There are lots of tools out there that aggregate existing information and even organize it for users to interpret. Since the early days of HyperCities, GIS tools, for instance, have been all the rage among humanists who wish to add geographical and census data to enhance the “lived experience” of a text. But there are fewer tools that actually build an archive of live interpretation (as opposed to facts layered and ready for interpretation) around a stable text. And that’s where what I call “Reading with the Stars” comes in.

And this brings me to the danger inherent in Culturomics. First, machine-readable texts do not, and never will, represent the totality of the human experience. What about paintings, illustrations, and photographs, statues and figurative art, architecture, music, material culture, and ecology? What about oral history? What about economic, statistical, and demographic evidence? Although there are millions upon millions of books, magazines, newspapers, and other printed material, they represent only the visible, privileged, literate tip of a vast store of human culture.

The CUNY Digital Humanities Initiative has released video from two recent events:

Digital Humanities in the Library, November 18, 2011

  • Ben Vershbow (NYPL) on “NYPL Labs: Hacking the Library”

Digital Humanities in the Classroom, October 18, 2011

  • Mark Sample, “Building and Sharing When You’re Supposed to be Teaching”
  • Shannon Mattern, “Beyond the Seminar Paper: Setting New Standards for New Forms of Student Work”

Watch the videos here.

What we ended up with was a new way of seeing and understanding the records — not as the remnants of bureaucratic processes, but as windows onto the lives of people. All the faces are linked to copies of the original certificates and back to the collection database of the National Archives. So this is also a finding aid. A finding aid that brings the people to the front.

According to Margaret Hedstrom the archival interface ‘is a site where power is negotiated and exercised’.[1] Whether in a reading room or online, finding aids or collection databases are ‘neither neutral nor transparent’, but the product of ‘conscious design decisions’. We would like to think that this interface gives some power back to the people within the records.

Editors’ Note: Many scholars working in the Digital Humanities are thinking about the theory, design, and social and pedagogical impact of games. The posts below cover some of the variety of issues within this field. Further discussion will occur at THATCamp Games, January 20-22, 2012 at the University of Maryland-College Park. Please Tweet @dhnow or email dhnow [at] pressforward [dot] org if you have more to suggest. *updated 12/1/11*

Ted Underwood has been talking up the advantages of the Mann-Whitney test over Dunning’s log-likelihood, which is currently more widely used. I’m having trouble getting Mann-Whitney running on large numbers of texts as quickly as I’d like, but I’d say that his basic contention, that Dunning’s log-likelihood is frequently not the best method, is definitely true, and there’s a lot to like about rank-ordering tests.

Before I say anything about the specifics, though, I want to make a more general point first, about how we think about comparing groups of texts. The most important difference between these two tests rests on a much bigger question about how to treat the two corpuses we want to compare.
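
That difference is easy to see in code. Below is a minimal sketch (not Underwood’s code; all counts and per-text frequencies are made up) of both tests applied to a single word: Dunning’s log-likelihood pools each corpus into one bag of words, while Mann-Whitney treats each text as a separate observation and compares rank-ordered frequencies, here via scipy’s mannwhitneyu.

```python
# Hedged sketch: compare one word across two corpora with both tests.
# All counts and frequencies below are made-up illustrations.
import math
from scipy.stats import mannwhitneyu

def dunning_g2(count_a, count_b, total_a, total_b):
    """Common two-term form of Dunning's log-likelihood (G2) for keyness.
    Treats each corpus as one pooled bag of words."""
    expected_a = total_a * (count_a + count_b) / (total_a + total_b)
    expected_b = total_b * (count_a + count_b) / (total_a + total_b)
    g2 = 0.0
    if count_a > 0:
        g2 += count_a * math.log(count_a / expected_a)
    if count_b > 0:
        g2 += count_b * math.log(count_b / expected_b)
    return 2 * g2

# Dunning: one aggregate count of the word per corpus.
print(dunning_g2(count_a=120, count_b=40,
                 total_a=1_000_000, total_b=900_000))

# Mann-Whitney: one relative frequency per text, compared by rank,
# so a single word-obsessed outlier text cannot dominate the score.
freqs_a = [0.0012, 0.0008, 0.0015, 0.0009]  # word frequency per text, corpus A
freqs_b = [0.0002, 0.0004, 0.0003, 0.0005]  # word frequency per text, corpus B
u_stat, p_value = mannwhitneyu(freqs_a, freqs_b, alternative='two-sided')
print(u_stat, p_value)
```

One practical trade-off, which may be part of why the rank test is slow over large numbers of texts: it needs a per-text frequency for every word under comparison, not just one pooled count per corpus.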

Collaboration is the linchpin supporting all of this productivity, learning, experimenting, and knowledge acquisition. This unwritten goal was reinforced by a few tech industry magnates at Stanford’s BiblioTech Symposium last year: the CEOs want liberal arts and humanities doctoral students who can command language, translate technical jargon into metaphor and narrative, and work collaboratively in team settings. Humanities scholars often think of themselves as lonely bibliophiles in the library stacks, quietly slaving over monographs. But Digital Humanities has altered that paradigm, even requiring that humanists consider exposing their collaborative work, whether or not it is digitally inclined.

My point here is not that there are no philosophers developing digital content or using information technology to further philosophical research: David Bourget’s PhilPapers.org, John Immerwahr’s teachphilosophy101.org, and Andy Cullison’s sympoze.com are notable examples of excellent and innovative uses of information technology to advance philosophy. At the same time, there are a number of notable philosophers thinking about the interface of technology and ourselves: David Chalmers, Luciano Floridi, and Andy Clark spring to mind.

There are not, however, numerous examples of philosophers using techniques of the digital humanities to _do_ philosophy or using digital tools to teach philosophy.

On an incredibly basic, overly simplified level, philosophy is interested in the discovery, development, classification, and analysis of human concepts and reasoning.

Instead, let me suggest for public libraries and the DPLA a new mission and vision, one that taxpayers WILL support for many years to come because no other competitor does it, and because if it is explained and implemented properly (see: nationally) it will build stronger, smarter communities, and ultimately build a stronger, smarter country.  In one sentence: public libraries need to support information production with the same level of commitment that they’ve always treated information consumption.

First and foremost, digitization of natural history collections and tools that make these digitized records available, such as VertNet, support global biodiversity research. We suspect that most uses of digitized records will be to generate products such as species distribution models and change assessments, and to answer questions about what is in any given museum collection. However, in the broader context of academic endeavor, these data could also serve as a unique link between the digital sciences and the digital humanities. Work in the digital humanities includes everything from crowdsourcing manuscript transcription to humanistic fabrication to data mining — work that is not so dissimilar in method, description, or data type from that in the digital sciences.