The Institute of the Humanities and Global Cultures hosted a daylong symposium on “The Humanities in a Digital Age.”
Spatializing Photographic Archives, a project funded by the National Endowment for the Humanities, entailed the release of open-source software for recovering the 3D geometry of a location from photographs taken from diverse angles (and even at different times), and a case study of Richard Misrach’s landscape photography demonstrating the value of this approach for scholars.
We’ve now completed an extensive and carefully illustrated White Paper for this NEH-sponsored project, a large PDF (26.5 MB) of which you may find here.
The White Paper describes the open-source software tool we’ve developed and our reasons for wanting to forge a new approach to making digital tools for scholars. It also examines the implications of our approach for photography.
Last week, the Walker Art Center launched a major website redesign, which museum geeks are hailing as “a potential paradigm shift for institutional websites” (Seb Chan) and an “earth-shaking game changer” (Museumnerd). Here’s what I see: a website as a unique core offering, alongside, but not subservient to, the physical institution. Walkerart.org is not about the Walker Art Center. It is the Walker Art Center, in digital form.
The new site resembles an online newspaper, featuring articles written by Walker staff alongside stories from the greater world of art reporting on the web.
Editors’ Note: The University of Michigan, host of the Humanities, Arts, Science, and Technology Advanced Collaboratory (HASTAC) conference held last week, has released videos of the keynote addresses. A round-up of the conference is available here, and a full list of keynote videos is available here.
A common problem in search and exploration interfaces is the vocabulary problem. This refers to the great variety of words that different people can use to describe the same concept. For people exploring a text collection, this makes search difficult. They can think of only a limited number of different queries to describe a concept, so they may miss many other instances that use different words. This is an important issue for humanities scholars. Often, the very first step of a literature analysis is to comb through text, trying to find thought-provoking examples to study later.
In this post, I give an example of how our project WordSeer, a text analysis environment for humanities scholars, can be used to overcome this problem.
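The core idea behind mitigating the vocabulary problem is query expansion: matching not just the searcher's word but related words for the same concept. The sketch below is a minimal, hand-rolled illustration of that idea; the synonym table, corpus, and function names are invented for this example, and WordSeer's actual techniques are more sophisticated.

```python
# Minimal sketch of synonym-based query expansion.
# SYNONYMS and corpus are invented for illustration; they are not
# drawn from WordSeer or any real thesaurus.

SYNONYMS = {
    "grief": {"sorrow", "mourning", "woe"},
    "joy": {"delight", "mirth", "gladness"},
}

def expand_query(word):
    """Return the query word together with any known synonyms."""
    return {word} | SYNONYMS.get(word, set())

def search(corpus, word):
    """Return sentences containing the word or any of its synonyms."""
    terms = expand_query(word)
    return [s for s in corpus if terms & set(s.lower().split())]

corpus = [
    "Her sorrow deepened with the night",
    "A day of mirth and music",
    "The grief of the survivors",
    "They sailed at dawn",
]

# A plain search for "grief" would find one sentence; the expanded
# query also surfaces the sentence that uses "sorrow" instead.
print(search(corpus, "grief"))
```

A real system would draw its expansions from a lexical resource such as WordNet rather than a hard-coded dictionary, but the retrieval step works the same way.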
Just as THATCamp challenges attendees to set and steer the agenda, Startup Weekend leaves a lot up to the participants, who have 54 hours to pitch a product idea (typically tech-related), form teams, validate their idea, develop a business model, and put together a demo and a longer pitch.
Some might wonder what entrepreneurship training has to do with the digital humanities (DH), but I believe that the two communities have much in common and can learn from each other…. While DH projects typically don’t form companies and don’t aim to make a profit, most do need to consider how to define their value, find users and sustain themselves.
Searching for information might seem like one of the most routine and commonplace activities of university life. However, as students work within an information environment that is increasingly open and dynamically changing, research assignments also represent a complex and potentially daunting task, and one that is fraught with embedded social and cultural processes and relationships.
The Ethnographic Research in Illinois Academic Libraries (ERIAL) Project was a two-year study of student research practices involving a collaborative effort of five Illinois universities…. Using a mixed-methods approach that integrated nine qualitative research techniques and included over 600 participants, the ERIAL project sought to gain a better understanding of undergraduates’ research processes based on first-hand accounts of how they obtained, evaluated, and managed information for their assignments.
There are lots of tools out there that aggregate existing information and even organize it for users to interpret. Since the early Hypercities, GIS tools, for instance, have been very much the rage among humanists who wish to add geographical and census data to enhance the “lived experience” of a text. But there are fewer tools that actually build an archive of live interpretation (as opposed to facts layered and ready for interpretation) around a stable text. And that’s where what I call “Reading with the Stars” comes in.
And this brings me to the danger inherent in Culturomics. First, machine-readable texts do not, and will never, represent the totality of the human experience. What about paintings, illustrations, and photographs, statues and figurative art, architecture, music, material culture, and ecology? What about oral history? What about economic, statistical, and demographic evidence? Although there are millions upon millions of books, magazines, newspapers, and other printed materials, they represent only the visible, privileged, literate tip of a vast store of human culture.