By Scott B. Weingart
What I really want to highlight, though, is a brand new feature introduced by Wallace Hooper: automated Latent Semantic Analysis (LSA) of the entire corpus. For those who are not familiar with it, LSA is somewhat similar to LDA, the algorithm driving the increasingly popular topic models used in the Digital Humanities. Each has its strengths and weaknesses, but essentially both show how documents and terms relate to one another.
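As a rough sketch of the mechanics, here is a toy LSA run on an invented five-term, three-document corpus (my own illustration, not Hooper's implementation): the term-document matrix is factored with a truncated SVD, and document relatedness is then measured in the reduced "concept" space.

```python
import numpy as np

# Hypothetical toy corpus: rows are terms, columns are documents.
# Documents 0 and 1 share vocabulary; document 2 is about something else.
terms = ["alchemy", "mercury", "furnace", "planet", "orbit"]
X = np.array([
    [2, 1, 0],   # alchemy
    [1, 2, 0],   # mercury
    [1, 1, 0],   # furnace
    [0, 0, 2],   # planet
    [0, 0, 1],   # orbit
], dtype=float)

# LSA: truncated singular value decomposition of the term-document matrix.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2                                    # number of latent "concepts" to keep
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T   # each document as a k-dim concept vector

def cos(a, b):
    """Cosine similarity between two concept-space vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Documents 0 and 1 land close together in the latent space,
# while document 2 stays far from both.
print(cos(doc_vecs[0], doc_vecs[1]))  # near 1.0
print(cos(doc_vecs[0], doc_vecs[2]))  # near 0.0
```

LDA would instead model each document as a mixture of probabilistic topics, but the end use is similar: both reduce documents and terms to a small shared space where their relationships become visible.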

The Media Lab welcomes applications from candidates interested in establishing research programs in: music, performance, arts, design, food, fashion, architecture, games, things we have not thought of, or any combination thereof. Appointments will be within the Media Arts and Sciences academic program, principally at the assistant professor level.

In 2012, NINES (Networked Infrastructure for Nineteenth-century Electronic Scholarship) at the University of Virginia will be hosting the second of two NEH Summer Institutes in Advanced Topics in the Digital Humanities. The topic is “Evaluating Digital Scholarship,” and we are specifically inviting current and incoming Department Chairs in English, Foreign Languages, and Classics to participate.

Interesting discussion of how to use data from Wikipedia's IRC channels to show people how actively Wikipedia is curated, without requiring them to reload the recent-changes page, connect to cryptic IRC channels themselves, or dig around in some (wonderfully) detailed statistics. More importantly, could it be done in a playful way?

HASTAC’s fifth international conference, hosted this year by the University of Michigan in Ann Arbor, December 1-3, practices what it preaches. It experiments with an array of new forms and formats designed not just to discuss “Digital Scholarly Communication” but to explore how each of those three terms (digital, scholarly, communication) changes the others, in ways that presage powerful new possibilities for higher education, both in the academy and for the general public.