What we ended up with was a new way of seeing and understanding the records — not as the remnants of bureaucratic processes, but as windows onto the lives of people. All the faces are linked to copies of the original certificates and back to the collection database of the National Archives. So this is also a finding aid. A finding aid that brings the people to the front.
According to Margaret Hedstrom the archival interface ‘is a site where power is negotiated and exercised’. Whether in a reading room or online, finding aids or collection databases are ‘neither neutral nor transparent’, but the product of ‘conscious design decisions’. We would like to think that this interface gives some power back to the people within the records. Their photographs challenge us to do something, to think something, to feel something. We cannot escape their discomfiting gaze.
But this interface represents another subtle shift in power. We could create it without any explicit assistance or involvement by the National Archives itself. Simply by putting part of the collection online, they provided us with the opportunity to develop a resource that both extends and critiques the existing collection database. Interfaces to cultural heritage collections are no longer controlled solely by cultural heritage institutions.
It’s these two aspects of the power of interfaces that I want to focus on today.
Read Full Post Here
Editors’ Note: Many scholars working in the Digital Humanities are thinking about the theory, design, and social and pedagogical impact of games. The posts below cover some of the variety of issues within this field. Further discussion will occur at THATCamp Games, January 20-22, 2012 at the University of Maryland-College Park. Please Tweet @dhnow or email dhnow [at] pressforward [dot] org if you have more to suggest. *updated 12/1/11*
Shawn Graham, Lies & Gamification, November 29, 2011
- Gamification – love it or hate it, any time you use some sort of game mechanic, you’ve done it. Which makes ‘being a student’ the ultimate game of all. Write an essay, do a midterm, ace the final, level up to the next course, forget the previous course’s content…I’ve written before about ‘gamifying my historian’s craft‘, about why I gamified the course website, what my ‘achievements’ were, and how they tied to the course content, and my larger pedagogical goals…. I’m just finishing up a second round of my gamified HIST2809, ‘The Historian’s Craft’ course….But I wanted more. Perhaps what I needed, in addition to gamification of the practical hands-on practice part of being an historian, was some game-based learning. Read Full Post Here.
Gabe Zichermann, The Six Rules of Gamification, November 29, 2011
- While every experience is (and should be) experientially different, there are six new rules that I’ve distilled from my work. We can use these as an excellent jumping off point for the gamification design process:
- Understand what constitutes a “win” for the organization/sponsor
- Unpack the player’s intrinsic motivation and progress to mastery
- Design for the emotional human, not the rational human.
- Develop scalable, meaningful intrinsic and extrinsic rewards
- Use one of the leading platform vendors to scale your project (don’t roll your own)
- Most interactions are boring: make everything a little more fun
Read Full Post Here.
Geoff Kaufman, Mind/Games #1: Reducing Implicit Bias with Games, November 23, 2011
- Given that one of the major goals of Tiltfactor’s current research is to design games aimed at reducing implicit bias held toward (or by) women in science, technology, engineering, and math (STEM), I thought it would be worthwhile to take a step back and discuss what psychologists have discovered about implicit bias – and how games might be an especially powerful means of reducing or combating it. Read Full Post Here.
Gabe Zichermann, Kids, Brains and Games: a Ted Talk, November 21, 2011
- Because gamified design relies heavily on behavioral economics and psychology (as well as game design and loyalty), I’ve found myself spending a great deal of time in familiar (but substantially updated) territory: thinking about the inherent skills and abilities of people and how to motivate them to change. Much of the science has been radically rethought (including brain plasticity, the extent of which has only recently been revealed), but much of it is fundamentally the same. If we see the complex interplay of hereditary and environmental (or intrinsic and extrinsic) factors on a continuum, we will be best able to design gamified experiences that motivate the change we want to see. Read Full Post Here.
Maryland Institute for Technology in the Humanities, “Archive Ahoy!”: A Dive into the World of Game Preservation, November 4, 2011
- While the preservation of digital games has so far been mostly ad hoc, there is now huge interest among libraries in building an archive of digital games. By asking what the artifact is, and what aspects of the game must be documented, [Preserving Virtual Worlds 2] is coming up with a set of best practices for the preservation of digital games for those institutions that seek to archive and collect these significant digital materials. Read Full Post Here.
Geoffrey Rockwell, Ritsumeikan Center for Game Studies, October 26, 2011
Michael Douma, What is Gamification?, October 20, 2011
- Gameplay has a lot to teach us about motivating participation through joy. ‘Gamification’ is a new term, coined in 2008, for adapting game mechanics into non-game settings — such as building online communities, education and outreach, marketing, or educational apps. Here are some ideas for how to do it. Read Full Post Here.
Ted Underwood has been talking up the advantages of the Mann-Whitney test over Dunning’s Log-likelihood which is currently more widely used. I’m having trouble getting M-W running on large numbers of texts as quickly as I’d like, but I’d say that his basic contention–that Dunning log-likelihood is frequently not the best method–is definitely true, and there’s a lot to like about rank-ordering tests.
Before I say anything about the specifics, though, I want to make a more general point first, about how we think about comparing groups of texts. The most important difference between these two tests rests on a much bigger question about how to treat the two corpuses we want to compare.
Are they a single long text? Or are they a collection of shorter texts, which have common elements we wish to uncover? This is a central concern for anyone who wants to algorithmically look at texts: how far can we ignore the traditional limits between texts and create what are, essentially, new documents to be analyzed? There are extremely strong reasons to think of texts in each of these ways.
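The contrast can be made concrete with a minimal sketch. Dunning’s log-likelihood treats each corpus as one long text and compares pooled word counts; Mann-Whitney treats each corpus as a collection of documents and compares per-document frequencies by rank, which makes it robust to a single outlier document. The counts and per-document frequencies below are invented for illustration.

```python
import math

def dunning_g2(a, b, c, d):
    """Dunning log-likelihood (G2) for a word occurring a times in a
    corpus of c tokens and b times in a corpus of d tokens.
    Treats each corpus as a single long text."""
    e1 = c * (a + b) / (c + d)  # expected count in corpus 1
    e2 = d * (a + b) / (c + d)  # expected count in corpus 2
    g2 = 0.0
    if a > 0:
        g2 += a * math.log(a / e1)
    if b > 0:
        g2 += b * math.log(b / e2)
    return 2 * g2

def mann_whitney_u(xs, ys):
    """Mann-Whitney U statistic: compares per-document frequencies as
    ranks, treating each corpus as a collection of documents."""
    combined = sorted(xs + ys)
    def rank(v):
        # average rank for tied values
        lo = combined.index(v) + 1
        hi = lo + combined.count(v) - 1
        return (lo + hi) / 2
    r1 = sum(rank(x) for x in xs)
    n1 = len(xs)
    return r1 - n1 * (n1 + 1) / 2

# A word appears 90 times per 10,000 tokens in corpus A, 10 in corpus B:
# the pooled G2 score looks dramatic...
print(round(dunning_g2(90, 10, 10000, 10000), 1))  # -> 73.6

# ...but per-document counts show one outlier document drives it all,
# and the rank test is far less impressed (U ranges 0-25 here, mean 12.5).
freqs_a = [0, 0, 1, 2, 87]
freqs_b = [1, 2, 2, 2, 3]
print(mann_whitney_u(freqs_a, freqs_b))  # -> 8.0
```

The toy numbers make Underwood’s point: whether a word is “distinctive” can depend entirely on whether you pool the corpus or rank its documents.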
Read Full Post Here
Collaboration is the lynchpin to supporting all of this productivity, learning, experimenting, and knowledge acquisition. This unwritten goal was reinforced by a few tech industry magnates at Stanford’s BiblioTech Symposium last year: the CEOs want liberal arts and humanities doctoral students who can command language, interpret technical jargon into metaphor and narrative, and work collaboratively in team situations. Humanities scholars often think of themselves as the lonely bibliophiles in the library stacks, quietly slaving over monographs. But, Digital Humanities has altered that paradigm — even required that Humanists consider exposing their collaborative work, even if it isn’t digitally-inclined. Paul Fyfe even proposes that teaching can assume the tenets of Digital Pedagogy without pushing an ON button. Adding to that conversation, I propose that undergraduates and master’s students can offer intriguing, if not altogether unique, perspectives to work in Digital Humanities — beyond the limitations of classroom-specific assignments. That is the kind of life-long learning that could translate so well to economic/employment success.
Read Full Post Here
My point here is not that there are no philosophers developing digital content or using information technology to further philosophical research: David Bourget’s PhilPapers.org, John Immerwahr’s teachphilosophy101.org, and Andy Cullison’s sympoze.com are notable examples of excellent and innovative uses of information technology to advance philosophy. At the same time, there are a number of notable philosophers thinking about the interface of technology and ourselves: David Chalmers, Luciano Floridi and Andy Clark spring to mind.
There are not, however, numerous examples of philosophers using techniques of the digital humanities to _do_ philosophy or using digital tools to teach philosophy.
On an incredibly basic, overly simplified level, philosophy is interested in the discovery, development, classification and analysis of human concepts and reasoning. We teach texts, concepts, arguments, and the historical and social development and influence of such texts, concepts and arguments.
All of these tasks are amenable to the digital humanities. The concepts and reasoning structures common in digital environments are accessible for philosophic analysis, and the tools developed to analyze and archive literature and language can clearly be adapted to philosophic work.
Here I’ll suggest, in broad outlines, a number of areas in which I believe philosophy can, and should, contribute to the digital humanities. These suggestions are by no means exhaustive.
Read Full Post Here
… Instead, let me suggest for public libraries and the DPLA a new mission and vision, one that taxpayers WILL support for many years to come because no other competitor does it, and because if it is explained and implemented properly (see: nationally) it will build stronger, smarter communities, and ultimately build a stronger, smarter country. In one sentence: public libraries need to support information production with the same level of commitment that they’ve always treated information consumption…
…What I’d really like to hear at future meetings are some ideas about how the DPLA movement can incorporate the ideas and the energy that all of these other independent projects have, and how that kind of work can be supported on a national scale without losing the local flavor that remains so important in communities. I’d like to hear less about digitization, which is not to say it is unimportant, but it is to say that preserving the past is probably the least imaginative step forward public libraries can take into the digital future right now. So, in conclusion, here’s what I believe this movement needs next: the DPLA needs a public-facing laboratory, an experimental beta space where we can prototype ideas, curricula, interfaces, strategies, and experiences. I know I’m not alone in wanting this; I’ve had many conversations in which this has come up. Look for future posts describing this beta space.
Read Full Post Here
First and foremost, digitization of natural history collections and tools to make these digitized records available, such as VertNet, support global biodiversity research. We suspect that the majority of use of digitized records will be to generate products such as species distribution models and change assessments, and to answer questions about what is in any given museum collection. However, in the broader context of academic endeavor, these data could also serve as a unique link between the digital sciences and the digital humanities. Work in the digital humanities includes everything from crowdsourcing manuscript transcription to humanistic fabrication to data mining — work that is not so dissimilar in method, description, or data type from that in the digital sciences.
Our question is a simple one: Where do the digital humanities and e-science overlap and interconnect? One method of digital investigation that caught our attention is the mapping of novels and other historic texts; researchers take prose text and mine it for mappable units…. This made us think: what sorts of questions could we ask of a data set composed of all kinds of georeferences — not just species occurrence records, but locations from history or works of fiction as well? If students of the humanities can create maps with such texture using similarly organized data sets, could they build on this richness by including analysis of the natural world as it existed at the time described in the novel? Perhaps searching on the VertNet portal (or GBIF or ALA) could provide a detailed list of vertebrate species and, with a little more work, the associated ranges of these species. Suddenly, the map of Mrs. Dalloway’s world, and the atmosphere of Clarissa’s party, can be enriched not only with human influence and creation, but by the natural environment, too. Conversely, data from diaries or other digitized sources could be mined for data about distributions of now-extinct species. Could these data be used as observations and published as records along with those from natural history collections?
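The kind of mixed data set the authors imagine can be sketched very simply: species occurrence records and locations mined from fiction sharing one minimal georeference schema, queryable together. All records below (the species points, the literary location, and the bounding box) are hypothetical, chosen only to illustrate the shape of such a query.

```python
# Hypothetical mixed georeferences: museum specimen records alongside
# locations mined from fiction, sharing one minimal schema.
records = [
    {"name": "Passer domesticus", "kind": "specimen",
     "lat": 51.507, "lon": -0.165},
    {"name": "Regent's Park (Mrs. Dalloway)", "kind": "literary",
     "lat": 51.531, "lon": -0.157},
    {"name": "Vulpes vulpes", "kind": "specimen",
     "lat": 48.857, "lon": 2.352},
]

def within(rec, south, west, north, east):
    """True if the record falls inside the bounding box."""
    return south <= rec["lat"] <= north and west <= rec["lon"] <= east

# One query over a rough central-London bounding box returns both the
# natural-history record and the literary one, regardless of origin.
london = [r["name"] for r in records if within(r, 51.3, -0.5, 51.7, 0.3)]
print(london)
```

Once both kinds of record sit in the same structure, enriching a literary map with contemporary species ranges (or vice versa) becomes an ordinary spatial join rather than a disciplinary leap.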
Read Full Post Here.
The digital humanities in the West has been biased towards text as the bearer of culture. The foundational stories and early concerns of computing in the humanities are around concording and text analysis. Humanities computing has branched out to digitize other cultural forms, but even so we tend to focus on digitizing and creating databases of tangible cultural artefacts like paintings, archaeological sites, movies, and so on. By contrast, as I have written before, in Japan a large percentage of the traditional arts, from Bunraku to Noh, are in the class of intangible cultural property. Intangible cultural traditions are supported aggressively in Japan, through support for individual masters and for organizations engaged in preservation activities. The challenge with intangible heritage, however, is that you are not preserving an object, but a process or a tradition of teaching a process…
I see this as an illustration of a larger point – we always lose information when digitizing, even when digitizing text. We are always making choices about what to capture, what resolution to capture at, and what contextual information to add as enrichment. In text digitization we have lost sight of the materiality of text and all that is lost because we work in a tradition that sees the string (sequence of characters) as what is important. A new edition of Hamlet with an edited text in a different material form is treated ontologically as Hamlet. This is not the case with animated Noh. It is not a Noh performance, partly because it was not developed in the traditional way.
Read Full Post Here.
According to Google Scholar, David Blei’s first topic modeling paper has received 3,540 citations since 2003. Everybody’s talking about topic models. Seriously, I’m afraid of visiting my parents this Hanukkah and hearing them ask “Scott… what’s this topic modeling I keep hearing all about?” They’re powerful, widely applicable, easy to use, and difficult to understand — a dangerous combination.
Since shortly after Blei’s first publication, researchers have been looking into the interplay between networks and topic models. This post will be about that interplay, looking at how they’ve been combined, what sorts of research those combinations can drive, and a few pitfalls to watch out for. I’ll bracket the big elephant in the room until a later discussion: whether these sorts of models capture the semantic meaning for which they’re often used. This post also attempts to introduce topic modeling to those not yet aware of its potential.
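One of the simplest ways topic models and networks get combined is to take the document-topic proportions a model like LDA outputs and induce a network from them: documents become nodes, and an edge links any two documents whose topic mixtures are similar enough. The sketch below uses hand-made topic proportions (not a real fitted model) and cosine similarity with an arbitrary threshold, purely to show the shape of the construction.

```python
import math

# Hypothetical document-topic proportions (each row sums to 1), of the
# kind a topic model such as LDA would produce: four documents, three topics.
doc_topics = {
    "doc1": [0.8, 0.1, 0.1],
    "doc2": [0.7, 0.2, 0.1],
    "doc3": [0.1, 0.1, 0.8],
    "doc4": [0.1, 0.2, 0.7],
}

def cosine(u, v):
    """Cosine similarity between two topic-proportion vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Link documents whose topic mixtures are similar enough; the cutoff
# is an arbitrary modeling choice, one of the pitfalls worth noting.
THRESHOLD = 0.9
names = sorted(doc_topics)
edges = []
for i, a in enumerate(names):
    for b in names[i + 1:]:
        if cosine(doc_topics[a], doc_topics[b]) >= THRESHOLD:
            edges.append((a, b))

print(edges)  # two clusters emerge: (doc1, doc2) and (doc3, doc4)
```

Even this toy version shows why the "semantic meaning" question matters: the network's clusters are only as meaningful as the topic proportions and the similarity threshold that produced them.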
Read Full Post Here
I thought, why not map the places that had Wikipedia articles associated with them, to see what patterns emerged. The results of this excursion are presented below.
DBpedia is the ongoing attempt to transform Wikipedia into a semantically rich and queryable database of human knowledge. It stores much of the categorical information found in Wikipedia articles using RDF triples–simple links for every snippet of data, from the death date of a famous (and sometimes even real) person to the season number of every Simpsons episode, to the latitude and longitude of over half a million articles on a wide variety of subjects.
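The triple structure is easy to picture in miniature: every fact is a (subject, predicate, object) tuple, and a geocoded article is simply a subject that has both a latitude and a longitude triple. The records below are illustrative stand-ins, not actual DBpedia query results, and the predicate names are abbreviations of the real `geo:` vocabulary.

```python
# Hypothetical DBpedia-style triples: (subject, predicate, object).
triples = [
    ("Golden_Gate_Bridge", "geo:lat", 37.8199),
    ("Golden_Gate_Bridge", "geo:long", -122.4783),
    ("Golden_Gate_Bridge", "dbo:opened", "1937-05-27"),
    ("Sherlock_Holmes", "dbo:creator", "Arthur_Conan_Doyle"),
]

def geocoded(triples):
    """Return {article: (lat, long)} for every subject that has
    both a latitude and a longitude triple."""
    lats = {s: o for s, p, o in triples if p == "geo:lat"}
    longs = {s: o for s, p, o in triples if p == "geo:long"}
    return {s: (lats[s], longs[s]) for s in lats if s in longs}

print(geocoded(triples))
```

In practice the same filter is expressed as a SPARQL query against the DBpedia endpoint, but the principle is identical: mappable articles are just the subjects where the two coordinate predicates intersect.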
Jon is the director of the Bill Lane Center for the American West, and he brought with him two undergraduate research assistants, Jenny Rempel & Judee Burr. I showed them how to perform simple spatial queries to get Wikipedia articles located in San Francisco and we discussed what this data may mean and the thorny issues that we may need to account for in its use.
Read Full Post Here