Collaboration is the linchpin supporting all of this productivity, learning, experimenting, and knowledge acquisition. This unwritten goal was reinforced by a few tech industry magnates at Stanford's BiblioTech Symposium last year: the CEOs want liberal arts and humanities doctoral students who can command language, translate technical jargon into metaphor and narrative, and work collaboratively in teams. Humanities scholars often think of themselves as lonely bibliophiles in the library stacks, quietly slaving over monographs. But the Digital Humanities has altered that paradigm, even requiring that Humanists consider exposing their collaborative work, whether or not it is digitally inclined.
My point here is not that there are no philosophers developing digital content or using information technology to further philosophical research: David Bourget's PhilPapers.org, John Immerwahr's teachphilosophy101.org, and Andy Cullison's sympoze.com are notable examples of excellent and innovative uses of information technology to advance philosophy. At the same time, a number of notable philosophers are thinking about the interface between technology and ourselves: David Chalmers, Luciano Floridi, and Andy Clark spring to mind.
There are not, however, numerous examples of philosophers using techniques of the digital humanities to _do_ philosophy or using digital tools to teach philosophy.
At a basic, admittedly oversimplified level, philosophy is interested in the discovery, development, classification, and analysis of human concepts and reasoning.
Instead, let me suggest for public libraries and the DPLA a new mission and vision, one that taxpayers WILL support for many years to come because no other competitor does it, and because if it is explained and implemented properly (see: nationally) it will build stronger, smarter communities, and ultimately build a stronger, smarter country. In one sentence: public libraries need to support information production with the same level of commitment that they’ve always treated information consumption.
First and foremost, digitization of natural history collections and tools to make these digitized records available, such as VertNet, support global biodiversity research. We suspect that the majority of use of digitized records will be to generate products such as species distribution models and change assessments, and to answer questions about what is in any given museum collection. However, in the broader context of academic endeavor, these data could also serve as a unique link between the digital sciences and the digital humanities. Work in the digital humanities includes everything from crowdsourcing manuscript transcription to humanistic fabrication to data mining — work that is not so dissimilar in method, description, or data type from that in the digital sciences.
The digital humanities in the West have been biased towards text as the bearer of culture. The foundational stories and early concerns of computing in the humanities center on concording and text analysis. Humanities computing has branched out to digitize other cultural forms, but even so we tend to focus on digitizing and creating databases of tangible cultural artefacts like paintings, archaeological sites, movies, and so on. By contrast, as I have written before, in Japan a large percentage of the traditional arts, from Bunraku to Noh, fall into the class of intangible cultural property. Intangible cultural traditions are supported aggressively in Japan, both through direct support to individual masters and organizations and through support for preservation activities.
According to Google Scholar, David Blei’s first topic modeling paper has received 3,540 citations since 2003. Everybody’s talking about topic models. Seriously, I’m afraid of visiting my parents this Hanukkah and hearing them ask “Scott… what’s this topic modeling I keep hearing all about?” They’re powerful, widely applicable, easy to use, and difficult to understand — a dangerous combination.
Since shortly after Blei’s first publication, researchers have been looking into the interplay between networks and topic models. This post will be about that interplay, looking at how they’ve been combined, what sorts of research those combinations can drive, and a few pitfalls to watch out for.
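One common way topic models and networks are combined is to project documents' topic mixtures into a document similarity network. As a minimal sketch (the document names, topic proportions, and threshold below are invented for illustration, standing in for the output of a model such as LDA), we can connect any two documents whose topic distributions are similar enough:

```python
from itertools import combinations
from math import sqrt

# Toy per-document topic proportions (invented numbers standing in for
# the per-document output of a topic model such as LDA).
doc_topics = {
    "doc_a": [0.8, 0.1, 0.1],
    "doc_b": [0.7, 0.2, 0.1],
    "doc_c": [0.1, 0.1, 0.8],
}

def cosine(u, v):
    """Cosine similarity between two topic-proportion vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm

# Build a document network: draw an edge between documents whose
# topic mixtures are similar above a chosen (arbitrary) threshold.
THRESHOLD = 0.9
edges = [
    (d1, d2)
    for d1, d2 in combinations(sorted(doc_topics), 2)
    if cosine(doc_topics[d1], doc_topics[d2]) > THRESHOLD
]
print(edges)  # doc_a and doc_b share a dominant topic; doc_c does not
```

The resulting edge list can then be handed to standard network analysis tools (community detection, centrality measures), which is where one of the pitfalls lies: the threshold choice silently shapes the network's structure.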
I thought: why not map the places that had Wikipedia articles associated with them, to see what patterns emerged? The results of this excursion are presented below.
DBpedia is an ongoing attempt to transform Wikipedia into a semantically rich and queryable database of human knowledge. It stores much of the categorical information found in Wikipedia articles as RDF triples: simple links for every snippet of data, from the death date of a famous (and sometimes even real) person, to the season number of every Simpsons episode, to the latitude and longitude of over half a million articles on a wide variety of subjects.
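The triple idea itself is simple enough to sketch in a few lines. The resources and values below are invented placeholders, not real DBpedia identifiers, and the `query` helper is a toy pattern-matcher rather than DBpedia's actual SPARQL endpoint, but it shows how uniform subject-predicate-object facts make a query like "find every geotagged article" straightforward:

```python
# Every fact is a (subject, predicate, object) tuple. These example
# triples are invented for illustration, not real DBpedia data.
triples = [
    ("Springfield", "type", "Place"),
    ("Springfield", "latitude", 39.8),
    ("Springfield", "longitude", -89.6),
    ("Homer_Simpson", "type", "FictionalCharacter"),
]

def query(triples, subject=None, predicate=None, obj=None):
    """Return all triples matching the pattern; None acts as a wildcard."""
    return [
        (s, p, o)
        for s, p, o in triples
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# Find every geotagged subject: anything carrying a latitude predicate.
geotagged = [s for s, _, _ in query(triples, predicate="latitude")]
print(geotagged)  # ['Springfield']
```

In practice one would issue an equivalent pattern as a SPARQL query against DBpedia's endpoint, but the underlying logic is the same wildcard match over triples.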
Jon is the director of the Bill Lane Center for the American West, and he brought with him two undergraduate research assistants, Jenny Rempel
Note from the Editors: Mr. Puschmann authored four excellent recaps of the Berlin 9 conference which are highlighted below. He also compiled a list of other posts which are listed here as well.
Note from the Editors: These posts are part of an ongoing conversation about text-mining and statistical analysis of language. To further investigate the methods used, please follow the links provided by the authors.
“Like Morozov and Lanier, I find a similar Delusion, though one more academically minded; let’s call it the “DH Delusion.” The DH Delusion begins with a similar sort of cyber-utopianism. I remember the excitement of my first Digital History course, in which it seemed not only possible but probable that in a matter of years most scholarship would be produced in the digital medium. The Internet seemed to promote the sort of intellectual freedom and scholastic democracy that could topple an oppressive and outdated structure of academia.”