By Tiltfactor at Dartmouth College | May 21, 2013
Metadata Games is an online game system for gathering useful data on photo, audio, and moving image artifacts, enticing those who might not visit archives to explore humanities content while contributing to vital records. Furthermore, the suite enables archivists to gather and analyze information for image archives in novel and possibly unexpected ways.
Check out Metadata Games Here.
By Tricia Wang | May 16, 2013
Big Data can have enormous appeal. Who wants to be thought of as a small thinker when there is an opportunity to go BIG?
The positivistic bias in favor of Big Data (a term often used to describe the quantitative data that is produced through analysis of enormous datasets) as an objective way to understand our world presents challenges for ethnographers. What are ethnographers to do when our research is seen as insignificant or of little value? Can we simply ignore Big Data as too muddled in hype to be useful?
No. Ethnographers must engage with Big Data. Otherwise our work can be all too easily shoved into another department, minimized as a small line item on a budget, and relegated to the small data corner. But how can our kind of research be seen as equally important to algorithmically processed data? What is the ethnographer’s 10-second elevator pitch to a room of data scientists?
Big Data produces so much information that it needs something more to bridge and/or reveal knowledge gaps. That’s why ethnographic work holds such enormous value in the era of Big Data.
Lacking the conceptual words to quickly position the value of ethnographic work in the context of Big Data, I have begun, over the last year, to employ the term Thick Data (with a nod to Clifford Geertz!) to advocate for integrative approaches to research. Thick Data uncovers the meaning behind Big Data visualization and analysis.
Thick Data analysis primarily relies on human brain power to process a small “N,” while Big Data analysis requires computational power (with humans, of course, writing the algorithms) to process a large “N.” Big Data reveals insights with a particular range of data points, while Thick Data reveals the social context of and connections between data points. Big Data delivers numbers; Thick Data delivers stories. Big Data relies on machine learning; Thick Data relies on human learning.
Read Full Post Here
By Michael Kramer | May 14, 2013
My response to OPEN THREAD: THE DIGITAL HUMANITIES AS A HISTORICAL “REFUGE” FROM RACE/CLASS/GENDER/SEXUALITY/DISABILITY?, http://dhpoco.org/2013/05/10/open-thread-the-digital-humanities-as-a-historical-refuge-from-raceclassgendersexualitydisability/#comment-1907:
This is a rich and multifaceted discussion. I just want to add one observation that it has made me ponder.
The discussion has made me think about the metaphor of “tools” in digital humanities work. This makes sense, because the word “tools” is full of all kinds of associations. It is most of all a way to imaginatively bridge the troubling gap between the individual and the machine, between older modes of production and autonomy and newer ones. It alludes to the romantic vision of handicraft labor; it hints at masculinized visions of work (in ways that Natalia Cecire convincingly proposes are profoundly gendered); it offers a way to overcome questions of scale between the individual and larger, overwhelming, and dehumanizing structural forces (tools are held in the hand, safe and under control, mastered; machines are scary, semi-autonomous, out of control things, watch out, it’s Frankenstein!!); tools are at once something from the deep past, often conceptualized as what makes us human–as in it is our species nature to create and use them–and they are also somehow futuristic, key to a cyborgian vision of humans fusing with machines; tools are a way of mediating between, navigating between, compromising between culture and counterculture (I’m thinking of Fred Turner’s great work here on Stewart Brand in From Counterculture to Cyberculture), as in, tools are ostensibly small and mobile enough that they can be wielded in an oppositional way by the marginalized, oppressed, impoverished even though it takes the massive infrastructure of modernity, military-industrial style, to create them; and while we’re at it, let’s make a bad joke about tools and all the sexual innuendo therein.
Read Full Post Here
By Adeline Koh and Roopika Risam | May 14, 2013
Read David Golumbia’s post on the “Dark Side of the Digital” conference yesterday? Consider this:
In 2007, Martha Nell Smith observed:
When I first started attending humanities computing conferences in the mid-1990s, I was struck by how many of the presentations remarked, either explicitly or implicitly, that concerns that had taken over so much academic work in literature—of gender, race, class, sexuality—were irrelevant to humanities computing. For those not held back by the sentimental and simplistic question of whether books would be displaced by electronic media, the field of humanities computing brought the models and rigors of science to the intellectual work of literary and artistic criticism and theory, and in that fulfilled some new critical dreams of bringing objectivity, rational thought, and aesthetic purity to departments of English. Scientific matters of mathematics and computation, objective and hard, do not seem to be subject to the concerns of gender, race, or sexuality. 2 + 2, so the reasoning goes, always equals 4, whether you are black, a woman, a queer, a straight, or whatever. HTML, SGML, XML—the codes that make words and images, texts, processable—and TEI conformancy are supposedly gender-, race-, class-neutral. The codes always work, and the principles always apply, whatever one’s personal identity or social group (or so many seemed to believe). It was as if these matters of objective and hard science provided an oasis for folks who do not want to clutter sharp, disciplined, methodical philosophy with considerations of the gender-, race- and class-determined facts of life. After all, in the wake of the sixties, the humanities in general and their standings in particular had suffered, according to some, from being feminized by these things. Humanities computing seemed to offer a space free from all this messiness and a return to objective questions of representation. (4)
Martha Nell Smith, “The Human Touch Software of the Highest Order: Revisiting Editing as Interpretation.” Textual Cultures: Texts, Contexts, Interpretation 2.1 (2007): 1–15. Full text available for free here.
- In your view, how much of this has changed since Smith’s article was published, if at all?
- What is your perspective on the intermingling of race, class, gender, sexuality and disability and the digital humanities?
- What are your “core” texts of the digital humanities, and how do they engage with race, class, gender, sexuality and disability?
- How are cultures of technology implicated in imperial projects? Is there existing DH work on digital colonialisms?
- How would you write a genealogy of the digital humanities?
- How should the digital humanities adapt and change, if at all?
Please add your comments below.
Read Comments Here
By Aaron Straup Cope | May 9, 2013
Museums and the Web 2013 wrapped up a couple weeks ago. The Cooper-Hewitt won an award for the work we’ve done on the collections website this year, which was nice. I was also part of a panel about Humour as an Institutional Voice.
I asked Heather Champ to join me to talk about the subject because, aside from having thought about creating and nurturing community-based projects for a long time and having been (essentially) the public face of Flickr, she is also the person responsible for enshrining the words “Don’t be creepy” in a legal document.
Piotr Adamczyk is technically ex-still-sorta museum people, having spent years at the Met and now managing the data side of things for the Google Art Project. The reason I asked Piotr is that I had the pleasure of seeing him speak at the National Digital Forum in New Zealand last year, where he compared the Art Project to a Rachel Whiteread sculpture. Google, he said, “can show you the shape of the inside of your institution.”
Which is a fascinating way to think about what Google does, and yet it is so rare that we ever hear big companies speak about themselves that way. Which is why I thought it would be good to have people from business on the panel to talk about the tensions of putting forward a public face. Piotr had to pull out just before the conference and Dan Hon was gracious enough to fill in at the last minute.
Over breakfast the morning of the panel, Dan described some of the work that they do at Wieden+Kennedy as helping organizations “form an opinion about themselves,” which nicely sums up quite a lot of the issues latent in the panel’s subject. Dan also wrote the quote above and, more recently, a proper good essay on the tyranny of digital advertising. Most of the panel was a discussion between the three of us and the audience, but I did a short talk to try and frame some of the issues around the idea of humour and institutional voice.
The talk itself is a re-telling and a stretching-out of a duet that Seb and I did in March at the ArtsTech Meetup in New York City.
At the time that we were putting together the slides for that talk, I imagined somehow working in a piece I had just written about digital public spaces and measures of success for the Future Everything conference in Manchester. I still think about doing that some day, though it’s probably better that I’ve not tried. I’ve included the full text of that essay below if you’re interested.
This is what I said, instead:
Read Full Post Here.
By Sheila Brennan | May 9, 2013
As I watched the news on April 15 and thought about another April tragedy, at Virginia Tech, I wondered if it made sense to create an online collecting site. I have some experience building and managing online collecting sites/digital memory banks, now referred to as crowdsourced collections, at RRCHNM, including the April 16 Archive. A few days after the shootings at Virginia Tech, I worked with former RRCHNM programmer Kris Kelly to help VA Tech launch that site, so that they could respond to, collect, and make public all of the memories and materials surrounding that dark time.
And then someone asked @CHNM on Twitter, if we were archiving the coverage of the Boston shootings.
As I considered the prospect of starting another unfunded collecting project in response to current events (see: Occupy Archive), I began to question if Internet users would still come to digital memory banks, as we know them. (Since the time I started drafting this post, we’ve learned that Northeastern is working on something.)
In 2013, sharing personal stories and photographs, generating memes, and posting videos is commonplace for many Americans. According to the Pew Internet and American Life survey, sixty-seven percent of Internet users use some type of social networking site. People are sharing quite a bit within their own networks, and within networks that have specific terms of service. Will they want to share again in another web space?
Don’t get me wrong: I still see value in the practice of collecting online and in building non-commercial, open resources filled with first-person accounts, reactions, and memories of tragic and celebratory events, over which individual contributors retain ownership and control of use. As Internet users access many different platforms and use the Web in more ways, people are much more comfortable sharing online with their own social networks. There are many places to react and emote immediately; as a result, there is much more noise on the Web, and finding a digital collecting site seems much more challenging. The question remains: how can we best save those reactions for historians and other researchers to access in the future? Conversely, should we try to save all of those reactions?
Read Full Post Here.
By John Stack | May 7, 2013
Through the development of a holistic digital proposition there is an opportunity to use the digital to deliver Tate’s mission to promote public understanding and enjoyment of British, modern and contemporary art. To achieve this, digital will need to become a dimension of everything that Tate does.
Tate aims to use digital platforms and channels to provide rich content for existing and new audiences for art, to create and nurture an engaged arts community, and to maximise the associated revenue opportunities. We will achieve this by embracing digital activity and developing digital skills across the organisation.
2. Digital principles
Tate’s audiences will have digital experiences that:
- increase their enjoyment and understanding of art
- provoke their thoughts and invite them to participate
- promote the gallery programme
- provide them with easy access to information
- entice them to explore deeper content
- encourage them to purchase products, join Tate and make donations
- present an elegant and functional interface whatever their device
- take place on the platforms and websites they use
- minimise any obstacles they may encounter
To achieve this, we will take an approach that is:
- audience-centred and insight-driven
- constantly evaluated and enhanced
- well designed and architected
- distributed across multiple platforms
- open and sharable
- sustainable and scalable
- centrally governed and devolved across the organisation
Given audience demand for high quality online arts content, Tate has an opportunity to become a more significant digital publisher.
Tate already has considerable strengths in this field, notably in publishing Tate Kids, Tate Shots, Tate Papers, Tate Etc. as well as the online collection and learning resources. We shall co-ordinate, extend and promote these activities to better serve our audiences, and intend to focus on the following four areas of new content production:
Read Full Statement Here.
By Alan Liu | May 7, 2013
In April, Dr. Alan Liu, Professor in the Department of English at the University of California, Santa Barbara, spoke about ways in which the skills and resources of the DH community can help advocate for the humanities.
Listen to podcast here.
By Daniel Allington | April 30, 2013
As we all know, the digital humanities are the next big thing. A couple of years ago, I gave a presentation at a digital humanities colloquium, explaining what I saw as the major reasons for this (Allington, 2011). We are working within an economic system in which owners of capital (funders) invest in research speculatively purchased in advance from the owners of the means of knowledge production (universities), with permanent employees of the latter (what North Americans call ‘faculty’) playing the role of brokers between the two (both as writers and as reviewers of grant applications) and managing the precariously-employed sellers of labour (junior academics and support staff on temporary contracts) who actually get things done. Humanities research is traditionally cheap, which is bad from at least two points of view: funders want to save money by administering fewer, larger grants, while universities want to see every department generating research income on a par with that pulled in by STEM centres. The digital humanities come to the rescue by being so conveniently expensive: they appear not merely to profit from but to require such costly things as computer hardware, server space, and specialised technical support staff who – in a further benefit from the point of view of the ethically-indifferent university – can be employed on fixed-term contracts, instantly disposed of when the period of funding comes to an end, and almost as instantly replaced once the next grant is landed. It didn’t have to be like this: computers can as easily reduce as increase the size of a research project. In the funding game, however, the goal is not quality, nor even efficiency, but only bigger and bigger contracts. This is the context within which the digital humanities have fashioned themselves from their less tiresomely glamorous predecessor, ‘humanities computing’.
Read Full Post Here
By Forum guests Jeremy Douglass, Lev Manovich, Elijah Meeks, Michael Stamper, and Mia Ridge | April 25, 2013
In recent years, visualization has become an all-purpose technique for communicating and exploring data within the humanities. A wide range of tools offers different points of entry, from IBM’s Many Eyes to Gephi to Tapor 2.0. Projects like the Visual Thesaurus, Mapping the Republic of Letters, and Hypercities, among countless others, all engage with visualization as an integral part of their scholarship. Yet they do so in very different ways and from a wide variety of disciplinary perspectives, leaving us to ask: what is visualization in the humanities?
Why do we use it? How do we use it? And to what end?
This forum seeks to explore some of the key ideas and problems at stake in beginning to articulate an answer. Taking an expansive view of both visualization and the humanities, this forum will interrogate not only the ways in which tools are used, but also the different priorities and intersections of varying disciplines. Using four broad categories (case studies, tools, theory, and pedagogy) as a loose structure, we hope to encourage an open conversation that will speak to the growing use of visualization for research and teaching across the humanities.
Questions and Starting Points:
- Case studies: Why and how do you use visualization? What are some of the existing exemplary visualizations in humanities research? What do they offer and/or do that makes them exemplary? What kind of work do they do that is different from other scholarship in the humanities?
- Tools: What visualization techniques and/or tools have you found helpful in creating visualizations? Do certain visualization techniques and/or tools have inherent limitations (practically or conceptually)?
- Theory: How might we combine what Franco Moretti has called “distant reading” with existing practices of close reading? How do we understand visualization theoretically and critically, in relation to other forms of past and present media?
- Pedagogy: What does visualization expect and/or offer pedagogically? In what ways is it a tool for understanding, communicating, and creating knowledge in the classroom?
Read Full Discussion Here