Recently, several folks on Twitter have noted their displeasure that the Tate appears to be linking to Wikipedia articles in lieu of authoring their own written biographies of artists represented in their collections.
I… actually don’t have a problem with what the Tate is doing.
Except for a few unique institutions founded around a single artist’s estate, very few art museums really have the authority, or, frankly, the mission, to be authorities on the biographies of the artists in their collections. It would be one thing if the Tate were deferring to Wikipedia articles about the unique objects within its collection. Bendor Grosvenor erroneously suggests that the Tate is copying and pasting this material for their collection catalog entries, but they are not. Instead, they’re using it for that most unsatisfying category of copy expelled by art museums: the artist biography.
As a graduate student and curatorial fellow at the National Gallery of Art, I spent hours and hours of expert time drafting biographies of artists represented in that museum’s Dutch collections. This was almost always a secondary literature review (thank goodness — no responsible museum board will fund research trips to archives to write three-paragraph biographical blurbs!). My colleagues and I generated some quite rich and educational copy for the website, and it was a lovely learning experience… for us, the students.
However, except for the most minor artists, we were mostly just rewording and enriching well-covered biographies from the Benezit Dictionary of Artists or Grove® Art Online. Hours of expert research time were basically spent reinventing the wheel — something that absolutely did not have to be done for ridiculously well-biographied artists like Rembrandt. Any one of these hours could have been better applied researching and communicating what was unique to our museum: the specific objects in the collection itself.
Read the full post here.
From the resource:
Rapid Response Research (RRR) projects are quickly deployed scholarly interventions in pressing political, social, and cultural crises. Together, teams of researchers, technologists, librarians, faculty, and students can pool their existing skills and knowledges to make swift and thoughtful contributions through digital scholarship in these times of crisis. The temporality of a rapid response is relative and will vary depending on the situation, from a matter of days, to a week, or several weeks. Our model below is relevant to the variable timelines a situation might require, but it bears remembering that a crisis itself has an immediacy, and that RRR projects, accordingly, bring with them a pressure to respond with intensity and speed. Torn Apart/Separados is an example of RRR. While the recommendations below are based on RRR data narratives, many elements could be more broadly applicable to other types of RRR.
Read the full resource here.
We (Tim Causer, Kris Grint, Anna-Maria Sichani, and me!) have recently published an article in Digital Scholarship in the Humanities on the economics of crowdsourcing, reporting on the Transcribe Bentham project, which is formally published here:
Alack, due to our own economic situation, it’s behind a paywall there. It’s also embargoed for two years in our institutional repository (!). But I’ve just been alerted to the fact that the license of this journal allows the author to put the “post-print on the author’s personal website immediately”. Others publishing in DSH may also not be aware of this clause in the license!
So here it is, for free download, for you to grab and enjoy in PDF.
I’ll stick the abstract here. It will help people find it!
In recent years, important research on crowdsourcing in the cultural heritage sector has been published, dealing with topics such as the quantity of contributions made by volunteers, the motivations of those who participate in such projects, the design and establishment of crowdsourcing initiatives, and their public engagement value. This article addresses a gap in the literature, and seeks to answer two key questions in relation to crowdsourced transcription: (1) whether volunteers’ contributions are of a high enough standard for creating a publicly accessible database, and for use in scholarly research; and (2) if crowdsourced transcription makes economic sense, and if the investment in launching and running such a project can ever pay off. In doing so, this article takes the award-winning crowdsourced transcription initiative, Transcribe Bentham, which began in 2010, as its case study. It examines a large data set, namely, 4,364 checked and approved transcripts submitted by volunteers between 1 October 2012 and 27 June 2014. These data include metrics such as the time taken to check and approve each transcript, and the number of alterations made to the transcript by Transcribe Bentham staff. These data are then used to evaluate the long-term cost-effectiveness of the initiative, and its potential impact upon the ongoing production of The Collected Works of Jeremy Bentham at UCL. Finally, the article proposes more general points about successfully planning humanities crowdsourcing projects, and provides a framework in which both the quality of their outputs and the efficiencies of their cost structures can be evaluated.
Read the full piece here.