As earlier reports on historians’ use of technology demonstrated, most historians are gathering materials, analyzing their findings, and writing their scholarship in digital form. Curiously, however, a national survey in fall 2015 found that much of the profession remains skeptical about the value of disseminating their scholarship electronically (aside from digital versions of their print publications). As of 2015, 26% of historians had reportedly published their work online (which was up substantially from the 20% among respondents in a similar 2010 survey), but the share with publications was less than half the share (58%) of historians who reported they had considered publishing their work online. (The latter was essentially the same share as in 2010.) Among those who had not published something online, this ambivalence appeared to arise from two principal sources—personal doubts about the value of this form of work, and a larger sense that there is little professional appreciation or credit for this form of work.
Read the full post here.
Over 2,000 new hieroglyphs may soon be available for use on cell phones, computers, and other digital devices. The Unicode Consortium recently released a revised draft of standards for encoding Egyptian Hieroglyphs. Led by Unicode Consortium member Michel Suignard, the proposal would add over 2,000 new glyphs to the current Unicode standard, and would provide greater global standardization and ease of use for Egyptologists through a searchable hieroglyph database. If approved, the available hieroglyphs will provide greater access and global uniformity for Egyptologists, covering a much longer period of hieroglyphic usage than ever before. The proposal is part of a larger effort among the Unicode Consortium, scholars of ancient languages, font designers, and the federal government to study, preserve, and digitally represent ancient and endangered languages through computer code.
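For context, a smaller Egyptian Hieroglyphs block (U+13000 to U+1342F) has already been part of Unicode since version 5.2; the proposal extends this earlier work. A minimal Python sketch shows what "representing an ancient language through computer code" means in practice, with each sign addressed by a numeric code point:

```python
import unicodedata

# The existing Egyptian Hieroglyphs block spans U+13000 to U+1342F.
# U+13000 is the first character in the block (Gardiner sign A1, a seated man).
glyph = chr(0x13000)

name = unicodedata.name(glyph)
code_point = f"U+{ord(glyph):04X}"

print(glyph)        # renders the sign, if a suitable font is installed
print(name)         # EGYPTIAN HIEROGLYPH A001
print(code_point)   # U+13000
```

Because the standard assigns each sign a permanent code point and name, any device with a compliant font can display and search the same glyphs, which is what makes the proposed database useful across platforms.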
Read more here.
Editor's Note: This is the second post in a two-part series exploring a digital history course taught at Carleton University in Winter 2018. Part one explains the premise behind #hist3812.
In part one, Graham explained the rationale and unfurling of HIST3812, Critical Making in Digital History. At the end of the course, he invited the students to craft a collaboratively written ‘exit ticket’ that explored their understanding of what the course accomplished. This exit ticket was not graded, although the students could incorporate it into their end-of-term portfolio of work.
The exit ticket was written on the final day of class (a 1.5-hour block of time) through a student-directed discussion and division of labour on an open Google document. Graham prepared the shell of the document beforehand with suggested headers (which the students left largely intact). He observed the discussion, but periodically left the classroom so that the students could discuss issues openly without him.
Read part one here and part two here.
This is the transcript of the talk I gave this evening at the CUNY Graduate Center.
…I do want to talk a little bit this evening about the work I’ve been doing as a Spencer Fellow. That’s not what it says in my title and abstract, I recognize. And that’s the curse of making up titles and abstracts in advance: sometimes you sit down to prepare a talk and realize you really want to say something else entirely, and so your task becomes trying to thread things all together so that no one who shows up expressly to hear you expand on the ideas advertised on the flyer is too frustrated or disappointed…
I love old teaching machines, yes even BF Skinner’s teaching machines, despite their deeply problematic usage. I love them, in part, because they are objects. These objects carry a history. They reflect an ideology. They have substance. They have weight – literally, culturally, intellectually, politically. They are material artifacts, and we can talk about how they were made, how they were manufactured. And that seems particularly important, that materiality. It helps us see design and functionality and production and history and even ideology in ways that I think today’s digital teaching machines and teaching machine-makers are more than happy to obscure.
Read full post here.
Two Directions in AI
In the inaugural issue of AI & Society, published in 1987, Ajit Narayanan identified two directions that propelled the discipline of artificial intelligence. The first was “Implement and be damned” whereby programs are produced to replicate tasks performed by humans with relevant expertise (p. 60). Motivated by efficiency, these programs might only tangentially be identified as AI, Narayanan noted, because, rather than adhering to certain computing principles, they might simply be written in a particular programming language associated with AI. (See, for example, Lisp.) The second direction was “We’re working on it,” which he associated with “grandiose claims” about the future of AI systems that “‘could control a nuclear power station’” or “‘shield us from incoming missiles’.” But both directions in AI shared the same dangers, according to Narayanan: an economic imperative that would further displace the care of humans for that of profit and a misplaced belief in the power of computation to solve problems more accurately than humans, perhaps even perfectly. To combat these dangers, he pointed to the importance of accountability to the general public; for, “as long as AI is removed from the domain of ordinary people, AI will remain unaccountable for whatever products it produces” (p. 61).
In the three decades since Narayanan made his argument, much has changed, with ordinary people being dialed into the everyday relevance of AI, as well as its potential for transformative societal effects. In addition to the near constant heralding of the practical benefits of AI on college campuses, to the aging, in music streaming, and with transportation, AI has also been celebrated for its potential in creative endeavors in IBM’s Watson advertisements that have featured Bob Dylan and Stephen King. (Much-needed parody of Dylan’s ad is available here.) And although such celebration may be premature, the success of Google’s AlphaGo points to the very real possibility of strategic, quotidian invention on the part of AI.
Read the full post here.
[At Jeff McClurken’s invitation, I was recently part of a panel focused on reviewing digital history at the Organization of American Historians’ annual meeting. My portion of the discussion focused on reviewing digital public history projects, which have their own particularities that make them different from some other genres of digital history. I welcomed the opportunity because I think that the work of review is one of the most important of an historian’s professional obligations. Below is a version of my comments.]
A generous and conscientious review process at a crucial stage can make the difference between a mediocre project and a great one. And, a careful review after a project launches can be an essential authorizing element for that work and the people who produced it.
The work of reviewing is an act of leadership in the field: in digital history, in academic history, in public history. As such, we would do well to consider the qualities that we seek in effective leaders before we turn to the form and content of an effective review.
We seek out leaders
- who prize collaboration and cooperation;
- who have vision, but make room for other voices;
- who honor many types of experience and expertise;
- who acknowledge the important contributions of others;
- who clearly admit that they do not have all the answers.
Individuals who embody these qualities often stand out as the people we turn to for help moving our work forward. They are people we trust. I would submit that these are also the people we want to review our work.
We can and should do our best to create a culture of reviewing that is humane and constructive. In that effort we might turn to the groundbreaking work of the HuMetricsHSS Project to help structure our thinking. The project is working through a process to create and disseminate a “humane evaluation framework” that builds upon the values that participants have identified as central to humanities and social science disciplines, including collegiality, quality, equality, openness, and community.
Read the full post here.
If you know me, the topic of this first post may be unsurprising, but also a bit eyebrow-raising. “Sharon, you’ve been working on the Old Bailey Online project (OBO) since forever. Aren’t you bored with it yet?”
Meanwhile, those who don’t know me might more likely be asking, “What are the Old Bailey Proceedings?” So, a bit of background. The Old Bailey Proceedings is the name most commonly given to a series of trial reports that were published from 1674-1913.
> the largest body of texts detailing the lives of non-elite people ever published, containing 197,745 criminal trials held at London’s central criminal court.
I’ve been project manager for this and a number of spin-off projects (which I’ll undoubtedly write about in future posts; brace yourselves) since 2006. And yet I only started to really dig into the Proceedings data quite recently. This is because it consists of more than 2000 intimidatingly complicated XML files, reflecting the complexity of a criminal trial – there can be multiple defendants, multiple charged offences and multiple outcomes. The central aim of the project from its conception was to accurately represent this complexity as well as provide searchable full text.
In fact, I spent several years cheerfully encouraging others to use our data while I had no real idea how to go about doing so myself. In 2011, the project released the Old Bailey API, and I started to tinker with that, but I didn’t really get very far, until a couple of years ago I finally bought myself a book on XQuery and got down to it. And then I started to discover exactly how complicated the XML is. (So I’ve also been thinking about ways to make it more accessible; putting the XML files on our institutional repository is a good start but it’s really just a start.)
There are two facets to the Proceedings data: firstly, structured markup that enables searching and quantitative analysis of many aspects of the published trials (especially the characteristics of defendants, offences, jury verdicts, sentences); and secondly, the full text of the reports, amounting to more than 125 million words in total.
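The two facets can be illustrated with a toy example. The sketch below invents a deliberately simplified trial record (the tag names and structure here are hypothetical, not the project's actual schema, which is far more complex, with multiple defendants, offences, and outcomes per trial) to show how structured categories sit alongside the full text:

```python
import xml.etree.ElementTree as ET

# A hypothetical, much-simplified record in the spirit of the Proceedings
# markup; the real OBO XML schema is considerably more complex.
trial_xml = """
<trial id="t18000115-1" date="1800-01-15">
  <defendant gender="female">Mary Smith</defendant>
  <offence category="theft" subcategory="grandLarceny"/>
  <verdict category="guilty"/>
  <sentence category="transportation"/>
  <text>MARY SMITH was indicted for feloniously stealing a handkerchief.</text>
</trial>
"""

trial = ET.fromstring(trial_xml)

# Facet one: structured attributes support searching and counting.
offence = trial.find("offence").get("category")
verdict = trial.find("verdict").get("category")

# Facet two: the full text of the report supports close and distant reading.
report_text = trial.find("text").text

print(trial.get("id"), offence, verdict)
```

Quantitative questions (how did offence categories change over time?) run over the attributes; text-mining questions run over the report text.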
My first two posts are tasters of a few of the structured data categories: here, I’ll look at how offences tried at the Old Bailey changed over the 250 years documented in the Proceedings; in the second post, I’ll look at defendants’ gender and offending. In subsequent posts I’ll start to explore the text of trial reports.
Read the full post here.
Last year, I wrote about my early impressions of the possible uses of virtual reality technology for public history and history education. I also led a session in my fourth-year digital history class on virtual reality and its potential for generating a sense of historical presence, an ability to simulate the sensation of standing in past places. I have been somewhat enthusiastic about what this technology can add to museums, classrooms, and other settings for public history and history education.
My focus last year was on smartphone-based VR with stereoscopic viewers (Google Cardboard, Daydream View, Gear VR). This type of VR technology can generate a powerful sense of presence, but the user is limited to rotational movement along three perpendicular axes (pitch, roll, yaw). This is like being a camera fixed in space that can spin around, but cannot move within that space. Tethered VR headsets that use PCs and spatial tracking systems add translational movement (heave, sway, surge) to VR experiences, creating six degrees of freedom of movement. These headsets also include tracked motion controllers that can reveal the user’s hand movements in VR environments and enable interaction with 3D objects. Altogether, this is sometimes called “room-scale VR.” The experience is incredibly immersive.
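The jump from three to six degrees of freedom can be sketched as a pose data structure (an illustrative simplification, not any headset's actual API): rotation-only tracking carries three values, while room-scale tracking adds three more for position.

```python
from dataclasses import dataclass

@dataclass
class Rotation3DoF:
    """Rotation-only tracking: the viewpoint can turn, but not move."""
    pitch: float = 0.0  # tilt up/down
    yaw: float = 0.0    # turn left/right
    roll: float = 0.0   # tilt side to side

@dataclass
class Pose6DoF(Rotation3DoF):
    """Room-scale tracking adds translation along three axes."""
    surge: float = 0.0  # move forward/back
    sway: float = 0.0   # move left/right
    heave: float = 0.0  # move up/down

# A smartphone viewer can only report something like the first three fields;
# a room-scale headset can also report that the user walked 1.5 m forward.
pose = Pose6DoF(yaw=90.0, surge=1.5)
print(pose)
```

The extra three fields are what let a visitor walk around a reconstructed historical space rather than merely look around from a fixed point.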
Recently, I put this kind of immersive VR experience to the test by reviewing three examples of public history VR projects that use room-scale technologies. I used an Acer Mixed Reality Headset, part of Microsoft’s line of virtual reality headsets that use “inside-out tracking” in order to achieve room-scale experiences. Two cameras on the front of the headset map and track the environment around me, and the motion controllers allow me to interact with objects in a 3D space.
What did this add to VR experiences for public history and history education? How best could it be used? What are its limitations? Let’s find out:
Read the full post here.
Open Source and copyright are intimately related. It was Richard Stallman’s clever hack of copyright law that created the General Public License (GPL) and, thus, free software. The GPL requires those who copy or modify software released under it to pass on the four freedoms. If they don’t, they break the terms of the GPL and lose legal protection for their copies and modifications. In other words, the harsh penalties for copyright infringement are used to ensure that people can share freely.
Despite the use of copyright law to police the GPL and all the other open source licenses, copyright is not usually so benign. That’s not surprising: copyright is an intellectual monopoly. In general, it seeks to prevent sharing—not to promote it. As a result, the ambitions of the copyright industry tend to work against the aspirations of the Open Source world.
Read the post here.