I am using my own research (and how I came to this research) as a trajectory for launching students into their own research. Below is my Prezi; please feel free to use, adapt, or alter it according to your own needs.
Understanding the basics of hash functions is important because they underpin content-integrity and authentication tools such as digital signatures and security certificates.
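The integrity check those tools build on can be sketched in a few lines of Python using the standard library's hashlib; this is a minimal illustration of the principle, not any particular signature or certificate scheme:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 digest of data as a hex string."""
    return hashlib.sha256(data).hexdigest()

original = b"a message whose integrity we care about"
digest = sha256_hex(original)

# Any change to the content yields a completely different digest,
# so comparing digests detects tampering.
tampered = b"a message whose integrity we carE about"
print(digest == sha256_hex(original))   # True: content unchanged
print(digest == sha256_hex(tampered))   # False: content altered
```

A digital signature adds one step to this picture: instead of publishing the digest directly, the signer encrypts it with a private key, so anyone with the public key can both recompute the digest and verify who produced it.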
By Jon Bruner
Close to 40 million Americans move from one home to another every year. Click anywhere on the map below: blue counties send more migrants to the selected county than they take; red counties take more than they send. More about the map.
This week I twice taught a two-hour workshop introducing Emory people (students, faculty, and staff) to the very basics of HTML & CSS. The workshop was called How a Website is Born: The Very Basics of HTML & CSS.
In my first post on this subject, I poked a bit at how one might represent TEI in HTML without discarding the text model from the TEI document. Now I want to talk a bit more about that model, and the theory behind it.
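One common tactic for keeping the TEI model visible in the HTML output (offered here as a generic illustration, not necessarily the approach the post develops) is to map each TEI element onto a neutral HTML element while recording the original TEI element name in a data-* attribute:

```python
import xml.etree.ElementTree as ET

def tei_to_html(elem):
    """Recursively wrap a TEI element in an HTML <span>, recording the
    original TEI element name in a data-tei attribute so the TEI text
    model survives the conversion."""
    local = elem.tag.split('}')[-1]  # strip the TEI namespace prefix
    span = ET.Element('span', {'data-tei': local})
    span.text = elem.text
    for child in elem:
        span.append(tei_to_html(child))
    span.tail = elem.tail  # keep text that follows the element
    return span

tei = ET.fromstring(
    '<p xmlns="http://www.tei-c.org/ns/1.0">'
    'Said <persName>Ada</persName> quietly.</p>'
)
html = ET.tostring(tei_to_html(tei), encoding='unicode')
# html is now a nested <span data-tei="..."> structure that CSS and
# JavaScript can style or query by the original TEI element names.
```

Because the TEI names ride along as attributes, a stylesheet rule like `span[data-tei="persName"] { font-variant: small-caps; }` can present the text without the markup ever forgetting what it encodes.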
A collaboratively produced introduction to the field of Digital Humanities. The guide is a project of the CUNY Digital Humanities Initiative (DHI), a new working group aimed at building connections and community among those at CUNY who are – or would like to be – applying digital technologies to research and pedagogy in the humanities.
This group discusses digital curation, which the Digital Curation Centre defines as “maintaining, preserving and adding value to digital research data throughout its lifecycle.”
You can help a class group project from the MLS program at the University of Maryland investigate the information needs of digital humanities practitioners by taking this brief, 16-question survey.
The Maryland Institute for Technology in the Humanities (MITH) has received a major collection of electronic literature and vintage computer hardware from pioneering hypertext author Bill Bly.
By Scott B. Weingart
What I really want to highlight, though, is a brand new feature introduced by Wallace Hooper: automated Latent Semantic Analysis (LSA) of the entire corpus. For those who are not familiar with it, LSA is somewhat similar to LDA, the algorithm driving the increasingly popular topic models used in Digital Humanities. They both have their strengths and weaknesses, but essentially what they do is show how documents and terms relate to one another.
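For readers who want to see the mechanics, the core move in LSA is a truncated singular value decomposition of a term-document count matrix, which places documents on a handful of "latent" dimensions. Below is a minimal pure-Python sketch of the rank-1 case via power iteration; the toy documents and counts are invented for illustration, and this is not Hooper's implementation:

```python
import math

def power_iteration(B, iters=100):
    """Return the dominant eigenvector of a symmetric matrix B
    (given as a list of row lists)."""
    n = len(B)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(B[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

# Toy term-document count matrix A (rows = terms, columns = documents).
# d0 = "cat dog", d1 = "cat dog pet", d2 = "car road"
A = [
    [1, 1, 0],  # cat
    [1, 1, 0],  # dog
    [0, 1, 0],  # pet
    [0, 0, 1],  # car
    [0, 0, 1],  # road
]

# A^T A is the document-document co-occurrence matrix; its dominant
# eigenvector is the top right-singular vector of A, i.e. each
# document's coordinate on the strongest latent dimension.
n_docs = len(A[0])
AtA = [[sum(A[t][i] * A[t][j] for t in range(len(A)))
        for j in range(n_docs)] for i in range(n_docs)]

doc_coords = power_iteration(AtA)
# d0 and d1 share vocabulary and land close together on the latent
# dimension; d2 (about cars) sits near zero on it.
```

A real LSA run keeps several singular dimensions rather than one, and usually weights the counts (e.g. with tf-idf) before factorizing, but the principle is the same: documents that share vocabulary end up near each other in the reduced space even when they share no exact terms with the query.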