An ongoing debate concerns the epistemological stakes of computational methods in humanistic inquiry. What kind of evidence is a word embedding or a detected face, and what can it tell us? How do we account for nuances across cultural, temporal, and geographical frames when recognizing patterns and identifying outliers? To what degree does the analysis of words and images through computational methods such as topic modeling and object detection reduce, rather than capture, the complexity of meaning-making in media such as literature and film? These questions animate a range of fields, garnering attention from popular higher education news outlets like The Chronicle of Higher Education, and they are the subject of entire books (Wasielewski; Underwood; Arnold and Tilton, Distant Viewing).
Disciplines in the humanities have forged paths toward engaging with computational evidence that at times run in parallel, intersect, and move in opposite directions. Along one of the most well-lit roads, literary studies has grappled with fears of a return to formalism: a search for form and structure that describes the elements of a work without accounting for cultural and historical contingencies.