Ground truth tracings (GTT): On the epistemic limits of machine learning

Edward B. Kang

Research output: Contribution to journal › Article › peer-review


There is a gap in existing critical scholarship engaging with the ways in which current "machine listening," or voice analytics/biometric, systems intersect with the technical specificities of machine learning. This article examines the sociotechnical assemblage of machine learning techniques, practices, and cultures that underlies these technologies. Drawing on engagements with practitioners in companies that develop machine listening systems, including CEOs, machine learning engineers, data scientists, and business analysts, among others, I bring attention to the centrality of "learnability" as a malleable conceptual framework that bends according to various "ground-truthing" practices in formalizing certain listening-based prediction tasks for machine learning. In response, I introduce a process I call Ground Truth Tracings to examine the various ontological translations that occur in training a machine to "learn to listen." Ultimately, by further examining this notion of learnability through the aperture of power, I take insights acquired through my fieldwork in the machine listening industry and propose a strategically reductive heuristic through which the epistemological and ethical soundness of machine learning, writ large, can be contemplated.

Original language: English (US)
Journal: Big Data and Society
Issue number: 1
State: Published - Jan 1 2023


Keywords

  • ML epistemology
  • Machine learning
  • critical study of AI
  • ground truth
  • machine listening

ASJC Scopus subject areas

  • Information Systems
  • Communication
  • Computer Science Applications
  • Information Systems and Management
  • Library and Information Sciences
