Applying Archival Knowledge to Born-Digital Cultures with James Hodges
MARA Webinar - Wed Jan 31, 2024 12:30pm – 1:30pm (PST)


Published: January 19, 2024 by Andy Wiegert

James A. Hodges studies the history of computing and software interfaces, with a particular interest in digital archives and preservation. He is currently Assistant Professor in the School of Information at San José State University, as well as Senior Fellow in the Andrew W. Mellon Society of Fellows in Critical Bibliography at Rare Book School (University of Virginia), and Senior Book Reviews Editor for Information & Culture journal.

This talk will examine the value of archival knowledge in crafting research methodologies for the study of digital cultures. By bringing archival preservation and historical analysis into closer conversation with one another, Hodges highlights new media formats of growing concern to practitioners, as well as methods for analysis with underutilized value to researchers. Case studies are drawn from several overlapping projects, including Hodges’ work on computer history, algorithmic accountability, and public health.

We are thrilled to have Dr. James Hodges give his talk, Applying Archival Knowledge to Born-Digital Cultures, on Wednesday, January 31st. Please join us at:

Session Link:
Passcode: 789599

In advance of his talk, please enjoy these questions that Dr. Hodges answered for the MARA program.


What emerging technology in the field of research methodologies is most interesting or seems the most useful to you?

I’m very intrigued by the continued development of graphical interfaces for analyses that previously required users to write code. Forensics toolkits like BitCurator and Digital Humanities suites like Voyant Tools are two exciting examples. Of course, it’s still valuable to learn the scripting languages and shell commands that these tools build on, just like it’s still valuable to learn long division even though we have digital calculators, but it’s nice to save time and to make the techniques more accessible.
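To give a sense of the kind of scripting these graphical tools wrap, here is a minimal Python sketch (my own illustration, not code from BitCurator or Voyant Tools) of a streaming fixity checksum, the sort of routine task a forensics GUI automates:

```python
import hashlib

def fixity_hash(path, algorithm="sha256", chunk_size=65536):
    """Compute a fixity checksum by streaming the file in chunks,
    so even large disk images never load fully into memory."""
    digest = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

Recording a checksum like this at ingest lets an archivist verify later that a file has not silently changed.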


What new media formats appear to be the most problematic?

In the digital preservation and forensics fields, solid state storage devices present a number of challenges to older techniques for recovering or authenticating data. USB keys and solid-state hard drives are going to keep archivists busy for a while, I suspect.


What was the impetus that inspired you to pursue these types of archival and research efforts in new media?

When I was growing up, my dad was a computer programmer for AT&T. He would often bring home old computers that were getting thrown away at his office, and we would tinker on them together. When I got to college in the early 2000s, I saw digital media studies growing in academic significance, but the number of libraries and archives offering access to historical computer hardware or software was somewhat limited. It seemed to me that there would be a need for someone to improve that situation, so I decided to start applying my early experiences with outdated computers to the academic field of digital preservation.


Can you describe your most interesting or challenging project to date regarding born-digital cultures?

My current book project uses digital forensics to uncover the philosophical ideas that hold early home computer software together, many of which were drawn from the 1960s counterculture. I really enjoy the project because it connects my current research with my memories of my dad, who started his programming journey during the counterculture era and passed away in 2013. The material is endlessly interesting to me, but writing a book manuscript is also challenging because it’s such a large undertaking.


As new born-digital media technologies advance, what do you see as the most challenging hurdles to overcome?

There’s always a lag between when technologies are developed and adopted, on one hand, and when we’re ready to historicize them in libraries and archives, on the other. Right now the sheer volume of data being stored on our devices is skyrocketing, so archivists are facing a lot of pressure to employ AI and automation as a way to speed up our processing. This creates heavy demands on the training pipeline, both for emerging and established practitioners.


Regarding your career trajectory thus far, would you say that it has all been related in some fashion, or have you changed interests along the way?

My core interest in digital preservation hasn’t changed much, but the specific application has changed a lot. I think it’s very valuable to stay responsive to the landscape of opportunities in front of you as it changes. 

When I first started my graduate studies, I was highly focused on video game preservation because I saw the games industry experiencing massive growth. Lots of universities were starting game design programs too. Over time working in the field, however, I found that while student interest in game studies was sky-high, it wasn’t necessarily reflected in funding or hiring trends quite the way I expected. Luckily, this hasn’t really been a problem for me, because I always saw digital games as just one kind of digital media among many equally interesting others. I’m similarly interested in picking digital objects apart to see how they’re made, whether they’re games, images, PDF files, or some kind of spreadsheet.


What are some of your favorite objects or material culture that relate to the history of computing?

One of my specialties over the years has turned out to be comparing multiple copies of the ‘same’ digital object. A few years ago I worked with a colleague at UT Austin to compare the JPEG compression signatures on several copies of the same meme. In the process, we ended up identifying a network of coordinated Twitter accounts pushing a disinformation narrative. That was pretty exciting because it showed the value of applying authentication concepts from the archives world to current events.
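One way to operationalize that kind of comparison, sketched here as a stand-alone illustration rather than the actual method from the UT Austin study, is to scan a JPEG byte stream for its DQT (quantization table) segments: those tables record the encoder’s compression settings, so two files with identical-looking pixels but different compression histories usually carry different tables.

```python
def dqt_segments(jpeg_bytes):
    """Extract the raw DQT (quantization table) segments from a
    JPEG byte stream. Each segment starts with the 0xFFDB marker,
    followed by a 2-byte big-endian length that includes itself."""
    tables = []
    i = 0
    while i < len(jpeg_bytes) - 3:
        if jpeg_bytes[i] == 0xFF and jpeg_bytes[i + 1] == 0xDB:
            length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
            tables.append(jpeg_bytes[i + 4:i + 2 + length])
            i += 2 + length
        else:
            i += 1
    return tables
```

Comparing the returned segments across two copies of an image gives a quick signal of whether they share a compression history.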


Since the material that librarians and archivists are encountering is increasingly born-digital, how can the risk of digital decay and data degradation best be mitigated?

There’s an initiative at Stanford called “LOCKSS,” which stands for “Lots Of Copies Keep Stuff Safe.” I think about that phrase a lot. There are limits to how much data we can realistically store, of course, but overall I think the concept is a good one. You should never rely on a single authoritative copy to exist forever.
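The LOCKSS idea can be sketched as a simple fixity audit across replicas; the following is my own minimal illustration of the concept, not code from the LOCKSS software:

```python
import hashlib
from collections import Counter

def audit_copies(copies):
    """Checksum each replica and flag any copy whose digest diverges
    from the majority; with enough copies, damage in any one of them
    becomes detectable and repairable from the others."""
    digests = [hashlib.sha256(c).hexdigest() for c in copies]
    majority, _ = Counter(digests).most_common(1)[0]
    return [i for i, d in enumerate(digests) if d != majority]
```

A damaged replica flagged this way can then be replaced with a fresh copy from one of the healthy holdings.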


How do you see the worlds of MLIS and MARA overlapping?

At the end of the day, we’re all interested in preserving knowledge and documentation for future use. The settings may be different, but the goals are pretty similar.


Do you see the Algorithmic Accountability Act of 2023 as a significant step forward, or does it not go far enough, given how quickly AI appears to be advancing? Should AI companies do more to assess the impact of their technologies and go further to educate the consumer?

I’m glad that a few elected officials are trying to mitigate the harms of bias in AI systems, because bias has a real impact on crucial fields like housing and education, but I don’t think the Act goes far enough. I don’t really trust AI companies to regulate themselves or to educate the public. This is where some of the reverse-engineering research that I’ve done comes into play: we can’t trust these companies to accurately represent what their systems do, so it’s very valuable to figure it out ourselves if we ever want to hold them accountable.
