“No study of media is without its own politics” – Interview with Prof. Benjamin Peters on regimes of power in technological development, smart technologies, and Russian hackers

by Olga Dovbysh

Prof. Peters giving a keynote speech at the Aleksanteri Conference 2019. Photo by E. Gorbacheva

Benjamin Peters is the Hazel Rogers Associate Professor and Chair of Media Studies at the University of Tulsa, as well as affiliated faculty at the Information Society Project at Yale Law School. Taking critical, historical, and global approaches, he investigates media change over regimes of time, space, technology, and power.

In October 2019, Peters was a keynote speaker at the Annual Aleksanteri Conference, where he addressed the role of the political regime in the development and successful release of technologies. The speech was inspired by Peters’ recent book How Not to Network a Nation: The Uneasy History of the Soviet Internet, published by MIT Press in 2016. For more, see benjaminpeters.org.

How do regimes of power shape the process of technological development and implementation? 

The phrase “regimes of power” frames a fundamental question behind my research agenda: why and how do apparently similar technologies take shape differently in different contexts? Namely, how can focusing on the complex cultural, political, and economic forces already at work in the world improve our understanding of the causes and consequences of technology? Media scholars have long been studying questions of media and technological influence over the vital variables of time, space, and matter; by adding the phrase “regime of power,” I aim to underscore that no study of media is without its own politics: the script to our mediated globe is acted out on a stage populated by explicit and implicit political actors, including the history of political economy, cultural production, media theory, and many others.

How can we as researchers study the implications of regimes of power on technological development? In particular, what should scholars and students of media and communication take from this discussion?

It is often not hard to identify regimes of power—state borders, copyright empires, corporate reach, linguistic and cultural barriers, the variable imprints of history—in how technology and society coevolve, but it is often hard to identify the consequences of those regimes of power. Just as an example, in 2016, many liberals in the West were thoroughly convinced that the failures of American democracy could be ascribed to Russian hackers online, although subsequent analysis has suggested that the impact of these particular disinformation campaigns, while worrying, is questionable and likely very minimal. But perhaps the larger struggle comes not just in showing either great or limited influence but in understanding what forces are at work behind, and follow from, our favorite variables: my ongoing study of the history and present of transnational disinformation campaigns suggests that the forces behind the “Russian hacker” motif are neither hackers (but IT professionals), nor Russians (but globally mobile networks), nor even people operating online (but often very embodied, site-specific actions). So perhaps the critic can best understand the implications of regimes of power not by either serving or critiquing them directly but by remaining ever humbled by and alert to the complex causes operating behind, and the consequences following from, those regimes.

In your keynote speech at the Aleksanteri Conference in 2019 you also linked the unsuccessful history of the Soviet Internet to the challenges of today’s networked world. What are the main lessons to be learned from the Soviet Internet case?

No networked world, especially our own, is a given. There is more contingency in the history of networks than any convenient grand network narrative allows (networks as liberators, connectors, democratizers, or even autocratic concentrators of power). No design values will save us from institutional practices: no democratic, peer-to-peer design values can neutralize the creeping privatization of network power over the long haul. Consider, for example, the curious fact that, despite the opposite intentions, the US ARPANET, not the Soviet OGAS (All-State Automated System) Project that my book chronicles, seeded the modern-day global surveillance state. Every national computer network is preceded by an international network of institutions, experimental collaborations, and scientific exchange. Neither genius, technical acumen, nor political foresight is enough to bring about a network revolution: network innovation requires collaborative, functioning institutions and often well-managed state subsidy.

Do you observe examples of this contingency in the deployment of today’s networks, for instance, in surveillance networks during the COVID-19 pandemic or in networks of AI-driven tools in media?

In the pandemic, different people have both more and less privacy. The mask has partly anonymized protestors and slowed facial-recognition algorithms at the same time that the push for location-based contact-tracing apps has sped privacy erosion; students and faculty, who have pivoted to online schooling, now find our living spaces serving as makeshift backdrops for mass communication. What are we to make of all this? Perhaps we can agree that the pandemic has layered at least two new kinds of contingency into the analysis of our network society: first, there is the irreducible contingency that comes from properly understood variables associated with a new lethal disease, such as infection and contagion rates, K (or the burstiness of spread), population density, prior immunities and health vulnerabilities, demographic risks, and weather and climate conditions, among others. The other type of contingency follows from the simple fact that no one knows exactly how all of these variables (if our list is “all” of them) interact: in other words, our attempts to model and understand these variables themselves reveal the contingency baked into network models. Unlike, say, the model of a fixed item like a bridge, public health models invariably feature networks with uneven linkages, dispersals, and dynamics. In other words, in the classic distinction between the things we know we do not know and the many more things we do not know that we do not know, how we attempt to model the relationship between those two is itself an exercise in network contingency. As Yale physician and sociologist Nicholas Christakis suggests in Apollo’s Arrow, his new book on the coronavirus, we have much to learn from, and prepare for, in our ever uncertain and surprisingly verdant world.
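The kind of contingency Peters describes here can be made concrete with a toy simulation. The sketch below is illustrative only—it is not a model from the interview or from Peters’ work—and it assumes, as epidemiologists commonly do, that secondary infections per case follow a negative binomial distribution with mean R0 and dispersion parameter k (the “burstiness” variable mentioned above). The same average R0 then yields wildly different outbreaks depending on k and on chance alone.

```python
# Illustrative sketch (not from the interview): a branching-process model of
# epidemic burstiness. Secondary infections per case are drawn from a negative
# binomial with mean r0 and dispersion k; small k means superspreading-driven,
# bursty transmission, large k means more uniform spread.
import numpy as np

rng = np.random.default_rng(seed=42)

def simulate_outbreak(r0: float, k: float, generations: int = 8) -> list[int]:
    """Return case counts per generation, starting from one index case."""
    cases = [1]
    p = k / (k + r0)  # negative-binomial success probability giving mean r0
    for _ in range(generations):
        new_cases = int(rng.negative_binomial(k, p, size=cases[-1]).sum())
        cases.append(new_cases)
        if new_cases == 0:  # outbreak died out by chance
            break
    return cases

# Same R0 (2.5), three different dispersion values: with low k most outbreaks
# fizzle while a few explode, illustrating the contingency "baked into"
# network models of spread.
for k in (0.1, 1.0, 10.0):
    print(f"k={k}: {simulate_outbreak(r0=2.5, k=k)}")
```

Rerunning the sketch with different seeds makes the second kind of contingency visible as well: even with the variables fixed, the model’s outcomes remain irreducibly uncertain.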

Your current research takes a critical stance towards smart technologies. What are the main directions and objects of your critique?

Smart media are a pain. In a sentence, the word “smart” has a much older definition that, if taken seriously, would help us rethink our relationship to “smart technologies” (Wi-Fi-enabled devices like refrigerators, toasters, phones, cars, and cities): something that is smart is on the cutting edge of embodied pain (e.g., “ow, that smarts,” we might say while rubbing a bruise: the English smart shares a root with the German word for “pain,” Schmerz). In other words, smartness, in English, draws on a deeper understanding of our embodied experience of a cutting edge, sharpness, or pain. By reconceiving the history of human efforts to extend intelligence to technology, in embodied, sharp ways, we may also be able to critically resituate ourselves in less needlessly painful relations to our newly mediated environments.

Particularly, my next book project, which I am tentatively titling in my more alarmist moments The Computer is Not a Brain: How Smart Tech Lost the Cold War, Outsmarted the West, and Risks Ruining an Intelligent World, seeks to show how artificial intelligence and smart technologies have been variously embodied in painful settings—the uncanny valley of anthropomorphic bodies, the strategic fields of wartime ruin and waste, and the laboratories of competing conceptions of cybernetic and artificial intelligence, among others. The book, setting aside both contemporary and historical tech hubris (despite huge leaps in machine learning, the phrase “AI” still rarely rises above being a male self-reproduction fantasy wrapped in a call for investor capital), calls for a more modest, more sustainable environmental understanding of intelligence.

Your current academic writing also concerns Russian hackers. What are your interests in this area?

As I hinted above, Marijeta Bozovic (Yale Slavic) and I are codeveloping a project on the men, machines, and myths that often pose behind the label of Russian hackers. In a project sure to disrupt all sorts of political sensitivities, we seek to better understand the political and epistemological limits of the stories told in the media about Russian hackers. We also seek to diversify the image, to add both color and sobriety to the greyscale dramatic scripts (white hat, black hat, grey hat), and to add to the understanding of the global phenomenon of so-called Russian hackers.

If readers are interested in learning more, we are organizing a virtual symposium and serial conversation by invitation on the topic in 2020-2021: please feel free to contact me at ben-peters@utulsa.edu
