Criminology lecturers Max Hart, Kyla Bavin and Adam Lynes explore in their research how technology, power and inequality shape contemporary harm. Adam gives us some background on this and on the research they are conducting.
AI in media
In 1984, The Terminator introduced the world to a chilling vision of the future in which Skynet, an AI system, unleashes a nuclear holocaust to destroy human life, which it sees as a threat. John Connor's human resistance fights Skynet's powerful cyborgs and machines in a desolate, war-torn world. This vision has remained culturally relevant, symbolising deep-rooted fears around AI’s potential dangers as its sophistication grows.
Just fifteen years later, The Matrix (1999) offered a different kind of dystopia. Here, artificial intelligence does not destroy humanity; it subdues it. Humans unwittingly exist within a simulated reality, tranquillised by illusion as their bodies are harvested for energy. Where The Terminator warned of AI's capacity for violence, The Matrix revealed its power to dominate through comfort and distraction.
By the mid-2010s, big data and machine learning had pushed AI from the realm of science fiction to everyday reality. The terror depicted in such movies began to look less like a fantasy and more like a metaphor. Today's AI does not engage in wars or build simulated worlds (at least not in the conventional sense). However, it does something equally profound: it shapes how we work, communicate, and even think, often in ways that reinforce the same hierarchies and inequalities that already govern society.
AI and Criminology
It’s easy to be drawn in by the spectacular and sublime representations of artificial intelligence, the kind that promises big revelations but often hides the real issues underneath. When we focus too much on imagined AI disasters, we risk overlooking the quieter everyday harms already happening around us.
Criminology has been slow to offer proactive, nuanced responses to our increasingly digitised world. The once-clear boundary between the “online” and “offline”, which has shaped much of criminological thought, especially in cybercrime research, has all but dissolved.
Our recent paper in Critical Criminology tackles this challenge by cutting through the myths surrounding AI and focusing on how it actually affects people, work and society. We also call for criminology to engage more deeply with the digital conditions shaping contemporary harm.
When used correctly, criminology can be an important tool for interrogating the often overlooked relationship between power, inequality and harm in all its forms. In this vein, we can already begin to see how AI is changing how people live and work, from who gets hired to how police make decisions. By studying these changes, criminologists can reveal how technology reinforces social inequality and control and ask how we might build fairer, more ethical systems.
Our research
We argue that AI is not dangerous because it might “take over”; it is dangerous because of who controls it and what it is used for. Big companies design these systems to make work faster and cheaper, but that often means workers lose freedom, fairness and even their sense of purpose.
Artificial intelligence operates not through open rebellion but through quiet governance, shaping everyday life, especially in the workplace, where it determines how we earn and who we are. While we speculate about a world where machines take power, we miss the truth that power has already been automated - concentrated in the hands of technocrats who pull the strings (Lynes et al., 2024).
In our analysis and research, we propose a new typology of technologically mediated harms:
- We begin with Datafication Harm: the transformation of workers into streams of data. Every keystroke, delivery, or interaction becomes measurable, turning human activity into a resource to be mined. As the paper notes, “Reducing workers to data points, AI consequently dehumanises labour, stripping away autonomy and embedding control in the algorithmic fabric of the workplace”.
- Next comes Algorithmic Governance Harm, where decision-making is automated and opaque. Recruitment systems sort applicants using AI-trained models that "learn" from past biases, writing inequality into code. These technologies extend what Kotzé (2024) calls instrumental special liberty - the freedom powerful people and corporations have to act without consequence. Put simply, AI allows elites to make decisions and exploit workers while avoiding responsibility, hiding their actions behind algorithms.
- Operational Harms are felt in the daily strain of algorithmic management. Gig-economy drivers, call-centre staff and care workers describe being “optimised” out of autonomy - rewarded or penalised by unseen data patterns that dictate pace, breaks, and even tone of voice. In this system, control flows downward from corporate elites who design and profit from these technologies, yet also inward, as workers internalise AI’s competitive logic. They begin to monitor themselves and outperform one another, turning discipline into self-surveillance. The result is not only vertical harm inflicted from above, but horizontal harm - rooted in everyday contact, where co-workers become tacit competitors in an algorithmic battle for survival.
- Existential Harm captures the deeper emotional and social cost of AI. Automation is quickly replacing and devaluing human work: workers lose not only their jobs but also their sense of place and purpose. Almost 40% of jobs worldwide are now under threat of replacement by AI, reports the IMF (Georgieva, 2024). In serving elite interests, AI may deliver efficiency, but at the cost of hollowing out the very social fabric on which labour and legitimacy rest.
AI and control
AI has quickly become a new tool for those in power, prioritising efficiency and profit over fairness, autonomy and human meaning. But the very harms it creates - datafication, algorithmic control, everyday workplace pressures and the loss of purpose - also risk undermining the system itself. As exploitation grows and trust breaks down, the foundations of that system may weaken.
For criminology, these issues go to the heart of what the discipline is about: exposing hidden harms and questioning systems of power. As AI extends its reach into domains such as policing and social media, it will reshape how justice, safety and even truth are understood.
With that in mind, we will be tracing AI’s movements from the workplace into other areas of society, with our next research examining AI’s role in policing and social media. Criminology can provide the means to see beyond sensationalist headlines and to understand how technology constructs power, justice and the everyday world. By examining such topics, criminologists can strive to build a more equal, safe and just world - one that uses technology to benefit humanity rather than be mastered by it.
References
Cameron, J. (Director). (1984). The Terminator [Film]. Orion Pictures.
Georgieva, K. (2024). AI will transform the global economy. Let’s make sure it benefits humanity. IMF Blog, 14th January. https://www.imf.org/en/Blogs/Articles/2024/01/14/ai-will-transform-the-global-economy-lets-make-sure-it-benefits-humanity
Kotzé, J. (2024). ‘On Special Liberty and the Motivation to Harm’, The British Journal of Criminology, 65(2): 314–327.
Lynes, A., Treadwell, J. & Bavin, K. (2024). Crime of the Powerful and the Contemporary Condition: The Democratic Republic of Capitalism. Bristol: Policy Press.
Wachowski, Lana, & Wachowski, Lilly (Directors). (1999). The Matrix [Film]. Warner Bros. Pictures.