Artificial intelligence

Toast on pink background with AI written in Marmite

20 January 2026

As artificial intelligence weaves itself into our daily lives, the excitement is often matched by anxiety. Headlines highlight worries about changing work landscapes, academic misconduct, and creative decline. But how much of this panic is warranted?

AI seems to be everywhere: in classrooms, offices, studios, and even in our pockets. Just five years ago, the idea that AI could write good poetry seemed far-fetched. Yet in 2024, a Guardian poll found that many people preferred AI poetry to the human kind.

Asking someone today, “Do you use AI?” feels like asking, “Do you use a computer?” – so why do we still feel so conflicted?

In this article, we explore research emerging from Magdalen on AI and confront some of the biggest fears surrounding its rise. Will AI replace teachers, steal jobs, and kill creativity? Are we doomed to become cogs in the machine while AI automates our workflows? Let’s find out.

This article wasn’t written by AI. But could you tell if it was?

AI in the workplace: productivity or control?

With every technological advancement, fears arise that human labour will inevitably be replaced – whether by the mechanical loom, calculators, computers, or now, AI.

In the 1780s, the introduction of the power loom displaced thousands of skilled weavers, sparking anxiety about disappearing livelihoods. Then, in the 1930s, the industrial boom that followed World War I led to mechanised mass production, prompting economist John Maynard Keynes to reflect on the possibility of “technological unemployment”. He wrote that “the rapidity of these changes is hurting us and bringing difficult problems to solve,” defining technological unemployment as “the discovery of means of economising the use of labour outrunning the pace at which we can find new uses for labour.”

Is this not almost word-for-word the same fear voiced by some about the rise of AI in the workplace today? That AI’s ability to perform certain tasks faster and more efficiently than humans will render us obsolete in those areas? Copywriters: gone. Data analysts: gone. Customer service representatives, bookkeepers, personal assistants: all gone. Alexa, ChatGPT, and Copilot are already on the front lines of these changes.

Professor Jeremias Adams-Prassl, Magdalen Law Fellow and Associate Dean of the Faculty of Law, first became interested in AI at work when writing about the gig economy. In his prize-winning book Humans as a Service: The Promise and Perils of Work in the Gig Economy, he asked: "What if your boss was an algorithm?" That, it turns out, is a far more significant question than traditional concerns about job loss.

Indeed, a few years on, the challenge is no longer limited to platform work. In a recent paper, Economic Incentives, Legal Challenges, and the Rise of Artificial Intelligence at Work, Professor Adams-Prassl notes: “The predictions seem clear: given the exponential growth of machine learning and artificial intelligence, the gig economy is but a transitional phenomenon, with the majority of low-skill platform-based work soon to be handed over to algorithms and robots. With the advent of self-driving cars and laundry robots, emerging business models will leave large swathes of the workforce unemployed.”

This is the same labour anxiety that has gripped societies with every technological leap. And yet, as Adams-Prassl notes, “we could not be further from Keynes’ vision […] What happened? Why are we still at work?” He points to the limits of automation, particularly machines’ inability to understand nuance. Even at what Adams-Prassl terms “the precarious end of the labour market,” human intuition, judgement, and flexibility remain indispensable.

However, that doesn’t mean AI will leave the workplace unchanged. On the contrary, Adams-Prassl argues that AI is already transforming how work is managed. Right now, AI in the labour market is primarily used to organise, monitor, and evaluate work – rather than do it. In short: AI is not replacing you – it’s becoming your manager. Automated systems decide who gets hired, how tasks are allocated, and when they are to be completed. Employers across the socio-economic spectrum increasingly use AI systems to track productivity, give feedback, and trigger warnings, reviews, and even terminations. Several multinationals have reported using algorithmic support across core work functions.

It sounds unsettling, doesn’t it? Do you feel like a Luddite, wanting to smash the digital loom before it replaces you?

For Adams-Prassl, that’s not what we should be worried about. Mass unemployment isn’t the real threat of AI. Instead, AI promises a fundamental reshaping of the structure and experience of work – a shift with real consequences for employment rights, employer accountability, and power dynamics in the workplace.
This isn’t a question of whether we should be using AI – it’s a question of how, and of who it serves. When algorithms are used not to replace work but to control it – deciding who works, how, and under what conditions – we’re not only facing a technological shift, but a shift in power.

If AI is your manager, who manages the AI?

AI written in Brussel sprouts

AI in education: change or cheating?

Along with the fear that AI will replace you at work, there’s growing anxiety that it might gut the education system. Picture this: essays written by AI, marked by AI, grades handed down by a chatbot, and students floating through degrees without reading a single book.

It sounds like dystopia. But how close are we, really?

At Oxford, the current stance is clear: AI can be a support, not a substitute. The University is fine with it, as long as it’s used ethically and appropriately: only when permitted, with information verified, and usage disclosed. Many universities are adopting a similar approach – cautious, but curious.

But can ChatGPT actually hold its own against human undergraduates? Is it possible to get the bot to do your degree for you?

Dr Tom Revell, a Lecturer in English at Magdalen, recently co-authored a study testing whether ChatGPT could genuinely compete with undergraduate students: ChatGPT versus Human Essayists: An Exploration of the Impact of Artificial Intelligence for Authorship and Academic Integrity in the Humanities.

Revell and the research team compared AI-generated essays with those written by Oxford undergraduates, assessing structure, depth, and literary insight.

Surprisingly, the AI essays didn’t perform too badly. They averaged a score of 60.46, while the human-written ones achieved 63.57. What they lacked was a nuanced understanding of cultural context and secondary criticism. The AI essays “described rather than analysed,” missing depth in the evaluation of poetic features. They looked good, but lacked substance.

In other words, the AI can write like a student, but it cannot think like one. While it mimics the form of academic writing, beneath the thin veneer of polish it cannot replicate human critical judgement. One marker described the AI essays as ‘vague, descriptive, well-structured and sophisticated… but lacking in attention to detail’.

Perhaps more concerning was how the essays fooled human readers. Whilst the study notes “a remarkably high level of correct identification”, it also reported that “out of the 14 misclassified essays, 10 were AI-generated but mistakenly identified as human-authored.” The researchers also found that AI-authored essays shared weaknesses with those of weaker students. These findings highlight the blurring boundary between human and machine writing – one that will only become more indistinct as the technology improves – and raise ethical questions about authorship and assessment in an AI-saturated environment.

The study recommends shifting marking rubrics to focus less on presentation and more on higher-order thinking. This change, the researchers note, would not only distinguish human work more clearly from AI outputs, but also raise standards for weaker student essays that “mask a lack of evaluation of poetic style and effect with sophisticated terminology and historical context.”

Although detection tools will likely improve, Revell warns that “entering an arms race with generative AI outputs seems futile.” As AI continues to improve, it may begin to outperform human undergraduates, and simply trying to keep up with it will prove a Herculean task.

During testing, the team found that while ChatGPT often gave incorrect answers or misidentified poems, a knowledgeable user could prompt it effectively to interrogate their own ideas. In some cases, “ChatGPT’s perspective was impressively intriguing”. What makes this possible is the prompter’s in-depth subject knowledge. In short: students who have done the learning could use ChatGPT to elevate their work, and maybe uncover perspectives they hadn’t yet considered.

At Oxford, it appears to be much harder to get the AI to do your degree for you. The tutorial system means that even if ChatGPT is writing your essay, you still have to know enough to defend it one-to-one with your tutor. This may not be the case where class sizes are larger, and the tutor/student relationship happens at more of a distance. AI could help you organise your thoughts, clarify your argument, and provide access for many who may have more difficulty putting pen to paper.

When used wisely, AI might become a study partner instead of a shortcut. During testing, ChatGPT made plenty of errors, misidentifying poems and misquoting sources. But with the right prompts, it revealed intriguingly original takes – not because the machine was insightful, but because the human prompting it was.
That’s the point. AI can elevate thinking – but only when there’s real thinking to elevate.

Revell’s study reveals that while generative AI tools like ChatGPT can closely imitate the form of student work, they still fall short of replicating human thought.

As AI continues to evolve, so too must our approach to teaching and assessment. Rather than resisting these technologies, universities are being called to adapt – by designing assessments that value originality and judgement, and by exploring how AI can support meaningful learning rather than undermine it.

If AI can write like it knows something, then we have to rethink how we recognise when a person truly does.

Slice of pizza with AI written in pineapple

AI in art: spark or slop?

Another emotionally charged question about AI right now is this: what happens to creativity when machines can make art in seconds?

We spoke to Dr Alexy Karenowska, Magdalen Fellow and magnetician with a research group based in Oxford’s Department of Physics, who works extensively with AI and is fascinated by its capacity to create. She also runs an educational programme that works to bring to life the relationships between the sciences, the arts, and the humanities – and AI sits at the intersection of all of them.

When talking about art and AI, we imagine artists replaced by generative tools that spin entire paintings, poems, or songs from a single prompt. It’s led to debates about whether machines can truly be creative. But as Karenowska points out, that might be the wrong question.

If creativity is absorbing information and remixing it into something exciting, then of course the machine is creative. It isn’t human creativity – but the two aren’t necessarily the same thing. “I think it is interesting just how creative you could make a machine,” Karenowska says. “But at the end of the day, you’ve got yourself a creative machine, not a creative human.”

AI gives more people the ability to create, regardless of training or skill – a technological leap that could democratise creativity.

We thought the camera would kill painting, but it sparked new visual languages. We thought the internet might flatten creativity, but it turned bedrooms into recording studios. We thought the printing press would cheapen authenticity, but it birthed the novel, the newspaper, the poetry pamphlet, allowed more people to access creative work, and put a Monet in every living room. All of these things inspired more creativity. Not less.
There’s an enormous amount of potential for AI to drive breakthroughs in art production when good artists use it. Without good artists, AI can only make bad art.

The appetite for human-made work isn’t going anywhere. Art by artists will continue to hold both cultural and economic value, simply because there’s already so much money and time invested in it. As Karenowska puts it, “the real concern isn’t that AI will replace artists. It’s that it might stop ordinary people from making art at all.”
Today, anyone with a prompt and a browser can generate beautiful, surreal, hyper-realistic images. Generative art tools give people the ability to visualise what once lived only in their imaginations. That’s not necessarily a bad thing – in fact, it’s a kind of magic.

The worry is that, when the result is that immediate, the process might start to erode. Karenowska talked about the desire to create being innate in all of us, but noted that “artists have been created by the coming of age of creative impulse. They’ve had to work to be able to create what’s in their minds.” Why learn to draw when you can render your dream instantly with a couple of words? Why write a poem when ChatGPT will give you three?
“AI might not be killing creativity, but it could be making us forget what it feels like to work for it”, Karenowska says. Without that effort, do we lose the satisfaction, growth, and self-understanding it might offer?

You might think, “I’m not a creative – the deskilling of creativity won’t affect me.” But what about writing wedding vows, job applications, the first message on a dating app? We might not consider these acts of art, but they still require creativity. Everyday acts of expression rely on the same creative muscles as art practice: clarity, emotion, timing, voice, understanding, and inspiration. And more and more, we’re outsourcing them.

What happens when we stop writing, drawing, and creating these things ourselves – not because we can’t, but because it’s easier not to? Does it free us up to be more creative? Or does it gradually erode our instinct to try? Is this the future? Bookshops full of AI novels; galleries overflowing with AI paintings; streaming services populated by AI generated movies. Karenowska says, “Maybe.” Maybe the better question is: what do we want art to do? Could we possibly be satisfied by this kind of content?

AI isn’t inherently a threat to art. It could just be a new tool. It could open the door to a kind of meta-creativity, where the prompt becomes the medium and the human shapes the machine in a synthesis of bot and brain. But it only works in the hands of someone with vision, curiosity, and taste.

As Karenowska says, “It’s the difference between giving a paintbrush to a child and to Picasso. The tool is the same. The art isn’t.”

There’s always going to be room for human flair. The work comes in using AI to supplement our creative impulse, instead of outsourcing it.

So, what are we going to do about it?

Across work, education, and the arts, the same tension emerges. AI isn’t replacing us. It’s offering to do things for us. And we are increasingly willing to say yes.

The risk isn’t that it thinks better. It’s that we might stop thinking altogether.

It’s easy to understand the panic. Viral headlines and social media hype tip the narrative around AI into the territory of science fiction. We imagine our children taught by algorithms, our jobs replaced by bots, and all of us stupefied on our sofas in front of AI slop. No art, no artists, no jobs, no work, no thought: nothing.

The spectre of a Terminator- or Matrix-style AI revolution looms large in our collective imagination. But ChatGPT cannot think, feel, or act on its own. All it does is predict what you’d most like it to do based on data patterns. Its influence comes from how we choose to use it and integrate it into our lives.

The real concern isn’t that AI will take over, but how quickly we’ll let it do our thinking for us – how easily we’ll trade critical thought for convenience, even when AI is worse than we are at nuanced reasoning. Not because the AI is smarter than us, but because it’s easier to let the bot do it.

Spoon with Marmite on pink background