We must expect great innovations to transform the entire technique of the arts, thereby affecting artistic invention itself and perhaps even bringing about an amazing change in our very notion of art.
— Paul Valéry, ‘The Conquest of Ubiquity’ (1928)
I have based this article on a famous essay by the critic and philosopher Walter Benjamin, written in 1935 and entitled ‘The Work of Art in the Age of Mechanical Reproduction’. It addresses the status of the fine arts in an early twentieth-century context, when photography, film, and mass-market printing were changing the way people related to visual culture.
Benjamin argued that something is lost via the mechanisation of art. That something is ‘aura’: a kind of physical and spiritual presence.
Art without ‘aura’ is less obviously tied to a space, a ritual, or a specific person. When art is untethered from these things, it begins to lose its power: it becomes reproducible, interchangeable, and ultimately less capable of moving us. Without ‘aura’, images become easier to consume and disseminate, but it’s harder to say why, or even whether, they are ‘art’.
If the spectator does not consider what they are looking at to be ‘art’, they are less likely to think critically about it. And when the spectator stops thinking critically about art, they miss out on the serious political, social, philosophical, or existential questions that art ought to ask of us.
I’m here offering a speculative postscript to Benjamin’s essay. I’m considering what AI-generated images are and how they disrupt the ways we create and consume. My core claim is that while AI will of course be a destabilising force in the arts, it may also initiate change and innovation — but only time will tell.
What Is AI, and What Is at Stake?
Artificial Intelligence (AI) is an umbrella term for machines that mimic human thinking. AI ranges from simple rule-based systems to complex neural networks (loosely inspired by the human brain) that learn from data.
Generative AI models are fed thousands, or millions, of images, texts, or pieces of music. From this data the system learns the patterns, styles, and structures of existing art, developing a kind of internal ‘map’ of how certain styles work. Once trained, the AI can generate new images or texts in those styles in response to user prompts.
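If it helps to see that two-phase structure in miniature, here is a toy sketch in Python. It is emphatically not a real generative model (no neural network, no millions of images); it is just a character-level Markov chain that learns which character tends to follow which, and then extends a ‘prompt’ by sampling from those learned patterns. The function names and the tiny training text are my own illustrative inventions.

```python
import random
from collections import defaultdict

# Toy illustration, not a real generative model: a character-level Markov
# chain records which character tends to follow each short context, then
# produces new text by sampling from those learned patterns. Real generative
# AI uses neural networks and vastly more data, but the two-phase structure
# (train on examples, then generate from a prompt) is the same.

def train(corpus, order=3):
    """Record which character follows each `order`-length context."""
    model = defaultdict(list)
    for i in range(len(corpus) - order):
        context = corpus[i:i + order]
        model[context].append(corpus[i + order])
    return model

def generate(model, prompt, length=120, order=3):
    """Extend the prompt by repeatedly sampling a plausible next character."""
    text = prompt
    for _ in range(length):
        options = model.get(text[-order:])
        if not options:  # this context never appeared in the training data
            break
        text += random.choice(options)
    return text

corpus = (
    "the work of art in the age of mechanical reproduction asks what "
    "happens to the aura of an artwork when it can be copied at will; "
    "the aura is tied to presence, ritual, and the history of the object."
)
model = train(corpus)
print(generate(model, prompt="the aura "))
```

Even at this toy scale, the point carries over: everything the generator produces is a recombination of patterns absorbed from its training data.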
This raises a plethora of ethical and philosophical questions.
For instance, are the images or texts that the AI produces truly ‘new’, if they’re based on pre-existing data? Perhaps they aren’t… or perhaps they are. Human-generated art develops through the absorption and reformulation of existing ideas too. The Italian Renaissance is an example. A work like Michelangelo’s Creation of Adam is a Christian reimagining of a Classical sculpture. If Michelangelo had been an AI, the prompt might have looked something like this:
“Hey, Michelangelo. Please make me an image of Genesis 2:7,* but do it in the style of the Ancient Greeks.”
*(God formed man from the dust of the ground and breathed into his nostrils the breath of life, thus making him a living being)


Even more complex is the question of authorship. How do we reckon with the fact that generative AI models have been trained on copyrighted artworks? The argument goes that if you use AI image generation, you are in effect stealing from the artists whose works the AI was trained on. That’s not good.
The most unsettling problem, though, concerns the implications of generative AI for our brains. I know that a bunch of people now defer to an AI, rather than to an empty piece of paper, when faced with the task of creating anything. Many people use ChatGPT to write their emails. Students use it for essays. It works famously well as a proxy therapist. People joke about its benefits as a romantic partner, though I’m sure this gag is funnier for some than for others.
I’m all for taking advantage of new technologies (had we not, we’d still be writing on clay tablets); however, we should be legitimately intrigued, if not concerned, by the ways that ChatGPT might ultimately change how we think.
AI may expedite the process of creation… but maybe the process is the whole point. If we lose the process, is the result at all meaningful? It’s only meaningful insofar as it helps us pass a test or make money, which isn’t meaningful at all, really; it’s just capitalism.
Copying is Different when it’s Done by a Machine
Art has always been reproducible, and it has always been reproduced. You could argue that the history of art consists entirely of acts of copying, emulation, reinterpretation, reception, and response.


But copies, unless they are creative, original reinterpretations of ideas, are never quite as powerful as the thing they were copied from. They can never occupy the same space, nor can they hold the physical history of that object.
For instance, you know that what makes the Mona Lisa in the Louvre interesting is the fact that it is the one painted by Leonardo da Vinci. The machine-printed postcard of the same picture in the gift shop downstairs doesn’t cast the same shadow. Even a really good drawing of the Mona Lisa has only a fraction of the ‘aura’ of the original, or maybe a different kind of ‘aura’ altogether.
The key difference between copies made by humans and those made by AI is this: human copying involves aesthetic judgement or critical thinking. The human copyist has to ask themselves what is being copied, how that copying is taking place, and for what purpose (education, reinvention, homage) that copy is being made. AI cannot do that.
So, what degree of ‘aura’ might be retained in an artificially generated image of the Mona Lisa, or The Creation of Adam? Probably none at all, because ‘aura’ is all about originality, authorship, and presence.
But even if an AI-generated version of the Mona Lisa or The Creation of Adam possesses no ‘aura’, it might still have the potential to affect the viewer. If viewers knew nothing about Leonardo’s or Michelangelo’s paintings and saw AI-generated versions of these pictures, those pictures might still spark their interest and aesthetic appreciation, perhaps even evoke religious feeling. And for some people, that would be sufficient to make these AI-generated images real, legitimate ‘art’.
However, if you think that the significance of art lies in the creative process — in the spiritual reverence or fulfilment which man experiences in the process of making — AI-generated art is expressly non-art.
How We Got Here, and Where It’s Going
Benjamin’s theory of the history of art is this:
- The earliest artworks originated as a form of ritual. Cave paintings, for instance, were created with the intention of connecting with the spiritual and magical world.
- Religious art, such as icons and relics, was the same thing, only more developed.
- As mankind progressed, art continued to be connected to ritual, but became increasingly secular. Benjamin calls this the ‘secular cult of beauty’.
- The advent of photography destabilised the meaning and purpose of art, because art was no longer tied to a physical space or person. Benjamin believed that the more immediate and simplified art became, and the less tied to ritual and space, the more easily it could be abused and misinterpreted.
I wonder whether we’re now at the next stage in this progression. AI-generated content is both a product and a symptom of the ultimate commodification and simplification of art. AI-generated imagery sort of feels like art, and it sort of operates in the world like art, but it’s freed from the confines of authorship and even the physical world.
Is this a new kind of art? Or is it like artificial, plastic food: shiny and appealing on the surface, but containing none of the nourishment we expect when we start trying to digest it?




The Bad News and the Good News
AI art gives us the illusion of a visually creative and receptive experience without providing that experience in full. When you generate AI art, you aren’t really creating anything new. This leaves very little room for innovation, critical thinking, or boundary-pushing.
It also means that the more AI-generated content we produce, the more AI systems end up being trained on their own output. The quality degrades with each cycle, producing an interesting phenomenon called ‘slop’: a kind of uncanny, low-quality, nonsense art.
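For the curious, the shape of that feedback loop can be sketched with the same kind of toy Markov chain as before (again a crude, hypothetical stand-in, not a simulation of how real systems degrade): each round, the ‘model’ is retrained only on text that the previous round generated.

```python
import random
from collections import defaultdict

# Toy sketch of the feedback loop described above: each round, the model is
# retrained only on text that the previous model generated. This does not
# simulate real-world degradation, but it makes the structure of the loop
# concrete: every new model only ever sees what the last one produced.

def train(corpus, order=3):
    model = defaultdict(list)
    for i in range(len(corpus) - order):
        model[corpus[i:i + order]].append(corpus[i + order])
    return model

def generate(model, prompt, length=200, order=3):
    text = prompt
    for _ in range(length):
        options = model.get(text[-order:])
        if not options:
            break
        text += random.choice(options)
    return text

text = (
    "art has always been reproducible and it has always been reproduced, "
    "but a copy carries none of the history of the thing it copies."
)
for round_number in range(1, 6):
    model = train(text)               # retrain on whatever exists now
    text = generate(model, prompt=text[:3])  # which is the previous output
    print(f"round {round_number}: {text[:70]}...")
```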
So, where does this leave us?
Well, if you’re an optimist, I have good news for you. When photography was invented in the first half of the nineteenth century, it wasn’t the end of art. In fact, it initiated a whole wave of revolutionary movements: Impressionism, Post-Impressionism, Modernism, abstraction, and Postmodernism.
There is a possibility that AI will initiate a similar change. It has the potential to open up new avenues of creativity and meaning. Though I certainly can’t tell you what those might look like.
I hope it’s not the end of culture.
We’ll see.
Thanks for reading! Check out my Instagram at @culture_dumper and my TikTok @theculturedump, where I post daily updates on my academic work, life, and current exhibitions.
Such an important topic and I love the way you address it! There is something so deeply unsettling about AI art. I love the idea of this “aura”, which to me feels like the basic essence of humanity. Art is created from an URGE or DESIRE to create something, to express an idea, to explore a concept. When there is no desire behind art, there is no art. A computer has no desire to create, it doesn’t FEEL what it generates. The artist then becomes the human writing the AI prompt, but I completely agree with you that the PROCESS is kind of the whole point of creating art. To sit behind a computer, type out a few sentences, and sit back as an image is generated, is not a process at all. It takes no real effort.
Thanks, I really enjoyed this. A couple of thoughts I had:
- I sometimes see art as communication between the artist and the viewer, conveying the artist's view of the world at a particular point in time (both temporally and spatially). It's much more abstract to look at AI art in this way, given its training data of millions of images and the black-box nature of how it arrived at the output.
- I think that as AI slop increases and AI and the digital world become a bigger part of our lives, art in the real world will take on higher value, as people seek out genuinely enriching experiences and a different view from the online mayhem.
- Scott Alexander has made a good point that art and aesthetics can evoke a sense of awe. Maybe AI art doesn't quite capture this. But it's possible to view how AI works as pretty amazing when you break it down. Essentially, manipulating electrical signals has led to AI models that can discover new types of medicine... how incredible is that! I'm much more bullish on those types of AI models than on using AI for art.