There’s a certain kind of visceral unease that people have toward AI-generated content. A sense of betrayal, maybe indignation, at seeing images, articles, ads made by AI. You know the feeling even if you’re tech-positive; it’s grotesque somehow, almost offensive.
Ask people why and the critiques will come quickly: AI steals from artists, it’s soulless, it’s cheap. It’s not real, because it’s a machine, and after all machines have no creativity. They only ever reformulate what they’ve been given. And it’s true, a lot of AI content can feel pretty empty. The term “AI slop” evokes exactly that – something tonally off (not “discursively situated,” as my professors might say), lazy, without a clear message; all form and no substance. But the reason we have a term for slop is that there’s also the stuff that’s not slop, as anyone who has spent time with AI (and even official competition judges) can tell you.

People’s misgivings can be more or less informed, but I think a lot of the time they really come from a place of defensiveness. Creativity is one of those pillars we’ve chosen to base our collective humanity on, the same way we’ve held the “soul” as an intrinsic marker of personhood, the same way Enlightenment thinkers turned to rationality and free will to explain what set us apart from other animals and the natural world. Now we live in a world of intelligent computers, and we’re back to leaning on feeling and creativity. Call it “human narratives of exceptionalism” if you want – we’ve always needed a reason to feel special. Sometimes we confuse that self-soothing with insight.
Most artists will tell you that meaning emerges somewhere within the interaction of artist, artwork, and audience. It’s the process of formation, what you discover as you create, what it says to you, to others, to the world; it’s the bridge built between one mind and another. As a writer, I’d add that the core of any work is having something you want to say, a feeling to speak to, and every element should coordinate in creating its effect. Good art isn’t just about beauty or technique, as much as people may imagine it to be. It moves because it articulates something human.
The problem with AI work isn’t that it’s intrinsically meaningless, but that it flattens these distinctions. It’s that your lazy neighbor down the block can create professional-looking pieces without having to think, to deliberate, to put in any effort to create a vision or resonance with the work. It’s that people who have nothing to say can write scripts and comments and articles pretending they do. These pieces sparkle just long enough to deceive, to impress for a brief moment, before revealing they’re hollow. It’s betrayal. We gatekeep AI content because we are trying to defend intentionality. Precision, fluidity, style – every detail conveys register, care, time, skill, some kind of reality behind the result. Except it doesn’t anymore. Form isn’t meaning; it has lost its referent.
Form untethered from meaning is what the French theorist Jean Baudrillard described in his theory of simulacra. In the wonderful language of semiotics, signs are meant to point to real-world referents: “live laugh love” is originally a genuine motto, a gesture toward a certain ideal of domestic happiness. But signs can float on their own, accumulating associations that mask the original meaning. “Live laugh love” becomes not just a motto but an aesthetic, a branding to associate with. Eventually, “live laugh love” turns into a parody of branding techniques, an expression not of domesticity but commercialization. It’s impossible to say something like that earnestly anymore; it feels like a joke. The sign no longer refers to the original reality, only other signs; the referential loop is closed.
There’s nothing intrinsically sinister about aesthetic drift; it happens naturally everywhere from financial speculation to language evolution. But something changes when culture runs on symbolic impostors. Instagram is suffused with influencer aesthetics, #ThatGirl who glows at 6 am and travel montages auditioning as Hallmark movies. Theatres run endless remakes that have long since lost their creative vision. We even add lurid dyes to our foods to make them feel real, because real food colors have started feeling fake. None of these things are what they present themselves as, and on some level they’re not even claiming to be. I love the “clean girl aesthetic” because it’s so transparent – it’s really just plain normalcy, but we brand it because it’s less of a style in itself than a response to other styles, a gesture of social fluency. This is Baudrillard’s simulation.
You can’t understand politics without seeing this displacement. The ire of left populism, for instance, does lie partly in sputtering wage growth and increasing inequality. But it’s also people having to work for companies they don’t trust anymore, companies that speak in HR-speak and dress layoffs in the language of personal growth. We are surrounded by things – consensus media, public health experts, symbolic activism – that no longer mean what they are, and yet demand our belief anyway. The anger at “the system,” however people may articulate it, is a deeply felt sense that the language we speak and the rituals we partake in are removed from real life.
There’s something deeply carnivalesque about our politics today, in its true, medieval sense. It is perversion – inversion – of the social order, where hierarchy and moral codes come undone. The fool wears the crown, the crowd roars at authority, and for a moment vulgarity becomes the language of truth. The existing order is exposed. Of course, the carnival itself was only ever ritual; order didn’t truly collapse, it allowed itself to be temporarily suspended. Trump is a very rich man who claims to speak for the commoner, because the true commoner never gets to speak at all. Modern liberals don’t seem to understand this; they decry the irony, the norm breaking, the offensive language, when it’s precisely the point. The world already feels like pageantry and he makes a mockery of it.
Our carnival, though, doesn’t end in release. There is no Lent at the end, no shared moral code, no clear system even to rebel against. People name the problem in parts and direct retribution at the ones they see; there is no resolution because we lack a shared reality behind the layers of simulation.
It’s in this kind of world that generative AI is emerging – one with such tenuous connection to meaning, where we regularly accept that images don’t quite mean what they are. When it comes to AI though, images don’t have to mean anything at all. Google recently released Veo 3, a video generation model capable of creating stunningly realistic videos from text prompts and image references. For Veo 3, reality is inverted; images don’t reflect reality, they literally create it.1
As people have warned, models like Veo 3 could be used to create dangerous deepfakes – ads for products that don’t exist, news stories of events that never happened, exploitative imagery of people without their consent. Maybe, hopefully, these things will be regulated with some kind of verification or watermarking system. But where does this leave our psyche? Sensory perception operates on a different plane than other information. It is preconscious, happening before any conscious interpretation – the mind matches sensory data with its own predictions, building a perceptual experience that the body acts on before any awareness. This is why optical illusions work and jumpscares can happen from a screen. What we see is true and real to us on a visceral level that no amount of later reinterpretation can fully undo.
Imagine looking at a world that seems real and knowing that means nothing. Imagine regularly seeing things that you know do not exist – not edited, not animated, but fully fictitious – and yet look immersively real. How do we navigate a world whose reality is increasingly, on its most visceral level, forgeable? The problem AI poses for us is not new, but it is deeply, deeply escalated, and for the first time, clearly existential. So far, we have as a collective largely failed to address the social ruptures of the digital age. Perhaps this will finally force us to.
Language models work similarly: they primarily model language behavior, and any useful intelligence arises from there. Telling ChatGPT to be “an expert marketer with years of experience in the field” doesn’t magically change anything; it only tries to elicit different intelligence out of different language. Vibes are reality.
Have you ever read of a thing called a “Trojan Horse”?