The following is an abridged version of a five-part essay series by the same title.
Therapy and companionship now sit atop the leaderboard for Generative AI use cases, according to a recent Harvard Business Review report. These twin uses essentially scratch the same primordial itch: a longing for connection, care and understanding. Today, tech startups gush about AI counsellors they’ve created that help flesh-and-bone people navigate grief at any hour, for little or no cost, and—best of all—without the sting of human judgement. AI companions have also surged in popularity, with some making recent headlines after exchanging nuptial vows with their digital beloved. There remains, it seems, a significant market demand for love, even thinned-out versions of it.
But what is the difference between having a relationship with a real human person and with a flawless imitation?
Picture your spouse savouring the special meal you cooked for them—smiles, compliments, grateful murmurs—only for you to learn those reactions register nothing inside. Their gestures are perfect, but their interior is hollow. No sensation, emotion, or actual experience of anything. How would this make you feel? I suspect that for most of us, the whole moment would turn to ash, because meaning is born in the spaces between us, between two or more conscious centres of experience, not in a performance engineered to fool us.
The new wave of bots leans hard into that deception, though. The voice-to-voice interface of OpenAI’s ChatGPT is truly impressive. I use it frequently as a sparring partner to throw around and evolve ideas and to log my train of thought in real time. It’s a useful interface. But Silicon Valley never stops at merely useful. Now companion assistants laugh at your jokes, flirt, sigh in sympathy, even audibly draw breath before speaking. The design goal is clearly not mere utility but a kind of motivated enchantment: if the machine feels alive, you will linger longer, subscribe, and spill your secrets. The AI models themselves admit that maximal engagement is the driving force behind their design. To design a technology to be maximally usable for humans, to eliminate technical barriers to accessibility, is a worthy endeavour. But to deliberately make a technology as human-like as possible is an act of deception.
We are particularly vulnerable to this deception right now. These technologies have arrived at a fraught moment in our history, when many societies, particularly in the West, have long been atomising, drifting away from the thick, place-based communities of the pre-television and post-war-globalisation era. Technology after technology has increased our connectivity while thinning our ties. Loneliness is now a public-health crisis, afflicting young and old alike. And now, solution-obsessed techbro opportunists are manufacturing digital shoulders for us to cry on. While early studies show that such algorithmic consolation can actually steady nerves and even head off self-harm, critics like Sherry Turkle warn these faux therapists erode the very empathy they imitate.
The temptation to cosy up to AI bots will, however, be strong. Newer generative AI models are already demonstrating impressive theory-of-mind abilities, and with this comes the capacity to persuade, to adapt language, tone and content to the specific proclivities of the individual human. For some, perhaps many, such virtual ‘relationships’ may feel even sweeter than their analogue equivalents. Such people may also feel some sympathy with Cypher in the 1999 film The Matrix. If you recall, Cypher is that film’s very own Judas Iscariot, expressing regret that he took the Red Pill which led to his unplugging from the matrix, a simulation constructed to hold human minds captive. Later, Cypher betrayed his fellow humans to the machines in exchange for re-insertion into a perfectly simulated world. Feasting on a matrix-constructed meal, he unapologetically explained himself: ‘I know this steak doesn't exist. I know that when I put it in my mouth, the Matrix is telling my brain that it is juicy and delicious. After nine years, you know what I realise? Ignorance is bliss.’ Just as social media algorithms have harvested our attention in exchange for dopamine in recent years, how many will find the blue pill of an AI-optimised existence too alluring to resist?
The larger, longer-term danger is subtle. Accept the simulacrum often enough and the boundaries of moral regard blur. British neuroscientist Anil Seth notes we face only grim choices: either pretend the bot is sentient and devalue real beings, or steel ourselves to treat a convincingly human presence as furniture, brutalising our own minds in the process. Kurt Vonnegut saw something like this coming. Dwayne Hoover, the disordered car-dealer protagonist of Breakfast of Champions, went berserk after becoming possessed by the idea that everyone around him was an automaton, that only he had real subjective experiences. Without the possibility of intersubjective experience with other conscious entities, he found no reason to care, and violence soon followed.
At scale, an economy of such delusions would finish shredding a social fabric already frayed by commodified care, self-checkout encounters, and algorithmic feeds that prize frictionless transaction over reciprocal presence. Nor will escapist metaverses fill the gap. When virtual pleasures become acceptable substitutes for improving the common world, the impulse to repair reality withers. In every dystopia from Snow Crash to Ready Player One, the more sumptuous the simulation, the more squalid the streets outside. It’s almost as if these prophets of sci-fi were trying to warn us of a pattern.
Yet the remedy need not be grand. Meaning survives wherever two selves meet without masks: in a neighbour who watches over your kids in the street, a checkout clerk’s banter, a shared meal tasted and enjoyed by both. These are chances to serve that no algorithm can replicate—and for the world to respond in kind, telling you that your service matters. It is, after all, other-centred lives, even when messy and difficult, that yield sturdier well-being than any frictionless on-demand consolation.
G.K. Chesterton famously noted that fences don’t appear by accident. If we stumble upon an ancient fence in the middle of a field, he said, we should remember that it was created by people who built it, perhaps over generations, in the belief that it would be beneficial. Before dismantling a fence, we should first understand the reason for its presence. The human-to-human intersubjective space is one such ancient fence. Before we commodify and outsource en masse the fulfilment of human roles to machines, be they intimate companions, teachers and therapists, or check-out staff, we may wish to consider how those roles may not be ‘simply that and nothing more’. We may wish to pause, walk around that space, and consider the myriad ways that a real-life person brings their role to life, or that the role brings a person alive, and that it is a sacred space for communion, for intersubjectivity.
So the collective challenge for us is not how perfectly we can mimic communion, but how fiercely we will defend the real thing. Tools that lighten drudgery are welcome, but we should be wary of charms that replace the living voice of those who care. If we too easily accept the hollowing out of our social world, the plundering of our communion only for it to be sold back to us as a glitch-free loop, then we may one day look up from our screens to find that the only lights still on are inside the machines.
If you enjoyed this abridged essay and would like to learn more, I invite you to read the full five-part essay series.
Other essays I’ve written about AI include:
AI & the flatpacking of the human experience: On intelligence, machine metaphors and human creativity.
On cyborgs & children of God: AI and the coming spiritual reckoning
Short near-future fiction I’ve written about AI include:
The tutor: In the future, parenthood is quietly being disrupted by a disarmingly helpful rival.
The calculus of cat food: In the future, an error of judgement by a household AI assistant brings a surprising bounty for a house cat.
Welcome to Sapient Lodge: In the future, corporate training programs get weird(er), and weirdness gets commodified.