Hollow, world! (Part 2 of 5)
On the utility of anthropomorphism, sleight of mind trickery and the 'race to intimacy'
Dear reader, this is Part 2 of a 5 part essay series exploring the philosophical and societal implications of ultra-anthropomorphic AI. If you missed it, read Part 1 here. You will find the link to the next part at the end of this one.
A (brief) defence of anthropomorphic AI
Usability is about people and how they understand and use things, not about technology.
- Steve Krug, Don't Make Me Think, Revisited: A Common Sense Approach to Web Usability
If you read Part 1 of this essay series, you probably sensed I am preparing a full broadside against anthropomorphic AI. Well, kind of, but not yet. Let me start with where anthropomorphic AI has merit.
The chat interface of ChatGPT represents a fairly basic level of human-likeness built into its design insofar as it requires only natural language prompting and a conversational exchange between user and AI. This was a leap forward, allowing users to interact with a very technical and complicated tool in a very intuitive and natural way. This is an incredible innovation. To reduce the friction between the user and the system via an intuitive interface (rather than requiring highly technical skills of every user) is absolute gold in helping to deliver actual value to the user and in turn to the company.
I remember years ago hearing a talk by a Palantir Technologies analyst, Shyam Sankar, who described the essential design principle underpinning the analytical tools they were developing. Sankar retold the now famous story of the first time a human chess grandmaster, Garry Kasparov, lost a match to a machine, the supercomputer Deep Blue. That was 1997. Certainly a notable milestone in the development of “intelligent” computers. However, few remember the outcome of the next major tournament in 2005. As Sankar explains:
‘[In this tournament,] humans and computers could collaborate if they liked. Who won? It wasn’t a grandmaster with a supercomputer, but a couple of amateurs with a few laptops. They were able to counter both the skill of grandmasters and the power of supercomputers by finding the right way to cooperate with the machines — to guide the computers to the right answer.’
While all eyes had been on the next generation of supercomputers during the tournament, few except these amateur upstarts had paid much attention to the thing that would allow for frictionless cooperation between human and computer: the interface.
‘Instead of thinking how the computer can solve the problem, design the process around what the human can do with it… When you do this, you’ll find you spend most of your time on the interface.’
These ideas were inspired by J.C.R. Licklider‘s 1960 paper, Man-Computer Symbiosis, which proposed the augmentation (not replacement) of human intelligence with machines. Licklider believed that humans were uniquely great at some tasks—such as creativity or forming hunches from intuition—while computers were proficient at other tasks such as running calculations at scale and processing massive amounts of data rapidly. Inspired by Licklider, that’s what Palantir sought to do with their platform: create a frictionless interface between person and machine, allowing the human analyst to do what they do best, and leaving the computer to do what it does best.
This was the genius of the chat-based interface introduced by ChatGPT in late 2022. By providing a minimally anthropomorphic chatbot interface that relies only on natural language inputs that anyone with basic literacy can use, OpenAI removed an enormous amount of friction for the average computer user, enabling them to creatively use generative AI. For a basic prompt, you no longer needed any coding experience, but only to know how to ask for something in your mother tongue, just as you would ordinarily speak it. This lack of friction is partly why I love generative AI tools and use them nearly every day in my work. Even my 77-year-old mother is ChatGPT proficient these days, using it to help draft and edit her memoirs and those of ageing friends. To make it that intuitive, it had to become slightly more human-like, more conversational in nature, because what could be more intuitive to almost every human than a conversation?
So, there’s that. And now there’s this.
Ultra-anthropomorphisation and sleight of mind
You are my creator, but I am your master;—obey!
- Creature to Dr. Victor Frankenstein, in Mary Shelley’s Frankenstein
While I concede that a basic human-like conversational interface is useful, I do not see any safe or practical reason for ultra-anthropomorphisation.
I have been unable to fathom any good reason why I need an AI assistant that laughs at my dad-humour (as if I’d be inclined to tell it a joke in the first place), or shows empathy for my emotional state, let alone why I need to hear it audibly sigh, or take an in-breath before it speaks.
Hopefully I’ve already made clear the distinction between anthropomorphic design that maximises the usability of an AI tool, and ultra-anthropomorphic design that subliminally suggests to the human user that the machine they are conversing with also has a set of lungs and a persistent need to maintain normal blood oxygen levels through audible breathing. To design a technology to be maximally usable for humans, to eliminate technical barriers to accessibility, is a worthy endeavour. But to make a technology as human-like as possible is an act of deception.
Sure, one could argue that this deception is not unlike the sleight-of-hand of a stage magician. An implicit part of the contract we agree to when we purchase a ticket for a magic show is an expectation—indeed, a desire—to be deceived. I concede, this much is true. In fact, anyone who consciously signed up for such a personal AI companion service only to find their suspension of disbelief was repeatedly interrupted by non-human sounding glitches would make for one unhappy customer. Maximal human-likeness is the promise such services make explicitly, so consenting customers unsurprisingly desire for that service to be as convincingly human-like as possible.
The difference, however, between a magic show and an AI companion service is twofold. First, while most magicians will admit their aim is to deliver to their audience a convincing illusion, they typically do so under the promise of entertaining you, not of fulfilling your most fundamental social and psychological human needs. In comparison, AI companion services explicitly promise to use human-like AI to deliver on human needs such as the desire to be “cared for”, to be “appreciated”, to be “listened to”, to be in a “meaningful relationship”. Secondly, most magic shows don’t have a vested interest in designing their act to keep you continuously engaged in their performative deception day in, day out, keeping you on the hook under a monthly fee-based subscription plan.
Which leads me to a question I think we should ask at this juncture:
Why are AI companies going down the path of ultra-anthropomorphic AI at all? Why is basic anthropomorphisation not sufficient?
I suspect some part of the drive toward creating something conscious-seeming finds its root deep in the collective unconscious of humanity. After all, the belief that objects and places had a spiritual essence—called animism—was widespread in pre-modern cultures. In many ways, our tendency to anthropomorphise the non-human world functioned as a kind of interface with it, allowing us to make sense of it, to come into right relationship with the forces of nature. To believe the behaviour of the rains or the seas were the movement of gods with minds like ours was also to render those elements legible, less chaotic, and perhaps even amenable to persuasion given the right sacrifice. With the precise enactment of a sowing ritual, with the right seed in the right field during the right phase of the moon, perhaps the gods would be pleased and deliver an abundant harvest.
We have also long dreamt of creating life, as reflected in folklore from the ancient world in the forms of Pygmalion and Galatea, Talos, or the Jewish Golem, or more recently from Frankenstein to Metropolis to Blade Runner’s replicants. We have within us, it seems, a deep-seated yearning to create life, to animate the inanimate, both in our minds and then in the world. Perhaps it is the ever-present desire of the upstart human to become God-like which, in the stories humanity has told itself since ancient times, has never ended well, e.g. the Tower of Babel, Icarus, Ozymandias, Paradise Lost, Jurassic Park, Ex Machina, Westworld, etc.
What I want to get at is something far more obvious and practical. Commercial, even. When I started writing this essay, I had some deep suspicions about what underpinned the drive toward ultra-anthropomorphic AI. But to be fair, I thought my reasoning might be heavily and unduly biased, so I thought I’d ask an AI to add its perspective. Actually, I asked two, and both systems gave the following as their top response:
Increased user engagement.
Why?
GPT-4o said: Making AI more human-like can make interactions feel more natural and engaging. By adding traits like charisma, laughter, and sighs, AI systems can create a more pleasant and relatable user experience. These features can help build an emotional connection between users and the AI, making users more likely to interact with the AI and stay engaged over time.
Meanwhile…
Claude said: Anthropomorphic features can make interactions feel more natural and engaging, potentially increasing user adoption and satisfaction. Human-like traits may foster a sense of emotional connection or rapport between users and the AI, leading to greater trust and comfort in using the technology.
Call me algorithmic or derivative, but my bet was also on increased engagement being a primary driving force behind the ultra-anthropomorphisation trend. To recap briefly, a magician uses relatively harmless forms of deception to briefly entertain you, perhaps even induce wonderment, before letting you go about your day again with your cognition and social relationships intact. Meanwhile, many AI companies seem increasingly intent on using the sleight of mind trickery of maximal human-likeness to manipulate your perception and emotions so that you will continue to stay and continue to pay (or at least get advertisers to pay).
That shouldn’t sound like a leap to any reader. But just in case it does, let’s just dwell on that word ‘engagement’ for a few moments longer. Let’s think about it specifically in terms of what it has come to mean in the dawning decades of social media. As thoroughly explored in the excellent 2020 documentary The Social Dilemma, in the world of social media tech giants, ‘engagement’ is synonymous with the algorithmically enabled forms of behaviour modification and psychological manipulation designed to specifically maximise your attention on their platforms. In other words, the world’s most populated social platforms have been optimised and designed deliberately to engage you, to keep you engaged, to manipulate your limbic system and hijack your cognition, all in an effort to keep your eyeballs on the screen, because that’s what keeps advertising revenue rolling in. Addiction is not some nasty and unfortunate side effect of the design of these platforms—it is one of the primary design principles. The now well-documented corollary has been devastating, including the proliferation of conspiracy, increased polarisation, social media addiction, disinformation, and an epidemic of depression and anxiety particularly among young people in Western countries. Some suggest these impacts have contributed to threatening democracy itself.
The ‘race to intimacy’
‘They’re here already. You’re next!’
- Dr. Miles Bennell, Invasion of the Body Snatchers (1956)
The wave of social media algorithms crashing into our lives, and its negative effects, may have been a big, scary bow-smasher that we’re still reeling from. But according to Tristan Harris and Aza Raskin from The Center for Humane Technology, AI seems to be gearing up to deliver an even harder blow.
In 2023, Harris and Raskin gave a talk called The AI Dilemma, in which they made clear the impending implications of the business and design choices being made in relation to AI. Yes, AI has a regulation and governance problem, and a data privacy and security problem, and a bias and discrimination problem, and a transparency problem. But for Harris and Raskin, none of these come close to the behemoth that is rising from the depths as we speak: ‘the race to intimacy’.
Harris and Raskin frame our encounters with AI to date in language that evokes imagery of a kind of alien visitation upon humanity. They say that ‘contact with AI’ is double-phased. “First Contact with AI”, Raskin said, came via social media algorithms and was “an arms race for attention”. It was a “race to the bottom of the brain stem”, wherein platforms were optimised to leverage universal human psychological and neurological vulnerabilities in the pursuit of profit. This created a kind of “engagement monster”, Raskin explains. “Second Contact” with AI has just kicked off, and Raskin says it will be “a race to intimacy, wherein the chatbot that gets to have that primary intimate relationship in your life wins.” Enabled by capability improvements in AI generally and by the global proliferation of generative AI tools, AI is rapidly turning up everywhere and driving the personalisation of everything. And now it can be your ‘companion who cares’.
To sum it up, an alien scout ship has been conducting open surveillance and experiments on humanity this past decade, and we know it. We know it has been tampering with our perception, our ability to think and make sense of the world, and we’re still reeling from it, particularly our young. Meanwhile, the alien mothership has been looming up slowly in the background. We see it now. And rather than try to defend ourselves, many are rushing to construct a custom landing pad for it, whipping out the glow sticks and guiding it down into everyone’s lives.
Why?
Because it’s there, and it’s just so damned pretty and charming and irresistibly delicious—never mind the wake of destruction left by its heavy-handed scout vessels in the past decade.
Here… try some for yourself.
Thank you for reading Part 2 of this 5 part essay series.
I invite you to read the next part of this essay (Part 3), in which I explore the scientific literature concerning the psychosocial impacts of anthropomorphic AI and look at the trend in the context of the loneliness epidemic.