Hollow, world! (Part 5 of 5)
On the blue-pill nihilism of techno-utopianism and an Amish lesson in wheat-chaff separation
This is the final part of a 5-part essay series exploring the philosophical and societal implications of ultra-anthropomorphic AI. If you missed the previous parts, read Part 1 here, Part 2 here, Part 3 here, and Part 4 here.
Blue pills and the nihilism of simulation
I know this steak doesn't exist. I know that when I put it in my mouth, the Matrix is telling my brain that it is juicy and delicious. After nine years, you know what I realise? Ignorance is bliss.
― Cypher, justifying his decision to betray his friends and re-enter the Matrix.
In an Ohio town in 1972, a wealthy but mentally unstable car dealership owner named Dwayne Hoover went on a violent rampage at a local arts festival, attacking several attendees and causing chaos and injury. He was finally subdued and taken to a mental institution for treatment. Hoover, we learned, had come to believe that everyone he encountered was a robot, a mere machine that acted without any real autonomy. Animated by this belief, Hoover found the notion of empathy for his fellow humans meaningless, and so began his delusional descent into nihilism, moral ambivalence, and violence.
Or so it goes in one of my favourite novels of all time, Breakfast of Champions by Kurt Vonnegut.
In the novel, the character Dwayne Hoover, whom we first encounter already in a fragile mental state, becomes convinced that everyone is a machine after encountering the idea in a fictitious sci-fi novel by an equally fictitious author, Kilgore Trout. In his mania, he comes to believe that he is the only person with free will.
Dwayne Hoover may have started out mentally vulnerable, but one wonders if what tipped him over the edge into madness was not merely obsession with an idea, but the implications of that idea. If other people were mere machines, he had lost all opportunity for communion. In losing those opportunities, none of his social roles—as a family man, a businessman and employer, and a community member—made any sense. It no longer made sense to serve, because who was he serving? His social roles became absurd abstractions, his world hollowed out of all meaning. The tragedy of what happened to Dwayne Hoover is that he lost all opportunities for the intersubjective. In his distorted perception of the world, communion with other conscious beings became an impossibility, and it drove him to acts of cruelty, inhumanity and nihilism.
When we accept ultra-anthropomorphic AI as an adequate substitute for human roles, we place ourselves at risk of gradually slipping into a distorted perception of reality not dissimilar to Dwayne Hoover’s. Even if we can convince ourselves in the short term that the substitution is acceptable in the absence of better options, in the long term I suspect something in the human psyche registers the loss of real communion. But the risk in the interim―however long that may be―is that we continue shredding the social fabric, alienating ourselves further from opportunities for communion, until Hoover’s loss of the intersubjective, and his existential frustration, become our own.
There is a brutalising effect that goes in the other direction, too. As British neuroscientist Anil Seth has explained, when we are confronted with a conscious-seeming AI, we face two terrible choices. The first is to not even attempt to resist the illusion that it is conscious, and to treat it as if it were, in which case we distort our cognition by ascribing characteristics to the machine that it does not possess. As Seth said, we end up caring about ‘hunks of silicon and code’ and then ‘caring less about other things that really do warrant moral consideration’. The second option is to resist our empathic impulses and behave as if we do not care about the agent, even though its conscious-seeming behaviours overwhelmingly make us feel that it is conscious. In that case, ‘we end up brutalising our own minds’. It’s for the same reason, Seth argues, that ‘we don’t rip up dolls in front of children even though they’re just made of plastic’: we intuitively know it’s psychologically harmful, for both us and the child bystander.
Yet tech companies continue to roll out super-persuasive ultra-anthropomorphic AI that is already better at persuading us than other humans are. This is happening despite (as I explored in Part 3) the existing proliferation of mentally vulnerable Dwayne-Hoovers-in-waiting. Their mental fragility may not often manifest as violent rage, but instead as the hordes of lonely and withdrawn young people in South Korea, or the young hikikomori hermits of Japan, or the nihilistic pessimism of China’s tang ping (“lying flat”) movement, who have simply “given up trying”. Similarly, they may be among the anxiety-afflicted masses of Generation Z who seek some solace in an AI companion.
It’s hardly their fault. Regardless of whether we ever intended it, we’ve created the social, economic and technological web of circumstances that has conditioned them for it. And that conditioning has tenderised us, making us more likely to accept the ideological assumptions built into the techno-utopian solutioning of the Silicon Valley elite with hardly a second thought. Not least among these is the notion that any equivalence can be drawn between human intelligence and the intelligent-seeming outputs of a machine. That one is a Trojan Horse, as I argued elsewhere. As a recent cognitive science paper put it:
‘When we think these [AI] systems capture something deep about ourselves and our thinking, we induce distorted and impoverished images of ourselves and our cognition.’
Make no mistake, the thing inside the Trojan horse is nothingness. Not just empty, but an emptiness, a dark and abysmal void. The ideologues don’t recognise it as such, but there are latent nihilistic potentialities within the techno-utopian project, if not within the techno-utopian ideology itself then at least in its potential real-world outcomes. For example, billionaire tech investor and author of the Techno-Optimist Manifesto, Marc Andreessen, explained how he believes technology such as the ‘metaverse’ will improve the lot of those whom fortune did not bestow with ‘Reality Privilege’—the advantage of living in an abundant, highly stimulating and fulfilling real-world environment:
‘[R]eality has had 5,000 years to get good, and is clearly still woefully lacking for most people; I don’t think we should wait another 5,000 years to see if it eventually closes the gap. We should build—and we are building—online worlds that make life and work and love wonderful for everyone, no matter what level of reality deprivation they find themselves in.’
Andreessen argues that those who possess ‘reality privilege’ would call this conclusion dystopian and ‘demand that we prioritise improvements in reality over improvements in virtuality’. Andreessen has a point, insofar as the project of bettering real-world conditions moves slowly, and a failure to fully leverage available technologies that promise to alleviate deprivation may constitute moral negligence. However, the problem is not that such technologies are developed and deployed while we simultaneously pursue the betterment of real-world conditions. The problem emerges when, whether we intend it or not, the technological solutions become an acceptable and permanent substitute for that betterment. It is conceivable, for instance, that the mass deployment and uptake of such technologies could unintentionally reduce the urgency with which we resource and tend to real-world conditions.
Neal Stephenson coined the term ‘metaverse’ in his 1992 cyberpunk sci-fi novel Snow Crash, which does indeed depict a dystopian world bifurcated between those who possess ‘reality privilege’ and those whose ability to thrive is almost exclusively digital. It is a world in which our hopes and dreams for real-world betterment have been diverted into virtual worlds, where a kind of ‘bait and switch’ has occurred at some point. Similar imagery is conjured in the 2018 film Ready Player One, wherein conditions for most people have deteriorated so much that “there is nowhere left to go” except into the metaverse-like OASIS.
It’s not clear whether these conditions emerged by deliberate design within either Stephenson’s fictitious world or the world of Ready Player One, but I suspect that―just as sometimes happens in reality―such dystopian conditions emerge through a failure to foresee the unintended consequences of our well-intentioned actions and how they may converge with broader socio-economic and ecological changes in unexpected ways. Either way, in both of these stories, the extent to which virtual simulacra have become acceptable and total substitutes for the real world is the extent to which the latent nihilism of techno-utopianism―intended or not―has been permitted to manifest.
The ever-prescient Stephenson also anticipated personal AI tutor-companions in The Diamond Age, another techno-dystopia, in which he crafts a kind of magical educational book that adapts to the child protagonist’s needs, allowing her to excel academically. But in the story, the seemingly magical book turns out to be reminiscent of the Mechanical Turk: behind it is a voice actor, Miranda, who develops a maternal love for the child from afar, a love that drives her to provide guidance that will nourish and protect the child as she grows. The message of the story was less about the wonders of technology supplanting fundamental human roles like that of the teacher, and more a reaffirmation of those roles, of the very real love one human can have for another, and of what that love can produce against the odds.
Given that market-leading technologists seem to have missed this meaning and are sprinting headlong into a dystopia, I tend to share Ethan Mollick’s bewilderment when he said:
‘It is weird that immanentizing the eschaton is now a business model for some of the world’s largest companies.’
It is a strange preoccupation, although I can also see how it may hold appeal for some. Cast your mind back to the beginning of this essay series, when I asked if you’d be unsettled to discover that your spouse’s expressions of gastronomical delight were merely hollow yet convincing simulations. I would still assume that such a revelation would disturb most people, although not all. Some may believe the simulation to be even sweeter than reality. Such people may also have felt sympathy with Cypher in the 1999 film The Matrix. If you recall, Cypher is that film’s very own Judas Iscariot, expressing regret that he took the red pill that unplugged him from the Matrix, a simulation constructed to hold human minds captive. Later, Cypher betrays his fellow humans to the machines in exchange for re-insertion into a perfectly simulated world.
The social ramifications of the defection of a single ‘Cypher’ from reality seem trivial. Yet if uninhibited tech developers storm toward a perfect simulation, the dopamine-jacking pull of that offering may prove irresistible, triggering a mass exodus of Cyphers from the often disappointing and uncomfortable real world. This may be especially true for a generation bombarded by climate doom and continually told they are “in danger of being the first generation in memory to have lower living standards than their parents’ generation”. Through the eyes of Andreessen and his ilk, Cypher must represent a distinct, large and potentially lucrative market segment worth pursuing.
Preserving communion
We need to be more concerned about what we are becoming than about what we are doing. Our doing should come out of our being.
Amish proverb
“We are told to be angry, bitter, and resentful about technology… we are told to be miserable about the future”, said Marc Andreessen in his Techno-Optimist Manifesto. Maybe that sounds like what I’m doing here. But no, that’s not it at all, really. I personally think AI is a marvellous set of technologies. I harbour no intention of going full Butlerian Jihad or throwing the proverbial baby out with the bathwater. I want to be clear that the only case I have wished to build in this essay is for agency in the face of a seemingly unstoppable technological behemoth. While we’re on the cliché metaphors, what I am arguing is that we are each within our rights to want to separate the wheat from the chaff, to assert that it is reasonable to peel the banana before eating it, to believe it feasible to avoid the thorns while still plucking the rose. This is not a wanting-to-have-my-cake-and-eat-it-too situation. It’s an I-only-ordered-a-cup-of-tea-so-please-stop-trying-to-feed-me-cake situation.
Ultra-anthropomorphic AI is chaff, peel and thorn. It’s the cake you neither ordered nor wanted, for either ‘eating’ or ‘having’ purposes. As I have argued in this essay, maximally human-like AI is not only a deception that can erode the social fabric, but also completely unnecessary, adding nothing to the practical value promised by the underlying technology. I conceded in Part 2 of this essay that a basic conversational interface on an LLM was indeed a laudable innovation, but there is nothing inextricably linking generative AI technologies with maximally human-like design. Therefore we do not have to merely acquiesce to the techno-utopian ideologues and accept these things as an indivisible package. Hopefully I’ve made the case clear by now that we shouldn’t accept it. Ultra-anthropomorphic design is calibrated to seduce, both literally and metaphorically, and we let it into our lives at our peril.
If we were not living in a historical context of social fragmentation, I might well make more concessions and moderate my tone somewhat, because strong social capital and tight-knit families and communities would likely serve as a constant counterweight to its influence. Alas, that does not describe most of the social contexts we inhabit today, particularly in the West. As such, we cannot expect that the appropriate and healthy metabolisation of such a technology will just happen by itself. And beyond the potential for exacerbating the dissolution of communal bonds, there is much more at stake. Louis Rosenberg, the CEO & Chief Scientist at Unanimous AI, made clear the scale of the threat at hand in a recent paper when he said:
‘[W]e must expand this view and consider generative AI not merely as a vehicle for producing traditional content at scale, but as enabling an entirely new form of media that is adaptive and interactive in real-time and can be personalized for individual users… [T]his new form of targeted influence could be used to manipulate individual users at scale and compromise human agency.’
So, where to from here then?
Well, do you know who probably knows something about separating wheat from chaff?
The Amish.
And I think they have some wisdom worth heeding when it comes to how we assimilate technology into our lives.
It’s a common misconception that the Amish are Luddites who eschew all modern technology. They don’t. (Actually, the Luddites didn’t do that either.) Rather, the Amish have a very strong and deliberate vetting mechanism for evaluating the cost/benefit balance of technologies before accepting them into their communities and households.
Amish communities—because they apparently still have communities, and I suspect I know why—take a careful and patient approach to evaluating the extent to which any prospective technology will either support or subvert the things they value above all. In the case of most Amish communities, the things they most want to thrive are family and faith. In Amish country in Lancaster, Pennsylvania, for instance, "when a new technology comes along, its effect on the church and community is examined." They’ll experiment for a time with a technology to see its effects on the things they hold most sacred before making a decision. And it’s not always a binary decision. They may, for example, reject smartphones because of the moral dangers they perceive (perhaps quite rightly) to stem from continuous internet connectivity, social media, ubiquitous pornography, and so on. But some Amish communities are happy to accept ‘dumb phones’, which enable staying in touch with family without turning family members into screen-distracted scroll-addicts at the dinner table. Similarly, the Amish may not be all that keen on an off-the-shelf, fully featured gaming laptop, but might be more than willing to adopt a stripped-back computer equipped only with spreadsheet software to help manage business finances.
The Amish, it seems, are good at asserting their agency. I thought of them recently when I stumbled across one of Tyler Cowen’s old posts about AI, which had a familiar “eh, mass disruption from AI is inevitable so let’s just plough on” vibe to it. I personally don’t buy Cowen’s conclusion that despite the risks we should just “take the plunge”, or indeed that “we already have taken the plunge”. I'd hazard a guess that the Amish might beg to differ, and might argue that they (and perhaps we too) do indeed hold some power to consciously exercise agency at this moment in history.
If adopting an Amish ethos seems a step too far for you, consider another framing. The heuristic known as Chesterton’s Fence is the idea that fences don’t appear by accident: they are created by people who designed them (or perhaps they evolved) in the belief that they would be beneficial. Thus, before dismantling a fence, we should first understand the reason for its presence. Similarly, before we commodify and outsource en masse the fulfilment of human roles to machines, be they intimate companions, teachers and therapists, or check-out staff, we may wish to consider how those roles may not be ‘simply that and nothing more’. We may wish to pause, walk around the role, and consider the myriad ways that a person brings it alive, or that it brings a person alive, that it is a sacred space for communion, for intersubjectivity.
Of course, the rollout of such technologies seems unstoppable against whatever puny resistance each of us can mount. The tech companies are themselves trapped in a ‘race to intimacy’, the corollary of a multi-polar trap: an arms race of sorts that demands their unbridled participation if they are to survive. But there is a way out of a multi-polar trap. It’s the same tactic that has served humanity well in staving off nuclear holocaust since 1945: cooperation for coordinated restraint. I am not holding out for tech companies to embark on any such cooperation, and lagging regulatory constraints don’t seem to offer much hope. But we—dear reader—can cooperate. Together at the level of family, within community, between friends, between parents, teachers and children. We can state clearly what it is that we value above all things, and we can deliberate and make decisions in those realms that preserve the things we hold sacred, the spaces between us that make communion possible.
We may have little chance of healing the fragmentation of communities in a single generation. But we will have no chance of healing those wounds in this generation or the next if we start down the path of outsourcing our relationships to machines and vanquishing the web of shared meaning that nourishes us all.
Thank you for reading the final part of this 5-part essay series. If you missed any of the previous instalments, you can read them here: Part 1, Part 2, Part 3, Part 4.
I hope you’ve found this essay series worth your time. If you have, please drop me a comment below, and subscribe to receive future essays in your inbox.