Rise of a “useless class”
A kind of bleak fatalism has formed around artificial intelligence (AI). It’s an unsettling mix of awe, anxiety, and resignation that runs something like this: AI has gotten really good, really fast, and it’s expected to get even better, threatening to not only take many jobs, but to eclipse humanity itself.
Elon Musk says “probably none of us will have a job” under a “benign scenario”. Henry Kissinger called artificial intelligence the “biggest challenge of our times”, warning that machines could replace humanity within five years. And Bill Gates recently said that in the near future, humans will not be needed “for most things”. This barely scratches the surface. Many of those closest to the technology (its builders, funders, and hype men) talk as if the next phase is inevitable, as if mass human redundancy has already begun. AI now seems capable of generating not only essays, video and code, but also bucketloads of anxiety.
It’s not unreasonable to feel unsettled, of course. AI belongs to a rare class of technologies, those like the steam engine, electricity, or the internet, that reshape the underlying structure of society. These are often called General Purpose Technologies, and history shows they tend to bring rupture. But AI may be something stranger still. Unlike the technologies that came before it, this one augments the very faculty that made all other technologies possible: our intelligence.
AI is now performing tasks that, for most of human history, only we were capable of doing. A Stanford report showed that generative AI now outperforms humans on key benchmarks like language comprehension, logic, visual reasoning, and image classification—though it still lags in tasks like complex maths and long-term planning. New models are even demonstrating early signs of theory of mind, the ability to infer what others are thinking, a key aspect of empathy and social intelligence. GPT-4, for example, performed on par with a six-year-old in recent theory of mind tests. Meanwhile, in a recent Turing test, GPT-4 was judged to be human 54% of the time, which becomes particularly surprising when considering that actual humans were correctly identified only 67% of the time. The perceived effect of these advances is compounded by the rapid enhancement of voice- and text-based AI chatbots to sound and behave in maximally human-like ways, even to the point where they are being marketed as therapists or companions for the lonely (a subject I’ve written about in depth over here). These gains, made in just the last few years, suggest that AI is not only catching up in capability but also beginning to shape our own perceptions of the technology: we’re starting to see it as increasingly like us.
But this isn’t a sudden rupture. It’s part of a much older pattern. The story of technology has always been about extending our capabilities, making work easier, more efficient, and boosting human productivity. If we were to map the full range of human capability, we might divide it into two broad domains: physical and mental tasks. From the earliest tools, humans began outsourcing physical labour to technology, as well as beasts of burden (and often a combination of the two). This process accelerated during the Industrial Revolution with the advent of steam power and electrification, and by the 19th century, machines had begun to encroach on basic mental tasks too, especially with the rise of mechanical calculation. The 20th-century computing and robotics revolution dramatically expanded the reach of machines into the cognitive domain, automating more and more of what humans once did mentally and manually.
Now, with the arrival of artificial intelligence, that encroachment has leapt forward again—kicking off what many are calling the Fourth Industrial Revolution, and leading forecasters at the World Economic Forum to predict that 92 million jobs will be displaced by AI between 2025 and 2030, with 41% of companies planning workforce reductions. Meanwhile, a recent Pew Research study found that one in five U.S. workers holds a job highly vulnerable to AI takeover, meaning their core tasks could be replaced or fundamentally altered.
A flood of such predictions has emboldened techno-prophets such as Yuval Noah Harari, the historian behind the bestselling book Sapiens, to declare that AI will create “a class of useless people”, a whole cohort who can offer little to no economic value in a world saturated by cheap, efficient and effective machines.
So the question that hangs in the air now, the one that shows up in headlines and conferences and quietly at the backs of minds, is an unsettling one: what will be left for us?
Homo sapiens: the exceptionally capable creature
Yet AI is not merely a technological disruption with economic, political, and social consequences. Tectonic shifts are coming, without doubt. Yes, AI is accelerating job displacement, eroding trust through deepfakes and disinformation, entrenching surveillance, undermining democracy, and concentrating power in the hands of a few. In the realm of security, it threatens to automate warfare, escalate cyber conflict, and behave in ways we may not fully control. Ethically, it risks hollowing out human meaning, delegating moral judgement to machines, and—at its most extreme—posing existential threats.
But something even more profound is brewing, something deeper than tectonic, something that reaches the core. More starkly than any technology before it, AI is forcing us to confront an ancient question:
What makes us uniquely human? Are we truly exceptional among all animals?
As machines begin to mimic and even outperform us across an increasing number of capacities, we find ourselves asking what truly sets us apart. The most common move in response to these conditions is a kind of strategic withdrawal: a retreat to defend an ever-shrinking set of higher-order human capacities, such as metacognition; problem solving and creative intelligence; social intelligence and the ability to cultivate relationality and intersubjectivity; and the possession of free will and the exercise of moral agency. In the immediate years ahead, these capacities do indeed seem like a safer bet than run-of-the-mill skills like coding or logical reasoning. There remain ‘computerisation bottlenecks’ around each of these capacities. That is, it’s hard for a machine to do these things well with currently available technology, and those technical difficulties may persist for some time yet.
In addition, some of these bottlenecks may not be mere technical barriers. For instance, some of these capacities seem to be inextricably entwined with consciousness itself, that mysterious quality of being aware, of having a subjective experience of what it is like to be you. Humans still hold an advantage in social intelligence, for example, precisely because we are conscious beings: we have an inner sense of self, we experience the world as meaningful, we can imagine what it’s like to be another conscious being, and we can co-create shared meaning with others. If machines are not conscious, then none of this is possible, and perhaps we can rest assured that the assault of AI on human capacities will stop dead at the gates of the last bastion of consciousness.
It follows that those of us wishing to ‘future-proof’ ourselves for work in the coming decade may find solace and benefit—at least for now—in further developing these capacities. Indeed, if I were a betting fellow, I’d wager that a few capacities humans possess will remain uniquely human for some time yet, perhaps indefinitely. The first is deeper forms of creative cognition (such as wisdom, judgement, intuition and aesthetic taste) that depend on a complex interplay of embodied cognition, perspectival metacognition, contextual and cultural intelligence, and moral aspiration.
The second would be social and emotional intelligence, which depend on being able to experience emotion, imagine and accurately anticipate the experience of others, and respond appropriately from that place of understanding to foster connection. Machines can feign empathy, but they do not feel. And because they do not feel, they cannot foster an intersubjective experience with us, and without that, genuine connection is impossible.
The third is moral agency, which, because it is tied to free will, and free will is only possible with consciousness, may endure for as long as machines are not conscious. Of course, machines can follow rules or simulate ethical reasoning, but without consciousness or free will, they lack true understanding and a sense of responsibility. A moral agent, by contrast, is someone who has cultivated not only wisdom, courage, and self-regulation, but also an ethical compass that points toward fairness, justice, hope and charity. To exercise moral agency is to use one’s free will to take responsibility for ensuring that the actions we take in the world align with these values. In an AI-powered age, it seems clear enough to me that there will remain enormous demand for human beings who can wield that responsibility well, who can make wise decisions that make the world better and more just.
For the immediate future at least, strengthening these very human capacities may constitute a unique value proposition in a market full of machines. What seems obvious is that we won’t need more humans who are good at behaving like machines: machines will be cheap and abundant. We’ll need humans who can live into the fullness of the capacities their humanity affords them.
However, that isn’t quite the whole story.
The problem with capacity-based human exceptionalism
There are two problems with the rationale that an enduring human exceptionalism can be based on capacities alone. The first has nothing to do with thinking machines. Even before any encroachment of AI, neither consciousness nor the capacities it confers seem to belong to humans alone. New Caledonian crows, for example, have been observed using tools in sequence, using one to retrieve another, then using that to access food, and have even fashioned tools from leaves and wire to solve novel problems. In one famous study, a crow bent a straight wire into a hook to retrieve a bucket from a tube, something it had not been trained to do. Trees, too, exhibit remarkable relational capacities. Through underground fungal networks known as mycorrhizae, they communicate and cooperate, exchanging nutrients, warning each other of pests, even favouring their kin. Bottlenose dolphins have been trained to opt out of trials when uncertain, a behaviour interpreted as awareness of their own uncertainty. This kind of confidence-based decision-making is seen as a key marker of metacognition, once thought to be the exclusive domain of human minds. And in a 2003 study, capuchin monkeys rejected unequal treatment, discarding cucumber slices when they saw a neighbour receive a grape for performing the same task, suggesting a visceral sense of fairness. The more closely we look, the more these capacities appear not as uniquely human traits, but as expressions of a deeper pattern that runs through life itself.
The second problem arises if we take seriously the possibility of the emergence of consciousness within AI, that at some point, a distinct and felt sense of subjective awareness becomes possible for an AI. I personally find this scenario implausible, but if I am entirely honest, that position is more wishful thinking than anything robust. Any rationale I can muster for it rests on shaky ground. For instance, if consciousness is merely an emergent property of biological systems like the brain, as is presently the dominant assumption among neuroscientists and philosophers, then perhaps the “hard problem” of consciousness will eventually be solved, and with it, the possibility of conferring conscious experience upon machines.
But even if the “hard problem” isn’t solved, we are still left with a problem. An alternative hypothesis to the brain-centred emergentism hypothesis, proposed most compellingly by philosopher of mind David Chalmers, is that consciousness is a fundamental property of the universe itself, baked into the structure of reality, a universal building block of nature. This view is called panpsychism, and it suggests that all things, even the most basic particles of matter, have some form of mind-like or experiential quality. It is not to say that electrons are ‘thinking’ or having emotions, but rather that there is something it is like to be an electron, some primitive form of experience, however faint or basic. In other words, panpsychism doesn’t imply that consciousness as we humans know and experience it is everywhere, but that the building blocks of consciousness might be present even at the smallest scales of reality. It follows that if consciousness is fundamental, or at least prior to biological processes, this need not represent a hard limit for AI. Within this framework, even a machine, which is made up of the same basic particles as everything else, might one day be capable of hosting a trickle of conscious experience, to the extent its architecture can carry that signal.
A coming spiritual reckoning
So where does that leave us? What coherent position can we take up if a defence of capacities alone, even capacities conferred by consciousness, or even consciousness itself, are all potentially precarious?
This is precisely the launching pad for transhumanism. The transhumanist project, the emerging quasi-religion of our time, places its faith in the coming of a great superintelligence with whom we will either merge or be superseded by. And because it is an ideology ensconced firmly in strong physicalism, the view that all phenomena, including consciousness, can ultimately be explained in physical terms, its adherents do not cling to any notion that humans are somehow exceptional. On the contrary, we are “ape-brained meat sacks”, according to Elise Bohan, a prominent Oxford University-based transhumanist philosopher, echoing AI pioneer Marvin Minsky’s famous “meat machines” line. “We weren’t here in the beginning,” said Bohan, “we will not be here at the end... and we clearly aren’t the be-all and end-all of existence.”
There is, according to the dominant thread within transhumanism, nothing exceptional about humans. Indeed, among transhumanists, there is a mix of ambivalence and even enthusiasm about the idea that the biological human may soon be obsolete. Humans, in this view, are merely transitional, a stepping stone to something greater, midwives to greater forms of intelligence. Bohan put it succinctly in a debate hosted by UnHerd in 2024:
‘I don't think it's necessarily bad that in the long run we end up seeding forms of digital intelligence that might be smarter than us. This has happened before, with our big brains. We proved to be better adapted to life in the Holocene than many of our competitors. The AIs of the future may prove themselves to be better adapted to life in this transhuman world we’re entering now.’
Transhumanists have performed an audit on humanity’s greatest capacities and found them wanting. We are, they contend, a mere link in an evolutionary chain. In that context, it makes sense to yearn for, and indeed serve as agents of, the coming of a more evolved intelligence, one so advanced that it might save humanity from its folly, and remake the world in its post-human image.1
Imago dei and human vocation
If, like me, your impulse is to rebel against this misanthropic nihilism, if you have an intuition that physicalism, and therefore transhumanism, has missed something, then I have another question for you:
What if we are not exceptional because of the capacities we possess, but because of our purpose? What if our worth lies not in how we measure up to other creatures or machines, but in the role we are called to play within the wider fabric of existence?
Let me be clear. These are questions of metaphysics. They form the basis of a spiritual inquiry. And if transhumanism is the logically consistent endpoint of the physicalist worldview that has dominated the modern mind, and we do not like the misanthropic nihilism it portends, then spiritual questions are the only important questions left to ask. Even for those ensconced in physicalism, the transhumanist vision may be unsettling, because it says the quiet part out loud: that in the grand arc of the universe, we do not fundamentally matter all that much. In that view, it makes little sense to speak of a human “purpose” or “calling.”
However, the notion of a human vocation does become possible if we’re willing to engage in a metaphysical inquiry.2 I say that self-consciously because I am part of a culture that has all but eradicated metaphysics from public conversation. In the Anglo-European culture shared across the West, to take spirituality seriously in the public square, beyond the confines of one’s home, religious institution, or some Eat-Pray-Love-style meditation retreat, is to risk a loss of credibility. And yet, here we are, being forced to contend with the question of what, if anything, we were made for. G.K. Chesterton warned against toppling fences before we know why they were erected. In that spirit, and in the interest of avoiding a slide into scientistic hubris, perhaps even diehard secularists, and anyone who finds the transhumanist vision repulsive and the humanist defence too feeble, might be willing to entertain this possibility: that many of the world’s great religious traditions may have had something important to say on the question of human vocation. After all, it is the modern secular mind, not the religious one, that is the anomaly in history. Such traditions have been contending with the question of human identity and purpose since long before it could be recorded.
Indeed, almost all the world’s religions, to varying degrees, have claimed that the human being is exceptional. Religious perspectives on human distinctiveness span a wide spectrum, from traditions that place humans in a uniquely elevated position to those that emphasise deep interconnectedness with all life. Abrahamic faiths (Judaism, Christianity, Islam) generally teach that humans are made in the image of God, endowed universally with inalienable dignity and, more particularly to our species, a special moral responsibility and a vocation to steward creation. Eastern traditions like Hinduism and Buddhism affirm the human calling as one of a quest toward spiritual liberation, while maintaining that all life shares a sacred essence. Indigenous spiritualities often see humans as specialised members of a broader web of life, a custodial species that is unique, although not superior. Across traditions, there is a shared intuition that the human role in the cosmos is marked not just by what we can do, but by who we are called to be.
The Jewish and Christian traditions are perhaps the clearest on this point. In these traditions, human identity is not grounded in mere capacities, but in imago dei, the idea that humans are made in the image of God. This doesn’t just mean that we share divine attributes like reason or morality. The ancient Hebrew word for "image," tselem, which was used to describe the idols made to represent kings and the extension of their sovereignty in their dominions, suggests something deeper: a calling. It conveys the sense that humans are named and tasked as God’s representatives within creation, charged with care and stewardship, bound in relationship to the Creator, and in service of Creation. In this view, human worth is not dependent on outperforming animals or machines. It is bestowed. It is covenantal. It comes not from what we do, but from the role we have been given in the moral and spiritual architecture of the world, a role that is not diminished by the fact that we share certain capacities with the non-human world, whether animal, plant, fungal, or even machine. And this view is not limited to Christianity and Judaism. Many Indigenous traditions, too, hold that humans are part of an interconnected web of life, with particular custodial responsibilities, called to care for the earth and maintain harmony with the natural world, not as conquerors, but as caretakers.
A bifurcated world
In the end, this is how we arrive at a spiritual crisis. AI is forcing our hand. If our uniqueness as humans is bound up with our capacities, our cognition, our creativity, our dexterity, then that uniqueness may be precarious. We have already ceded a lot of ground to machines here, and while I personally believe some capacities will remain beyond machine capabilities for some time yet, it is increasingly untenable to maintain an absolute stance on this question. Machines have bested us at games of strategy, language, and design. They are learning to recognise emotion, generate art, and perform delicate tasks with increasing finesse. The last bastion may well be consciousness itself. But even here, no one can say with confidence what will happen. There is no definitive evidence to prove whether machines can or cannot become conscious.
If that line does fall, if artificial systems somehow cross the threshold into genuine consciousness, then the only defence left against nihilism is not a scientific one, nor a philosophical one, but a spiritual one. And if God is truly dead, as Nietzsche once declared, then the transhumanists are already the new prophets.3 They are the ones telling the story of what comes next: a post-human world, where we are either enmeshed within vast superintelligent systems, gently rendered obsolete, or violently erased. In this view we are merely “the sex organs of technology”, as Marshall McLuhan once put it, mere biological scaffolding for a future that will no longer need us. At best, we become co-conspirators in the rise of a new kind of being, aligned to an alien set of values. At worst, we become useless humans, vassals, or extinct animals who overstayed their welcome. But make no mistake, this is not just a fanciful vision of the future, but an ideology on the rise, one built on the foundational assumptions of physicalism. The breakneck pace of technological advancement seems almost to demand a response, one way or another, lest we have our future colonised by things that don’t necessarily mean us well (to paraphrase the mythologist Martin Shaw).
The nature of our relationship to these powerful new technologies is already being shaped by how we answer the question of what we are. I suspect many will accept the transhumanist vision and its physicalist tenets, and perhaps even its consequences: a surrender of any special human role or responsibility in the cosmos. On the other hand, I suspect many will rediscover the idea, at least in some form, that humans are imago Dei, image-bearers of God, with worth that isn’t earned or made obsolete by technology, but rooted in a sacred and unassailable vocation.
It’s not lost on me—though it often seems to be lost on many transhumanists—just how thoroughly quasi-religious this vision is: a coming superintelligence, the promise of transcendence and being uploaded to the “cloud” or immortalised as creators of intelligence, the shedding of mortal limitations, and the remaking of the world according to a higher logic—this is silicon-based eschatology.
Of course, humanist schools of thought have offered frameworks for understanding human purpose through reason, without recourse to metaphysics. Yet it is not clear whether such frameworks possess the stability needed to resist transhumanism. One of the arguments long made against humanism is that it lacks an absolute foundation, resting on reason alone, thereby making it vulnerable to shifting values and potentially unable to hold a firm stance against transhumanism’s transformative aspirations. After all, both share a physicalist worldview—one where human values can be readily exchanged. Further, while humanism today emphasises the dignity of all people, this is a relatively recent socio-political ideal which is arguably rooted in earlier religious traditions, particularly Christianity. Secular humanism has only recently sought to sever this ideal from its theological origins, revealing—particularly in postmodern thought—a slide toward moral relativism. As a historical project, secular humanism is still young, and its long-term resilience remains uncertain.
Transhumanism has a fraternal twin: anthropocene antihumanism, another misanthropic movement which has emerged from dark green environmentalism and its adjacent or intersecting philosophical movements (I recommend Adam Kirsch’s short treatise on the twin misanthropic movements). Antihumanism emphasises deference to the non-human world, demoting the human at least to the level of the non-human, if not lower. In some cases, human extinction is openly welcomed. Of note, an acclaimed former proponent of adjacent ideas, Paul Kingsnorth, has since abandoned the movement upon his conversion to Eastern Orthodox Christianity, a tradition in which imago dei is a key doctrine.