(This is a continuation of a series. Read Part 1 here.)
When we think of the human-induced tragedies in recent memory, we tend to think first of those instances where harm was explicitly intended: war, terrorism, genocide, and so on. Yet our era is also replete with catastrophe born of good intentions. Humanitarian programmes to distribute free food in developing countries have been shown to increase the incidence and duration of civil conflicts. The introduction of fishing quotas by the European Union to protect fisheries led to further degradation of fish stocks. Conservationist efforts to reduce cat populations on the pristine sub-Antarctic Macquarie Island triggered an 'ecological meltdown'. The United States government's success in driving Soviet forces from Afghanistan in the 1980s produced a blowback effect that can be traced directly to the tragic events of September 11. A ban on child labour in India ended up producing increased levels of child labour. Efforts to protect much-loved forests in the United States from wildfires served to increase the damage inflicted by fire on those forests. And then there is the discovery of oil and the invention of the combustion engine, which produced a revolution that generated the most extraordinary levels of wealth and comfort in human history, while also giving humanity the power to inadvertently induce the sixth mass extinction and a looming existential climatic shift.
Tinkering with the world seems to be a human impulse. This has served 'big civilisation' well for most of history. The magnitude of the potential negative consequences of our tinkering has, until very recently, been constrained by the crudeness of our technology. However, as our technological prowess has increased (and now does so exponentially in some domains), the temptation to tinker and create new worlds carries with it a greater capacity to destroy worlds, whether unwittingly or - as unthinkable as it may be - deliberately. The discovery of CRISPR, for example, makes GMO technology look almost archaic. With CRISPR it has now become possible for a single individual with the know-how to tinker with the genome of any species (including our own) and create a cascade of mutations that could plausibly produce fundamental, unpredictable and irretrievable shifts in the course of evolution.
Similarly, recent advances in artificial intelligence have amplified concerns over the systemic risk that generative AI may pose to human societies. That's to say nothing of the manifold greater risks, up to and including existential risk, presented by Artificial General Intelligence, wherein machine intelligence exceeds human intelligence across all domains.
The complexity-knowledge asymmetry: A reality check for technological optimism
At the core of this problem is the asymmetry between the complexity of the systems we like to tinker with and our capacity to anticipate and account for the effects of our tinkering. We’re reasonably good at predicting some first-order (or direct) effects, but when it comes to complex systems our capacity to anticipate second-, third-, fourth- or fifth-order effects (indirect, non-linear effects) is severely limited. Not only are we terrible at predicting the cascading consequences, we aren't even very good at retrospectively accounting for the full multiplicity of cause-and-effect dynamics that generate unintended consequences, let alone at incorporating accountability for effects beyond the second order into our governance structures (a state of affairs undoubtedly preferred by the pharmaceutical companies that produced Diclofenac, as per Part 1 of this series).
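To make the first-order/higher-order distinction concrete, here is a minimal sketch in Python (my own toy illustration with invented parameters, loosely echoing the Macquarie Island story above, not a model drawn from any of the cited cases): predators keep herbivores in check, herbivores graze vegetation, and 'removing the predator' has an obvious first-order effect and a much less obvious second-order one.

```python
# A toy two-level cascade (all parameters hypothetical): predators suppress
# herbivores, herbivores graze vegetation. We "intervene" by removing the
# predators and compare where the system settles.

def simulate(predators, steps=4000, dt=0.05):
    herbivores, vegetation = 0.5, 1.0
    for _ in range(steps):
        # vegetation grows logistically and is grazed by herbivores
        dv = 0.6 * vegetation * (1 - vegetation / 2.0) - 0.8 * herbivores * vegetation
        # herbivores grow by grazing, die from predation plus natural mortality
        dh = 0.6 * herbivores * vegetation - (0.4 * predators + 0.1) * herbivores
        vegetation = max(vegetation + dt * dv, 0.0)
        herbivores = max(herbivores + dt * dh, 0.0)
    return herbivores, vegetation

for label, p in [("with predators", 1.0), ("predators removed", 0.0)]:
    herb, veg = simulate(p)
    print(f"{label:18s} herbivores={herb:.2f}  vegetation={veg:.2f}")
```

The first-order effect of removing the predators (more herbivores) is easy to foresee; the second-order effect (vegetation collapse) only falls out of the coupled dynamics, which is exactly where our intuitions tend to fail.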
The cause of this asymmetry may, to some extent, point to a deficit in knowledge. That is, we do not know enough about the complex systems our lives are nested within, and inextricably interwoven with, to reliably predict how such systems are likely to respond to our interventions. However, this framing suggests that we can simply shimmy toward greater symmetry through the pursuit of scientific knowledge of complexity, as if enough knowledge were possible, and as if therein lies our salvation from ourselves.
There are two fundamental problems with this notion. First, our toolkit is poorly designed to produce such knowledge. Science, for example, is one of our most powerful tools of knowledge generation, yet as complexity scientist David Krakauer has said, 'the history of science is reduction'. The scientific endeavour has largely been about studying ever more minuscule parts of systems in order to understand the whole. But when confronted with complex living systems, which are characterised more by their fractal nests of both tightly and loosely coupled relations than by their individual constituent parts, science seems helpless. To borrow a metaphor from complexity thinker and former Chair of the Santa Fe Institute, Jim Rutt, much of science has involved studying the dancer, with far fewer efforts devoted to studying the dance.
The second problem with the notion of salvation through ever-greater scientific knowledge is that complexity science, although an important development in our quest for knowledge, is a relatively young discipline, and its power lies not so much in prediction as in understanding the principles that give rise to emergence in adaptive systems. Absent psychic abilities, prediction requires models rich enough to account for all or most of the variables and dynamics within a system.
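And even that may not be enough. A standard illustration (my own sketch, not drawn from the essay's sources) is the logistic map: a one-variable system whose update rule we know exactly. Start two runs a hair's breadth apart, the sort of measurement error any real observation carries, and they decorrelate within a few dozen steps.

```python
# The logistic map x -> r*x*(1 - x): a system we can model *perfectly*.
# Two trajectories starting 0.0001 apart become completely uncorrelated,
# so exact knowledge of the rule still doesn't buy long-range prediction.
r = 3.9  # a parameter value in the chaotic regime
x, y = 0.5, 0.5001
for step in range(1, 51):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: x={x:.4f}  y={y:.4f}  gap={abs(x - y):.4f}")
```

If a fully specified one-variable model defies prediction like this, the prospects for models of living systems with millions of coupled variables are sobering.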
However, despite increasingly sophisticated modelling, the gap between the consequences we can predict for our technological interventions in complex systems and the consequences that actually unfold seems to be growing. Modelling the second-order effects of a non-self-replicating intervention like antibiotics is difficult enough, but now we're dabbling with pattern-replicating technologies such as CRISPR and autopoietic AI agents, whose downstream impacts are orders of magnitude greater. As Daniel Schmachtenberger observed, not even nuclear weapons present an equivalent level of risk to pattern-replicating technology because, in short, 'nukes don't make more nukes that detonate themselves'.
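Schmachtenberger's replication point can be put in rough numbers. A back-of-the-envelope sketch (all figures hypothetical): a non-replicating intervention's footprint is capped by how many units we choose to deploy, while a self-replicating pattern compounds on its own each generation.

```python
import math

# All numbers hypothetical; the point is linear vs geometric growth.
deployed_units = 1_000_000      # footprint of a non-replicating intervention
growth = 2.0                    # each replicator yields two in the next generation

crossover = math.log(deployed_units, growth)
print(f"a single replicator overtakes {deployed_units:,} fixed units "
      f"in ~{crossover:.0f} generations")
for gen in (10, 20, 30):
    print(f"generation {gen:2d}: {growth ** gen:,.0f} copies")
```

The fixed deployment never grows; the replicator passes a million copies in about twenty doublings and a billion in thirty, which is the arithmetic behind 'orders of magnitude greater'.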
Wisdom and the fog of uncertainty
Noting the dangerous and seductive power of modelling, complexity scientist Joseph Norman suggests that the more uncertainty we have about our models, the greater the caution we need to exercise. It should give us pause when we realise that the complexity of our analysis and understanding of a system falls well short of the complexity of the system itself. Yet we live in a world where hubris and the impulse to immediate action have primacy. We are doubly bound as players in Moloch's sadistic game, compelled to act just to 'keep up', finding ourselves, for example, in an AI arms race where winning may well also see us lose, and lose catastrophically. To paraphrase Schmachtenberger, having acquired the power of the gods without the wisdom and love of the gods, we face the ever-ratcheting peril of self-termination.
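Norman's rule of thumb can itself be given a toy numerical form (again my own sketch, with hypothetical figures): two interventions may have the same expected payoff, yet if our model of the second is far more uncertain, its plausible worst case is far worse, and that tail is precisely where the argument for caution lives.

```python
import random

# Two interventions with the same expected payoff, but very different model
# uncertainty. A crude Monte Carlo over that uncertainty exposes the tails.
random.seed(1)

def worst_case(mu, sigma, trials=100_000, quantile=0.001):
    draws = sorted(random.gauss(mu, sigma) for _ in range(trials))
    return draws[int(trials * quantile)]  # approximate 0.1% worst outcome

print(f"well-understood system, 0.1% worst case:   {worst_case(1.0, 0.5):+.1f}")
print(f"poorly-understood system, 0.1% worst case: {worst_case(1.0, 5.0):+.1f}")
```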
I agree with Schmachtenberger on this front: it is indeed the cultivation of wisdom that is most urgent. And not because wisdom will enable an analytically driven untangling of that complexity. There is no prying it apart to deal with it discretely and separately. As Nora Bateson might say, everything is intersteeping everything else. "How do you separate the tea from the water?" she asks.
In complexity, we see misty mountain peaks in the distance which we wish to climb, but the expanse between those peaks and where we stand right now remains veiled in fog. Wisdom won't dissipate the fog, but it may make us alert to what is actually in front of us in the first instance, widening our perception of the possibilities immediately before us, so that we don't mistake our map for the actual territory. From that vantage point of widened perception, wisdom is the faculty by which we might best discern how to "do the next right thing" (thanks to Anna of Arendelle, and to Dave Snowden, who is ever encouraging us to 'move to the adjacent possible and then look again').
In subsequent posts, I intend to make the case that foresight, or as I'm increasingly persuaded by Riel Miller to call it, 'the discipline of anticipation', while by no means a panacea, can make a meaningful contribution to the widening of individual and collective perception about what adjacent possibilities lie before us. If you follow the rivers of our present crises upstream, you meet a crisis of consciousness, or as Fritjof Capra called it, a ‘crisis of perception’. And to the extent that this is true, the discipline of anticipation (or ‘foresight’) might make some small contribution to the cultivation of wisdom itself. (And I promise, dear reader, to also explore what on earth we might mean by 'wisdom'.)
Hit subscribe below to follow this exploration. And please drop me a line. I welcome all comments, contributions and references.