Today, the Future of Life Institute published an open letter calling on "AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4". The letter has been signed by a few people you might know, including Elon Musk, Yuval Noah Harari, Steve Wozniak, and a host of AI and machine learning luminaries.
I thought of the Amish when I read the letter. It’s a misconception that the Amish are Luddites who eschew all modern technology. They don’t. They simply have a very strong and deliberate vetting mechanism for evaluating the costs and benefits of any technology they accept into their communities and households.
In particular, they evaluate the extent to which any prospective technology will support or undermine the things they value above all: in their case, family and faith. In Amish country in Lancaster County, Pennsylvania, for instance, "when a new technology comes along, its effect on the church and community is examined." They may, for example, reject smartphones because of the moral dangers they perceive (perhaps quite rightly) to be associated with continuous internet connectivity, social media, and the like. But some Amish communities are fine with ‘dumb phones’, which enable staying in touch with family without turning family members into screen-distracted scroll-addicts at the dinner table.
I thought of the Amish again when I saw one of Tyler Cowen’s recent posts (see here), which had a familiar “eh, mass disruption from AI is inevitable so let’s just plough on” vibe to it. I don’t buy Cowen’s conclusion that despite the risks, we should “take the plunge”, or indeed that “we already have taken the plunge”. I'd hazard a guess that the Amish might beg to differ too, and might argue that they (and perhaps we) still have some agency to consciously exercise at this moment in history. We haven't taken the plunge. But in case you missed it, we're presently sprinting at an accelerating pace through thick fog, and we've been warned there's a non-trivial chance that we're headed toward a precipice, one with consequences ranging from severely disruptive to existential.
A 2022 survey of AI and machine learning researchers found that nearly half of respondents put at least a 10 percent chance on the long-run effect of advanced AI on humanity being "extremely bad" (i.e. human extinction). Even falling short of human extinction, there's still plenty of cause for pause. This week Goldman Sachs released a report assessing that generative AI could expose the equivalent of an estimated 300 million full-time jobs globally to automation. Some detractors will claim that AI doesn't destroy jobs, it just creates shifts: opportunities to transition to better, safer, easier jobs. Fine, and maybe there's an element of truth to this. But it also trivialises the human cost of said "transition", and the non-trivial matter of the social fabric being shredded as society undergoes such a shift. There are points in history where technology has shredded the social fabric on a far smaller scale than 300 million jobs, and over a longer time period than we'd likely see with AI. For example, it took over 30 years, from the 1950s to the 1980s, for the US Rust Belt's share of all US jobs to decline from 43 percent to 28 percent.
Francis Fukuyama called this episode of deindustrialisation and manufacturing decline the Great Disruption, with affected states experiencing a population exodus during this period, and well into the 21st century. The most affected states also happened to demonstrate "uncharacteristic voting" patterns in the 2016 US presidential election, and turned out to be decisive in the victory of Donald Trump. Pfff... that's got to be coincidence, right?
Would it be difficult to pause? You bet. Is it complicated because the accelerating development of AI has geopolitical implications? Yep. And won't the market just run roughshod over any such endeavour anyway? Perhaps, but we’ve become much too comfortable with the notion that we’re all subject to inevitable market forces (US President Teddy Roosevelt didn't think the market should trump the broader societal good). And anyway, is a global pause even possible in the face of an emerging risk of this magnitude? Sure it is; we proved that in early 2020, didn’t we?
Folks are drawing parallels to previous technologies and past false alarms about tech-inflicted harm. I'm not sure there are any neat historical parallels to draw, though. We're in entirely new territory and the stakes are high. It's hard to think clearly about the future at the best of times, let alone to extrapolate the implications of autopoietic artificial intelligence whose downstream impacts are likely orders of magnitude greater than anything we've seen before. As Daniel Schmachtenberger observed, not even nuclear weapons present an equivalent level of risk to pattern replication technology because, in short, 'nukes don't make more nukes that detonate themselves'.
I've signed the open letter for a 6-month pause. Make no mistake, I'm enamoured of the technology. I use it (often) and its benefits are without precedent, but it also seems wise to slow to a walk, at least until the fog clears a little.