Why the AI petition flopped, and why Sam Altman says he is afraid.
The final installment in a series of articles about the weird aspects of the AI arms race.
This is the fifth in a series of articles that considers the rapid rollout of consumer-facing artificial intelligence apps.
The race to dominate the next phase of computing is disorienting for a number of reasons. In this series of articles, we break those reasons down in order to peer beyond the fog of AI hype.
In previous installments, we considered the weirdness of accelerating change, the rapid increase in GPU performance as an enabler of mass-scale AI deployments, and the bizarre emergent aspects of AI apps built upon large language models, including occasional split personalities and expressions of malign intent (!). Yesterday’s article considered how the arms race between Microsoft and Alphabet intensifies these odd dynamics. If you are new to this newsletter, you can find the previous articles in the archive for the Owner’s Guide To The Future.
Weird Thing #8:
The petition failed to achieve a pause, but it did manage to start a great big pillow fight for nerds.
Given the gold rush mentality, the intense competition, and the uncertainty about the reliability of the technology, it was inevitable that some objections to the rapid deployment of consumer-facing artificial intelligence apps would be voiced.
That’s why no one was surprised when a petition demanding a six-month moratorium on training AI systems more powerful than GPT-4 was published on March 22.
What was surprising were the signatures. This is not a letter drafted by Luddites or ignoramuses. Some of the signatories are among the most respected names in the field of artificial intelligence.
The letter was signed by leading AI researchers such as Demis Hassabis, Mustafa Suleyman, Stuart Russell and Max Tegmark; by AI safety and ethics experts Gary Marcus, Tristan Harris and Yoshua Bengio; and by famous technologists including Tesla CEO Elon Musk and Apple co-founder Steve Wozniak.
The petition, entitled “Pause Giant AI Experiments: An Open Letter”, was published on the website of the Future of Life Institute. It read in part:
“Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control. Powerful AI systems should be developed only once we are confident that their effects will be positive, and their risks will be manageable.”
When the letter was published, it carried more than 1,000 signatures. At the time of this writing, there are more than 26,000.
The sentiment was picked up and echoed in mainstream media. Even Peggy Noonan, whose only experience in the field of artificial intelligence was being a speechwriter for Ronald Reagan, chimed in. Writing in the Wall Street Journal, she argued that Silicon Valley tech giants lack the moral fiber and ethical grounding to serve as stewards of this technology, and that a six-month moratorium is not enough.
Almost immediately, blowback against the petition began.
AI heavyweights Andrew Ng, the founder of DeepLearning.AI, and Yann LeCun, Meta’s VP and Chief AI Scientist, derided the letter in a video. Ng said that the proposed pause could cause “significant harm.” LeCun likened the pause to “inventing the seatbelt before the car.”
Meta CTO Andrew Bosworth deemed calls for a moratorium unrealistic. Investor Bill Ackman lamented that a pause would only give “the bad guys” time to catch up.
Bill Gates said that a six-month pause would be unenforceable: “I don't really understand who they're saying could stop, and would every country in the world agree to stop, and why to stop,” he told Reuters.
Some researchers refused to sign it. Some made fun of it. And some pundits used the letter as the perfect occasion to tee up an opposing argument: “Let’s speed up AI.”
Within days, some signatories began to backpedal, distancing themselves from the letter. Others grew defensive. Gary Marcus posted a defense on his blog, invoking the justification he provided to the New York Times: “We have a perfect storm of corporate irresponsibility, widespread adoption, lack of regulation and a huge number of unknowns.”
The letter had the desired effect of triggering a broad discussion about the looming impact of widespread artificial intelligence on society. Unfortunately, this chatter accomplished little more than spreading even more fear and uncertainty.
On March 29, the decision theorist and AI alignment researcher Eliezer Yudkowsky decided to crank up the rhetorical heat a few more degrees by publishing his own screed on Time Magazine’s website, entitled “Pausing AI Development Is Not Enough. We Need to Shut It All Down.”
Arguing that the letter did not go far enough, Yudkowsky likened artificial intelligence to nuclear weapons and shared a wild vision of a humanity-wide extinction event.
Some excerpts from Yudkowsky’s complaint:
The key issue is not “human-competitive” intelligence (as the open letter puts it); it’s what happens after AI gets to smarter-than-human intelligence. Key thresholds there may not be obvious, we definitely can’t calculate in advance what happens when, and it currently seems imaginable that a research lab would cross critical lines without noticing.
Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.”
If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.
We are not prepared. We are not on course to be prepared in any reasonable time window. There is no plan. Progress in AI capabilities is running vastly, vastly ahead of progress in AI alignment or even progress in understanding what the hell is going on inside those systems. If we actually do this, we are all going to die.
In an interview conducted by podcaster Lex Fridman, Yudkowsky expanded on his apocalyptic vision of a species-wide extinction event as the most probable outcome of superhuman artificial intelligence. Yudkowsky’s full “list of AGI lethalities” can be found here.
Others rang the alarm bell. This vlogger compared the introduction of artificial intelligence into human society to the destructive impact caused by an invasive species in a pristine ecosystem.
What are we to make of this apocalyptic vision? How seriously should we take these warnings?
One aspect of the letter provides a clue. The organization that published it, the Future of Life Institute, is an unusual organization. Managed by AI researcher Max Tegmark, FLI is affiliated with a small group of scholars at Oxford led by the Swedish philosopher Nick Bostrom, the founder of the Future of Humanity Institute. It is financed by multimillionaire Skype co-founder Jaan Tallinn. Elon Musk contributed $10 million to the cause and subsequently introduced Bostrom and the concept of AGI to OpenAI CEO Sam Altman.
What connects these people is a philosophy called longtermism, which is unknown to most people but has some devoted adherents within tech circles. This philosophy is also aligned with the “effective altruism” movement, whose most prominent supporter was the discredited erstwhile billionaire crypto whiz kid Sam Bankman-Fried. And it is somehow tangled up with other Silicon Valley cult-like faiths such as transhumanism, digital humans and mass migration to Mars.
On the surface, longtermism is about considering the value of future human lives in the context of today’s decision-making. Adherents believe that society’s greatest obligation is to maximize the possibility that future generations will enjoy life, liberty, and agency. That’s why they place top priority on eliminating any potentially catastrophic extinction-level event that might put humanity in jeopardy.
No matter how small the probability of such an event, to the longtermist, the merest prospect of existential risk must take priority over any other kind of natural or manmade disaster. All other risks, such as the very current threats of wars, pandemics, and climate change, can be survived by at least some portion of humanity: to a longtermist, such survivability means that these risks are far less important. Anything short of extinction is a low priority problem.
The flaw in this reasoning, according to critics, is that it leaves no room for ameliorating the current situation. Resources are prioritized for hypothetical low-probability future events instead of dealing with the very real issues of inequality, poverty, disease and misfortune that are already here, and that are amplified by the algorithms and AI already deployed today.
That is why some critics consider longtermism “the world’s most dangerous secular credo.”
Among the most vociferous critics of the letter was Timnit Gebru, the AI ethics researcher who was clumsily fired by Alphabet after publishing a research paper that was sharply critical of LLMs. (Alphabet claimed Gebru’s paper did not meet its standards for publication.)
According to Gebru, her research provided the basis for the reasoning in the FLI letter, but the longtermists at FLI co-opted her argument and drove it to conclusions that diverged sharply from her own.
Gebru and her colleagues at the DAIR Institute are committed to exposing the bias and unfairness already present in artificial intelligence today. In DAIR’s response to the petition, they wrote:
While there are a number of recommendations in the letter that we agree with (and proposed in our 2021 peer-reviewed paper known informally as "Stochastic Parrots"), such as "provenance and watermarking systems to help distinguish real from synthetic" media, these are overshadowed by fearmongering and AI hype, which steers the discourse to the risks of imagined "powerful digital minds" with "human-competitive intelligence." Those hypothetical risks are the focus of a dangerous ideology called longtermism that ignores the actual harms resulting from the deployment of AI systems today. The letter addresses none of the ongoing harms from these systems, including 1) worker exploitation and massive data theft to create products that profit a handful of entities, 2) the explosion of synthetic media in the world, which both reproduces systems of oppression and endangers our information ecosystem, and 3) the concentration of power in the hands of a few people which exacerbates social inequities.
While we are not surprised to see this type of letter from a longtermist organization like the Future of Life Institute, which is generally aligned with a vision of the future in which we become radically enhanced posthumans, colonize space, and create trillions of digital people, we are dismayed to see the number of computing professionals who have signed this letter, and the positive media coverage it has received. It is dangerous to distract ourselves with a fantasized AI-enabled utopia or apocalypse which promises either a "flourishing" or "potentially catastrophic" future [1]. Such language that inflates the capabilities of automated systems and anthropomorphizes them, as we note in Stochastic Parrots, deceives people into thinking that there is a sentient being behind the synthetic media. This not only lures people into uncritically trusting the outputs of systems like ChatGPT, but also misattributes agency. Accountability properly lies not with the artifacts but with their builders.
To summarize, far from fostering consensus or even clarifying matters, the FLI petition exposed deep divides within the AI safety research community and deepened existing differences among the groups critical of the rapid deployment of LLMs.
All told, there are many valid criticisms of the rushed deployment, but the petition did nothing to galvanize broad support for any particular response or solution. It was a tactical failure.
The reaction among the public seemed to be a kind of stoic fatalism.
My personal impression is that the general response to the AI arms race is a collective shrug, as if everyone outside of a small circle of tech companies has concluded:
“This is inevitable. There is nothing we can do to stop this, or even slow it down. Big Tech is pushing artificial intelligence on society. There are no meaningful checks on the power of the big technology companies. They are going to proceed no matter what. And even if the US government somehow managed to take some action to stop them, that would just confer an advantage onto China, or Russia, or some bad guys.”
Writing in Marginal Revolution, Tyler Cowen makes the case for AI inevitability and fatalism with two broad points:
1) For the past 70 years, American hegemony has stabilized much (not all) of the world.
2) There has been no truly radical technological change in that entire time.
According to Cowen, because of 1 and 2, generations have grown up in a world of relative stability (compared to previous eras) that he describes as “a bubble outside of history”.
Cowen then argues that #1 might unravel soon, depending upon geo-strategic events in Ukraine and Taiwan, and that #2 is about to unravel thanks to the advent of artificial intelligence, which, he argues, represents a truly major, transformational technological advance.
“But since we are not used to living in moving history, and indeed most of us are psychologically unable to truly imagine living in moving history, all these new AI developments pose a great conundrum. We don’t know how to respond psychologically, or for that matter substantively. And just about all of the responses I am seeing I interpret as “copes,” whether from the optimists, the pessimists, or the extreme pessimists (e.g., Eliezer). No matter how positive or negative the overall calculus of cost and benefit, AI is very likely to overturn most of our apple carts, most of all for the so-called chattering classes.
“Our previous stasis, as represented by my #1 and #2, is going to end anyway. We are going to face that radical uncertainty anyway. And probably pretty soon. So there is no “ongoing stasis” option on the table.
“I find this reframing helps me come to terms with current AI developments. The question is no longer “go ahead?” but rather “given that we are going ahead with something (if only chaos) and leaving the stasis anyway, do we at least get something for our trouble?” And believe me, if we do nothing yes we will re-enter living history and quite possibly get nothing in return for our trouble.
“Besides, what kind of civilization is it that turns away from the challenge of dealing with more…intelligence? That has not the self-confidence to confidently confront a big dose of more intelligence?”
Cowen’s article captures, in my view, the default position of American citizens and regulators: the conviction that everything is going to be destabilized anyway, no matter what we do, so we might as well choose our poison and launch a disruption of our own making rather than wait for one we did not choose.
After decades of technological disruption, we seem to have internalized the logic that justifies the transgressions of Big Tech as progress: disruption is inevitable, there’s nothing we can do to stop it, and it might turn out to be good for us.
So why wait? Let’s go!
Weird Thing #9:
Stoking fear of AI in order to sell more AI
Perhaps the weirdest aspect of this pell-mell rush towards AI supremacy is that the CEOs of the leading companies constantly talk about their personal fear of AI.
That’s right, the people who make the product are publicly fretting about the product.
Of course, every tech CEO routinely recites an obligatory incantation about AI safety and ethical precautions (even if they seem to disregard those ideals when it comes to the AI arms race).
But the CEO of OpenAI is going much further.
Sam Altman frequently expresses fear of his own product. “I’m a little bit scared,” he said in an ABC interview, in which he discussed his reaction to Vladimir Putin’s views on AI and expressed his own concerns that AI will lead to more cybercrime, mass-scale disinformation, and widespread unemployment.
These remarks recapitulate points made in a blog post Altman published on February 24, 2023. In it, Altman says his company’s mission is “to ensure that artificial general intelligence [AGI] – AI systems that are generally smarter than humans – benefits all of humanity”.
Altman also acknowledged the potential risks of hyperintelligent AI systems, citing "misuse, drastic accidents and societal disruption". The OpenAI CEO went on to detail his company's approach to mitigating those risks.
What’s the point? Is Altman creating a paper trail so that he can claim that OpenAI has always supported regulation when the lawsuits are filed? Is the company building a track record that can be used as a defense when executives are called to Congress for a wire-brushing by geriatric Senators with a feeble grasp of the technology but a keen instinct for a useful TV soundbite?
To some, this seems to be a cynical exercise to sow fear and instill belief in the inevitability of AGI, all designed to boost the perception of OpenAI as the leader in the segment and perhaps the only company equipped to deal with the technology safely.
As the Los Angeles Times noted, fear sells.
Altman’s message echoes the sales pitch of 1930s arms dealers who toured the capital cities of Europe between the world wars, stoking demand by inflaming the fear of ever-more-powerful weapons in rival hands, and thereby raising the odds of a conflict.
The product introduces a dangerous instability that can only be balanced by buying more of the product.
The merchants of AI are merchants of fear. It’s weaponized FOMO.
So that’s it for this first series on AI. I hope you’ve enjoyed it.
By breaking down the weirdness of the AI gold rush into separate aspects, I hope to provide context for understanding the rapid developments, and a way to separate vendor sales pitches from political debates and actual substantive progress. This clears a space to consider the future impact and risks.
In the next series of articles, we will explore some of the problematic aspects of the race to deploy AI at scale.
These include:
The publishing industry and copyright oligopoly seem to be focused on the wrong problems. Generative AI will have a very different impact on the publishing business.
The harvesting of open culture and the enclosure of the digital commons may diminish collective knowledge.
The likely impact on jobs is a) forced upskilling and b) perpetual reskilling. You may end up working for an AI.
The intensification of existing disparities and inequalities.
A dubious strategy to wedge a new digital platform into every business.
You are reading The Owner’s Guide to the Future. This article is the last in an opening series titled “The Deep Weirdness of Artificial Intelligence.” We examined the rapid deployment of consumer-facing AI apps from several angles in an effort to peer past the hype and the fearmongering that have dominated recent media coverage.
We hope that you’ve found this useful in developing your own understanding of what is happening in the tech industry today. If you did, why not share it with a friend or colleague?