AI Has Achieved Escape Velocity
No governance mechanism can contain it or stop the progress towards AGI
Yesterday was ChatGPT’s first birthday.
In just one year, ChatGPT achieved something remarkable. The chatbot app introduced hundreds of millions of people around the world to the novelty and utility of generative artificial intelligence.
More importantly, it gave people their first experience of conversing with a computer in natural language. Not simple voice commands, like Siri or Alexa, but rather a genuine conversation that can be sustained over a long, meaningful dialog. You can communicate with a machine now.
The rapid consumer adoption of ChatGPT came as a surprise. It quickly transformed OpenAI from an obscure non-profit research lab into the publisher of the fastest-growing consumer software application in history. This put significant stress on the organization, a non-profit that was suddenly tasked with managing a for-profit subsidiary whose valuation ballooned to an estimated $86 billion in less than a year.
ChatGPT’s astonishing success also elevated OpenAI’s CEO Sam Altman to global prominence as both the herald of the new era of conversational computing and the Cassandra of artificial general intelligence (AGI) gone rogue. Altman spent several weeks this year conferring with presidents, prime ministers, and other national leaders about the perils of uncontrolled AI. Unusually for a tech CEO, Altman called for preemptive regulation by national governments to contain AI.
The phenomenal and unexpected success of ChatGPT caused tremendous growing pains inside of OpenAI, spurring intense internal debates about the core mission of AI safety and the trajectory of the for-profit business that was devouring resources required by the research team.
The system was showing signs of strain: ChatGPT went entirely offline for one day in mid-November, and new user registration for the ChatGPT Plus tier (powered by GPT-4) was suspended soon thereafter.
On top of these growing pains, Altman continued to push for more commercialization. At OpenAI’s Developer Day on November 6, Altman announced a new version of GPT-4 called GPT-4 Turbo, the introduction of proto-agents dubbed “GPTs”, and plans for a GPT Store where end users could post and sell their customized applets built on top of GPT-4. This last initiative threatened to consume even more of the compute resources that the security and research teams needed.
These internal conflicts culminated in a dramatic showdown when Altman was abruptly fired by the board of directors on November 17. Minutes later, OpenAI president Greg Brockman was removed from the board; he immediately quit, followed by three senior OpenAI researchers. Within 48 hours, nearly the entire workforce threatened to quit.
In one weekend, OpenAI went from being the leading firm in the entire AI industry, valued at $86 billion, to the brink of implosion. If cooler heads had not prevailed, it is possible that the company would have collapsed entirely within a week.
That did not happen. Instead, the investors in the for-profit subsidiary intervened. Just a few days later, with the backing of those investors and nearly the entire workforce, Altman mounted a successful campaign to return. He ousted the opposing board members.
The boardroom showdown made the front pages of newspapers around the world. Tech journalists live-blogged the blow-by-blow events over the weekend.
In less than a week, the public learned a great deal about OpenAI’s internal structures and tensions because the boardroom soap opera played out in full view of a global audience.
It’s a safe guess that, if ChatGPT did not exist, the boardroom struggle at OpenAI would have escaped the notice of most news organizations. Thanks to the phenomenal popularity of the chatbot, and to Altman’s efforts to generate broad awareness of the risks and benefits of AI, the whole world tuned in to follow the five-day drama.
That’s no accident. As I’ve written in this newsletter previously, the AI companies are competing fiercely under phenomenal pressure. Some sort of incident or mishap was bound to occur given the rapid proliferation of AI technologies and apps, though nobody would have predicted this exact sequence of events.
Besides, this is not merely a case of rubbernecking at a smoldering roadside car wreck. Given the high stakes, the whole world is correct to be concerned.
We all have a significant stake in the outcome.
What happens at OpenAI (and at the other companies competing to achieve artificial general intelligence) will certainly affect every person on the planet. That sounds like hyperbole. Trust me: if OpenAI achieves its goal of AGI, it is not.
We are all on the receiving end of AI, whether we welcome it or not. Therefore we deserve to have some voice in how these technologies function, and in how the companies building them are governed. The OpenAI drama was a wake-up call.
I believe that three things deserve careful consideration.
The development of ever-more advanced AI is already beyond the control of any one person or organization. Long before any lab has achieved anything remotely like AGI, this technology has already become unstoppable. The OpenAI drama demonstrates that clearly, as I will set forth below.
The governance of AI organizations remains poorly defined and represents the greatest potential point of failure. The consequences of failed governance could be catastrophic.
There are some very, very weird dynamics in the AI industry that the mainstream press still does not cover. Which is itself kind of weird.
In today’s letter I can only skim these topics. They remain knotty and complex. They won’t be resolved here, or anywhere else, any time soon. And that means we will have another opportunity to revisit them in the future.
For today, let’s break each item down, one by one.
AI has achieved escape velocity.
My claim: The development of ever-more advanced AI is now beyond the control of any one person or organization. Long before any lab has achieved anything remotely like AGI, progress towards this technology has already become unstoppable.
But how can that possibly be true?
The answer lies in the differential between the slowness of representative democracies and the increasing speed of private, for-profit industry.
My argument: In the span of one year, generative AI systems like GPT-4 and Stable Diffusion have improved at a blistering rate, outstripping the ability of any regulatory body to keep abreast of the advances.
Even the most powerful nation on the planet lacks the ability to check the pace or trajectory of AI development.
The US Congress is gridlocked by incompetent and dishonest charlatans who have no intention of passing any meaningful legislation. Even if they were capable of drafting an effective law to govern AI development, which they manifestly are not, nothing will happen in Congress until after the 2024 election. And then only if the Republicans lose control of the House of Representatives. Until then they will cosplay pointless investigations and phony impeachment procedures, accomplishing nothing. The current leadership lacks the ability to draft and pass any legislation that will make it through a Senate vote and get signed into law by the President.
A dysfunctional Congress means we will have no new laws to govern artificial intelligence. Awkward timing when there is an AI arms race underway.
That’s why the White House issued an Executive Order to govern AI. Political opponents grumble about presidential rule by executive decree, but that’s what Presidents are obliged to do when Congress cannot or will not do its job. However, the US President is limited to working within existing laws: very few of those laws address the most significant issues in artificial intelligence. It’s a start, but there’s no way that President Biden’s executive order will achieve anything we might consider adequate regulation of AI development without new laws from Congress.
Meanwhile, on the same day that Altman was terminated, the UK government revealed that it will not rush to pass new AI laws because of the risk that premature regulation might stifle innovation.
Even the Europeans seem to be losing their zeal to implement a thoughtful framework for AI regulation. That may be because the Europeans have wised up.
When Europe opts for tech regulation, they get unemployment. Meanwhile the USA passes no meaningful regulation and enjoys record levels of employment. Imposing AI regulation will cost Europe jobs and prosperity; tech companies will merely set up operations in other jurisdictions.
So it’s a Mexican standoff. No nation wants to go first. No nation wants to stifle innovation or kill jobs. This means no Western nation is likely to impose meaningful constraints on the rapid progress in artificial intelligence any time soon.
Perhaps an effective regulatory approach would consist of a coordinated effort to harmonize laws across all advanced countries, thereby preventing jurisdiction-shopping, much the way tax laws were aligned to prevent corporations from shopping for low-tax domiciles. But that would require sober legislators and thoughtful leadership, which the United States clearly lacks.
There is no chance this will occur in the next 18 months because of elections and other distractions.
18 months.
Remember, ChatGPT is only one year old today. GPT-4 is less than nine months old.
In a year and a half, how much more powerful will these systems become?
Legislatures may move slowly, but the pace of AI research will only increase. And the governance systems for private companies are no match for the irresistible lure of AGI.
There is no governance structure built to withstand this pressure.
My claim: The governance of AI organizations remains poorly defined and represents the greatest potential point of failure.
My narrative to support this claim: Journalists and pundits professed to be flabbergasted when the board of directors at OpenAI decided to terminate Sam Altman. During the first 24 hours after he was ousted, journalists floated speculative trial balloons: maybe there was financial mischief, or maybe some personal scandal, or some sort of devastating security breach. No, no and no.
The following day, Brad Lightcap, the COO of OpenAI, dispelled these rumors: “We can say definitively that the board’s decision was not made in response to malfeasance or anything related to our financial, business, safety, or security/privacy practices.”
Then, in the fever swamp of X, rumors swirled about Q*, which was supposedly a breakthrough on the path to AGI. But no, Meta’s Yann LeCun shut down that avenue of speculation immediately. Q* is not the reason why Sam was fired.
In fact, the board did provide a reason for firing Altman, but they fumbled the communication disastrously. The board’s odd statement that Altman “was not consistently candid in his communications” is a good example of the legalistic gobbledygook generated by a human attorney hoping to reduce the risk of a defamation lawsuit. The oddness of that phrase, coupled with the board’s refusal to comment further, left a news vacuum that invited a speculative frenzy.
Why would a board fire a charismatic CEO who had succeeded in elevating the company to global prominence, striking a deal with the biggest software company in the world, and raising the valuation to dizzying levels?
A normal board of directors, tasked with the fiduciary obligation of guarding the interests of investors, would have been thrilled by Altman’s performance.
But OpenAI’s board was given a different mandate. They had zero obligation to investors in the for-profit entity. Their obligation was to preserve the mission of the non-profit parent company, which is to ensure that artificial general intelligence is beneficial for humanity.
This structure was no secret. It was widely known. As Altman himself told many reporters in 2020, the board could fire him at any time if they disagreed with his approach. And they clearly did.
Though unified in mission and purpose, the staff and leadership at OpenAI are riven with disagreement about how to accomplish that purpose, how swiftly, and whether a more cautious approach might be preferable.
As CEO, Altman could not afford to let those considerations slow him from securing the resources the company needed. He had to anticipate the phenomenal strain placed on the company’s infrastructure by a rapidly growing customer base numbering in the hundreds of millions of users. Satisfying this demand required heroic amounts of capital to be invested in constantly expanding computing resources.
Altman was working like the fireman on a railroad engine, shoveling money like coal into the furnace of consumption as fast as possible. In the hyper-competitive AI field, slowing down even a little bit could mean falling behind rivals and losing the lead. That, in turn, would mean losing the first-to-market premium, which would diminish the startup’s appeal to investors and talent, and ultimately reduce the amount of capital he could raise. In the CEO’s view, that downward spiral would jeopardize the mission of OpenAI. So he was moving as fast as possible.
It does not take much imagination, then, to envision a scenario where Altman was far more focused on cutting deals with sovereign wealth funds to secure the necessary billions, and far less focused on managing an endless academic debate with board members who harbored misgivings about the pace of growth.
So Altman bypassed the board. If you or I were in that situation, we’d probably do the same thing.
In sum, the board of directors was misaligned with the management.
For the board, the decision was clear. Altman had to go.
Even though it struck outsiders as bizarre and self-destructive, this was a case of the board of directors actually doing its job. They followed the rules that govern the non-profit parent organization quite literally.
That was no aberration or mistake. That was their duty. Remember, this structure was set up by Altman himself. He knew it was necessary to establish a governing system that could prevent the company from going out of control, particularly under pressure from investors.
In retrospect, it’s easy to see how he got fired. A young man in a hurry, moving fast, striking big deals with big investors, not pausing to wrangle or cajole his board members. He took the directors for granted, assuming he could persuade them later, after he closed yet another deal at a nosebleed valuation.
It’s easy to see why the board members were concerned. Presumably, from their viewpoint, the young CEO had developed an unpleasant habit of jetting back to San Francisco to present them with one fait accompli after another. He was moving too fast for them. They felt like he was out of control. Or less than candid, to use their phrase.
What it means: the kooky unconventional governance mechanism actually worked. The board did exactly what they were supposed to do when something threatened to undermine the mission of the non-profit.
Except the decision didn’t stick.
The board made at least one huge blunder. They failed to meaningfully consult with the investors in advance. The board let OpenAI’s major investor and primary customer Microsoft know about the termination just one minute before the news was published.
That was profoundly unwise.
By all accounts, Satya Nadella was incandescent with rage when he heard that Altman and Brockman were gone. Not only was his $13 billion investment jeopardized; so too was Microsoft’s entire product line of Copilot-enhanced software. The Redmond giant was in the process of rolling out a deep integration with OpenAI’s GPT-4 across its full range of software offerings.
There was never a possibility that Microsoft would allow the board’s decision to stand. Nadella took action immediately, intervening forcefully over the weekend of November 18 to bring the parties back together for reunification talks. But the talks failed because the board refused to relinquish authority or change the rules of governance.
Nadella did not stop there. His next move was to hijack the entire workforce. Under the terms of the Microsoft investment, Nadella already possesses a source-code license to OpenAI’s software, including the GPT-4 large language model and weights. All he needed to do was collect the talent as they walked out the door.
Altman was the bait to lure the rest of the staff to Microsoft.
Altman and Brockman were “hired” by Microsoft, and about one day later, nearly 95% of OpenAI’s employees signed a letter informing the board of directors that they were about to resign en masse.
To add pressure, Microsoft’s CTO posted an open invitation for any OpenAI employee to get a comparable job at a matching salary.
Within 48 hours of Altman’s termination, OpenAI had begun to reconstitute itself as a Microsoft entity, automatically.
The board of directors was beaten soundly, outfoxed by one of the most aggressive CEOs in the technology industry.
OpenAI’s board won the battle, but they lost the war.
If the board’s decision had stuck, OpenAI’s workforce would have melted into a dew that evaporated in the dawn light and reappeared the following day intact inside a new Microsoft lab. And this time, the new organization would not be constrained by the high-minded limitations that govern the non-profit OpenAI. And that would have defeated the whole purpose of terminating Altman in the first place.
In the end, the board blinked.
Altman returned in triumph as CEO of OpenAI and dispatched the board members who attempted to fire him. He has already begun to rebuild the company, freshly empowered with his new mandate.
What happens next? The composition of the new board of directors gives us a clue.
It’s telling that the members of the newly-constituted OpenAI board include:
Bret Taylor, the former co-CEO of Salesforce, who happens to be an expert in enterprise-grade software products.
Larry Summers, the quintessential DC apparatchik and controversial former Treasury Secretary. Summers is most famous for pushing through an aggressive program of financial deregulation under President Clinton, including the 1999 repeal of the Glass-Steagall Act that many blame for the 2008 financial crisis. Suffice to say, Summers is no fan of regulation.
With Summers on the board of OpenAI, it’s a safe bet that Sam Altman will spend less time hobnobbing with foreign leaders, clamoring for regulation.
Microsoft also has a non-voting seat on the board, which fulfills Nadella’s vow that he will never be surprised again by OpenAI. (Note to self: never cross Satya Nadella).
In sum, the money won. The “go faster” crowd won. The commercialization crew won.
Who lost? The voices of caution. The “go slow” crowd.
This stunning denouement holds implications for the entire software industry. OpenAI’s competitors have no choice but to speed up development to stay in the race as the leader sheds the cautious governance structure and welcomes a new board of directors who stand for commercialization and deregulation.
The AI arms race is about to get bigger, faster, and more intense.
A couple of observations:
First, recall that the classic Hollywood nightmare “rise of the robots” scenario involves a super intelligence that escapes the lab and grows out of control. This is also precisely the scary scenario described in the opening chapter of Max Tegmark’s book about AI, Life 3.0.
Now consider what happened at OpenAI. This particular crew was nowhere near AGI, and yet the nightmare scenario is exactly what occurred. It wasn’t the software that escaped the lab. It was the human beings who were developing the software. They threatened to escape the lab, taking their AI skills and research with them.
The board thought they had stopped the out-of-control growth by cutting off the head of the monster, but the headless monster could simply walk out the front door and stitch itself back together in a new lab across the street.
What the OpenAI drama demonstrates is that any AI research team close to a breakthrough towards general intelligence cannot be stopped by any known form of corporate governance. The money always wins.
An “empowered board” simply is not enough to stop the process, because a competitor will swoop in, hire the whole team, and set them up in a new lab.
I am not saying that artificial general intelligence is imminent. I don’t believe that. But I do believe it is now inevitable, not for any scientific reason but because of commercial and competitive dynamics.
The arms race is out of control, and there is no regulation or board structure to govern it properly.
So when you read pundits who muse about AI “alignment” and “governance”, understand that there is no known mechanism that can slow this process down.
Even if we could somehow achieve consensus in the United States to proceed with caution, it would not matter, because the moment US tech firms decide to slow down, every rival company in a nation with adequate infrastructure and looser regulation will “pull a Nadella” by signing up the talent and airlifting the entire team to an unregulated jurisdiction.
Second, no investor in an AI startup and no CEO of an AI startup will ever again consider encumbering themselves with the clumsy, jerry-rigged governance structure that OpenAI adopted. That is over and done with.
In retrospect, this structure was creative, well-intentioned and maybe even honorable, but it was insufficient. It failed utterly and completely to achieve what it was intended to do.
For a lesser company with lesser leadership, the drama could have been a fatal blow, completely destroying shareholder value. It is a testament to Sam Altman’s leadership that OpenAI ultimately did not lose a single employee during the drama.
We will never see that kind of governance structure again. No investor will ever be caught off guard by a board that bears no fiduciary obligation. From now on, every investor will follow Satya Nadella’s vow to never be surprised again. The people with the money will ensure that they get what they paid for.
There’s more. Customers will take steps to mitigate risk, too. No company is going to remain dependent upon OpenAI as a sole vendor. Especially Microsoft.
Corporations will hedge their bets by experimenting with multiple AI vendors, and probably adopt open-source models, too. Even individual freelancers and sole proprietors are evaluating alternatives after GPT-4 crashed two weeks ago. Nobody who is paying attention will build a dependency on an ungovernable AI startup into their own business operations.
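To make that hedging concrete, here is a minimal sketch of what a multi-vendor fallback might look like in practice. Everything in it is hypothetical: the provider functions named in the usage example are placeholders for whatever vendor SDKs or self-hosted models a business actually relies on, not real API calls.

```python
# Hypothetical sketch of a multi-vendor fallback strategy.
# The provider functions are illustrative placeholders, not real SDK calls;
# swap in the actual vendor libraries (or a self-hosted model) you depend on.

from typing import Callable, List


class AllProvidersFailed(Exception):
    """Raised when every configured AI vendor is unavailable."""


def complete_with_fallback(prompt: str,
                           providers: List[Callable[[str], str]]) -> str:
    """Try each provider in priority order; return the first success."""
    errors = []
    for provider in providers:
        try:
            return provider(prompt)
        except Exception as exc:  # outage, rate limit, or a vendor imploding
            errors.append((getattr(provider, "__name__", repr(provider)), exc))
    raise AllProvidersFailed(errors)


# Usage (hypothetical provider names): primary vendor first, then a rival
# API, then a local open-source model, so no single boardroom crisis can
# take your product offline.
#
# result = complete_with_fallback(
#     "Summarize this contract...",
#     [call_openai, call_rival_api, call_local_open_model],
# )
```

The design point is the abstraction itself: it costs a few lines, and it converts a single catastrophic dependency into an ordinary engineering trade-off.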
In the short run, risk mitigation will be good for innovation because it will spread the revenue to more startup companies. But in the longer term, diversification will accelerate the progress towards artificial general intelligence.
In sum, investment into AGI is accelerating at the exact moment when national governments are stalling. This gap will increase in the next 18 months.
The AI industry is weird. Really weird.
My final claim is that there are some awfully weird dynamics afoot in the AI industry. The mainstream press still does not cover this story sufficiently. Many months ago, in the earliest editions of this newsletter, I highlighted some of those strange dynamics. You can read those articles here, here, here, here, here and here.
It turns out that I was just scratching the surface. The OpenAI fracas is another example of the pattern of weirdness that characterizes this industry.
I have much more to say about this topic, because it’s funny and strange and possibly important, but it’s mighty late at night here and this newsletter is already way too long.
So I’ll just share this link to a truly snide perspective on the OpenAI debacle that highlights the weird cult of AI Doomers. If you are not familiar with this quasi-religious conviction among AI researchers, you might find this snark amusing.
I’ll see you in the near future!