OpenAI — Keep calm and avoid drama

Moisés Cabello
4 min read · May 19, 2024

The never-ending drama of OpenAI

Recently, Ilya Sutskever, one of OpenAI's co-founders and a former board member, resigned. Sutskever was the godfather of an AI safety team that ended up being shut down. One of that team's employees expressed their disappointment on Twitter, explaining that the company's leadership, impatient to move fast, ended up defunding a safety team that advocated for a more cautious approach (and that had hoped to get 20% of OpenAI's computing power for its research).

This has set off a string of alarming news stories about a company supposedly going full steam ahead towards Artificial General Intelligence (AGI) without looking back or taking safety into account, with much of the focus on Sam Altman. He partly deserves it: for two years he has talked about AI in an alarmist and grandiloquent way to spook regulators and hamper new entrants, especially open-source LLMs. The fact that people now think humanity is at risk because of the news of recent days is also his fault.

I thought these apocalyptic narratives had died off last year, but here they come again, even more disconnected from reality, because in my opinion the landscape has changed a great deal.

I will focus on two main points.

The first is that the mystique of a year ago, of AGI being just around the corner or Skynet being locked up in some OpenAI basement, has collapsed because the limits of scale have constrained the growth and evolution of LLMs. OpenAI has been sweating blood for over a year to make GPT-4 as computationally inexpensive as its predecessor, and while it has made notable progress, it still hasn't succeeded. AI still has a lot of room to grow in market share, so reducing consumption is vital. Even indulging the fantasy that they had created Skynet, they wouldn't be able to deploy it.

LLMs are becoming mundane and predictable commodities that require a slow and laborious optimization process to improve. We'll have to see what kind of leap GPT-5 delivers, but it will be an expensive model that, even with a paid subscription, we will only be able to query a handful of times every X hours. Eventually it will be optimized. But we have to abandon the idea that every year will be like the leap from 2022 to 2023, when ChatGPT appeared.

The dangers of LLMs are evident, such as disinformation, impersonation, or the impact on certain jobs, but that list has barely changed in two years, and it no longer justifies a team like this one taking up 20% of the computing power of a company the size of today's OpenAI, something that doesn't happen at any competing company either. Both the grandiloquence and the resulting fears about the advance of AGI are based not on the reality of LLM development but on the marketing language that often comes from the company itself.

This brings us to the second point. A year ago, the mystique of AGI revolved around OpenAI, which was far ahead of the competition and was the only "authoritative voice" on what AI would bring. This has also changed: its victory over open-source models is increasingly pyrrhic, and competing models like Claude Opus are manifestly superior in some respects. So OpenAI is no longer the medium that speaks to us from the future. It is a very important player in the current AI ecosystem, but just that: one important player.

Something similar can be said about the focus on Sam Altman. Some see Sutskever's departure, and that of several members of this team, as a schism within today's OpenAI. If it is a schism, it happened six months ago, not now, and it was resolved by expanding OpenAI's board of directors with many more people, individuals with experience in government and in large businesses and organizations. OpenAI's progress is more closely watched from within than ever. Altman himself was temporarily removed and then reinstated, and an internal investigation followed.

What’s happening these days is simply the last gasp of that autumn drama.

Let's remember what OpenAI is today: a mid-sized company in its field, developing several leading generative AI models and tools built on them, a company that has lost much of its competitive advantage relative to the multinationals it competes with, and that right now needs people to keep believing it is the runner carrying the torch that will ignite AGI.

In summary: the current narrative that OpenAI has, or soon will have, Skynet in the basement, ready to be unleashed by a single impatient person (Sam Altman), is false on every point. LLMs are not that, nor does it seem they will be for a while; Sam Altman is not the sole person leading OpenAI; and OpenAI is not the only one with something to say about the advance of AI, even though it benefits from being seen that way.

So,

Keep calm and avoid drama.
