OpenAI post-drama: Change & Fear

Moisés Cabello
4 min read · Nov 23, 2023


OpenAI, the organization behind ChatGPT, has experienced an internal storm this past week that I will not detail here, as it has been thoroughly scrutinized. Instead, I would like to focus on what has changed due to this crisis.

Change

In the world of technology, OpenAI has stood out as a leader in the development of general-purpose AI and has been a key provider of GPT technology for Microsoft products. The leadership crisis that shook OpenAI this week has been a reflection of the crucial role that artificial intelligence has acquired in our society. The organization, which came close to disbanding due to internal differences exacerbated by external influences, has demonstrated that AI has become an essential strategic element, almost as important as energy for the near future.

The conflict at OpenAI intensified due to, among other things, the figure of Helen Toner, whose alleged ties to China and whose suggestion that the company be shut down rather than stray from its original mission caused a stir on social media. Although Toner is no longer on the board, her impact on the organization's restructuring has been significant.

In response to the crisis, the composition of OpenAI’s board of directors has changed. The inclusion of experts experienced in running technology companies promises new stability. The board will likely soon include an observer from Microsoft, which holds a 49% stake in OpenAI’s for-profit arm and whose investment was shaken in recent days. But the addition that has caught my attention most is Larry Summers, former U.S. Treasury Secretary. Many see his presence as a link between Washington and OpenAI, much as Condoleezza Rice’s entry onto Dropbox’s board was viewed during the Snowden scandal years.

This change in the administration of OpenAI reflects a significant evolution in the organization. It is no longer just a non-profit entity focused on pure innovation. The new OpenAI emerges as an undisputed leader in the field of general AI, positioning itself as a strategic pillar of American commerce and technology. This change reflects not only the company’s profitability ambitions (many people focus solely on the transition from non-profit to for-profit) but also its growing relevance in a global context where AI will matter more than ever. For instance, the Japanese government (a country with a powerful cultural industry) recently decided to grant immunity from copyright infringement to its AI development professionals so as not to hinder their growth. U.S. courts are following a similar path. We are not just talking about Internet services, but about something far more transcendental.

The transformation of OpenAI is also a mirror of the evolution of the AI industry as a whole. The speed at which this organization has gone from being an innovative, small, young startup to a dominant player on the global stage is astonishing. In the world of AI, everything moves at fast-forward, including vital corporate phases.

Fear

Given this importance, we need to reconsider some things. For example, the ease with which certain sectors have assimilated apocalyptic narratives about Artificial Intelligence.

The AI now being born is no small thing. It is not called the “Fourth Industrial Revolution” for nothing: it can have a huge impact on automation and employment, and it is natural to watch its progress very closely.

But there are ridiculous clichés from cheap science fiction, and shapeless fears bordering on cosmic horror, about the future of Artificial Intelligence that are completely disconnected from the reality of its development. I do not ignore that these narratives have sometimes come from within: Sam Altman himself played up AI fears when he wanted to scare legislators into torpedoing competitors and open-source initiatives while his own company grew.

But those who wield these terror narratives are unable to connect the dots between the present and that terrible future. Instead of embracing the most ridiculous extreme of AI doomerism (or, equally, the technophilic naivety of people like Marc Andreessen), there is a better plan: monitor the bumps in the road as we move forward rather than erect barriers to the good out of literary fears of the bad. In a geopolitical context where third powers have invested notable effort in, for example, dissuading the development of nuclear energy in the West through apocalyptic, radioactive narratives in friendly media, NGOs, and so on, common sense tells us we could be seeing the same in the field of Artificial Intelligence.

I hope our European politicians and regulators can also see this.



Written by Moisés Cabello

IT teacher interested in the future.
