OpenAI’s CEO saga - what, why and what’s next?

Written by Aušra Mažutavičienė on December 4, 2023

Black Friday is here, but this isn't your average Black Friday message. This privacy newsletter is more dramatic than ever. We'll cover:

  1. OpenAI's change in management.
  2. News regarding AI guidelines from the French authorities.

Let's dive in.

1. The boardroom drama at OpenAI

Last Friday, OpenAI’s Board of Directors announced that Sam Altman would step down as CEO of OpenAI, the company behind ChatGPT. The board had "lost confidence" in his leadership, whatever that means.

More than any other figure, Altman has been the face of the new wave of AI technology, thanks to the viral success of ChatGPT and his outspokenness about AI regulation.

So the tech world was in shock!

Most of OpenAI’s 770 employees felt the same: they signed a letter to the board saying they might quit and join Microsoft unless all directors resigned and Altman was reinstated. Microsoft had assured them that there would be jobs for all OpenAI employees, should they wish to jump ship. So now, just five days later, Altman is returning as CEO.

The ambiguous language announcing Altman’s exit and return has set the rumor mill spinning. It's not clear whether tensions over the future direction of OpenAI contributed to the crisis. Some say a key factor in Altman’s exit was the friction between Altman, who favored pushing AI development more aggressively, and members of the original OpenAI board, who wanted to move more cautiously.

Whatever the reason, many observers have called for greater clarity and argue that the whole debacle has damaged the OpenAI brand. Only time will tell what impact this saga will have on OpenAI and on AI development in general.

What does it mean?

Unlike Google, Facebook and other tech giants, OpenAI was not created to be a business. It was set up as a nonprofit by founders who hoped that it wouldn’t serve commercial interests.

While OpenAI later transitioned to a for-profit model, its controlling shareholder remains the nonprofit OpenAI Inc., governed by its Board of Directors. This unusual structure made it possible for just four OpenAI board members - the company’s chief scientist, two outside tech entrepreneurs and an academic - to fire CEO Sam Altman. Now, with Altman's return, the board will be restructured, and a new member has already joined.

So will the new leadership prioritise commercial interests over the responsible use of AI? The chaos of the last five days shows how much power rests with a handful of people, power whose ripples are felt all over the world.

Let’s hope that AI regulation and guidelines come into play soon to help protect everyone.


2. New AI guidelines

CNIL, the French Data Protection Authority, has published guidelines on the use of AI along with an action plan.

The guidelines consist of several resources for data controllers and processors that use AI to process personal data, providing direction on how to do so without violating the GDPR. The action plan, meanwhile, focuses on the work the CNIL plans to do in the coming period, including:

  • Understanding the impact of AI.
  • Guiding the development of AI that respects personal data - the CNIL aims to help organizations build AI systems accordingly.
  • Supporting AI innovators in France and Europe.
  • Auditing and overseeing AI systems. The CNIL is working on a tool to audit AI systems and will continue investigating complaints about AI, including generative AI.

So if you haven’t already, now is the time to start preparing for AI regulations and enforcement. Even if you don’t develop AI tools or services, using them responsibly is just as important.