AI · 5 min read · 21 November 2023

The Five Days That Almost Broke OpenAI

The OpenAI board fired Sam Altman on a Friday afternoon. Five days later he was back as CEO. The episode revealed more about AI governance than years of regulatory hearings had.

OpenAI · Sam Altman · Governance · AI · Microsoft

On Friday, November 17, 2023, the OpenAI board announced that it had removed Sam Altman as CEO. The announcement said the board had lost confidence in his ability to continue leading. No specific reasons were given. The same day, Greg Brockman, the company president, resigned in protest. By Sunday, hundreds of OpenAI employees had signed an open letter demanding the reinstatement of Altman and the resignation of the board, threatening to leave for Microsoft if their demands were not met.

By Wednesday, Altman was back as CEO. The board that had fired him had been substantially restructured. The episode had unfolded across five days that were unlike anything the technology industry had experienced before, and the consequences would shape how AI governance was discussed for years afterward.

The mechanics of how the firing happened mattered. OpenAI had an unusual corporate structure. The non-profit OpenAI Inc. controlled the for-profit OpenAI Global. The board of the non-profit had a fiduciary duty to the non-profit’s mission rather than to shareholders. Several board members had concluded that the company’s direction under Altman was inconsistent with the mission. The firing was an exercise of governance power consistent with the structure.

What the firing did not adequately account for was the practical reality of the company. The vast majority of employees were aligned with Altman. The capital structure depended overwhelmingly on Microsoft. The product was the most-discussed AI system in the world, with revenue and user growth that could not be sustained without operational continuity. The governance theory that had supported the firing was a poor match for the organisational reality.

The reinstatement happened because the alternatives were untenable. OpenAI without Altman would lose most of its workforce to Microsoft. Investors' stakes would be impaired. The product would be disrupted. The board's governance authority had been theoretical until it was exercised, and exercising it had revealed how thin that authority actually was.

The reformed board, the structural changes that followed, and the subsequent governance discussions across the AI industry all flowed from this event. The episode demonstrated, more clearly than years of regulatory hearings had, that mission-oriented corporate governance for AI companies operating at scale was much harder to maintain than its designers had assumed.
