How OpenAI’s Bizarre Structure Gave 4 People the Power to Fire Sam Altman

When Sam Altman, Elon Musk, and other investors formed the startup behind ChatGPT as a US not-for-profit organization in 2015, Altman told Vanity Fair he had very little experience with nonprofits. “So I’m just not sure how it’s going to go,” he said.

He couldn’t have imagined the drama of this week, with four directors on OpenAI’s nonprofit board unexpectedly firing him as CEO and removing the company’s president as chairman of the board. But the bylaws Altman and his cofounders initially established and a restructuring in 2019 that opened the door to billions of dollars in investment from Microsoft gave a handful of people with no financial stake in the company the power to upend the project on a whim.

An attempt to restore Altman as CEO and replace the board ran into difficulty Sunday over the role of existing directors in choosing their replacements, Bloomberg reported.

Altman’s firing caught investors off guard, including prominent firms such as Khosla Ventures, which has a significant stake in OpenAI, as well as Andreessen Horowitz and Sequoia Capital, which have smaller slices of shares, according to two people familiar with the matter not authorized to speak to media about the startup. Spokespeople for Khosla, Sequoia, and Andreessen declined to comment.

One of the sources says some investors had previously feared OpenAI’s remaining independent directors—with little background in corporate governance—could end up failing in their oversight duties. Less thought was given to the possibility of aggressive action like that taken against Altman. “I never expected them to be activists,” the person says.

The 11-page bylaws OpenAI Inc. established in January 2016 give board members the exclusive right to elect and remove fellow directors and also to determine the board’s size. The rules also say that a majority of the board can take any action without prior notice or a formal meeting, as long as a majority of board members provide written consent.

Nathan Benaich, general partner of Air Street Capital and coauthor of the “State of AI Report,” says OpenAI’s corporate structure has proven to be at odds with the need to support cutting-edge research through huge amounts of equity investment. “It was an experiment to defy the laws of corporate physics, and it appears that physics won out,” he says.

It’s unclear if the OpenAI nonprofit has tweaked those bylaws since the initial filing on the California Department of Justice’s Registry of Charitable Trusts. The nonprofit reported no “significant changes to its governing documents” in filings to US tax authorities through 2021, the last year for which data could be obtained.

But the original rules and the gradual dwindling of board members, due to conflicts of interest and a dispute with Musk, could help explain how a small group could sack Altman without chairman Greg Brockman’s involvement, and eject both Brockman and Altman from the panel. OpenAI, the California agency, Altman, Brockman, and the four remaining directors did not respond to requests for comment.

OpenAI had warned investors on the cover sheet of key documents that “it would be wise to view any investment in OpenAI in the spirit of a donation,” according to a federal tax filing.

Utopian Quest

OpenAI was formed as a nonprofit in line with the project’s mission to develop artificial intelligence that would be safe and beneficial for humanity and provide a counterweight to profit-driven AI labs at giants like Google.

Altman and Musk were the sole initial board members, according to the Vanity Fair interview. “We realize the two of us aren’t a perfect sample of what everyone in the world thinks is important,” Altman said. “I’d say we plan to expand that group.”

OpenAI did expand its board, but the new members were generally also white men from the slice of Silicon Valley concerned that future, ultra-powerful AI could turn against humanity. By 2017, executives involved from the earliest days, including Brockman, research chief Ilya Sutskever, and then COO Chris Clark, had joined the board, according to the nonprofit’s federal tax filings.

Also on the board was Holden Karnofsky, founder of Open Philanthropy, an effective-altruism group that donated to OpenAI. LinkedIn cofounder and venture capitalist Reid Hoffman, one of the project’s initial backers, joined in 2018.

A feud over the direction of OpenAI led Musk to step down from the board in 2018 after failing to take over the project. The following year, OpenAI formed a for-profit subsidiary to lure the funding and employees needed to pursue its leaders’ ambitious and expensive AI development plans.

Venture capitalists and employees could now get some return on the money or sweat that they invested in the company—but the nonprofit’s board still maintained ultimate say over the for-profit business through several new legal provisions, according to OpenAI.

The directors’ primary fiduciary duty remained upholding the nonprofit’s mission: the safe development of artificial general intelligence that benefits all of humanity. Only a minority of directors could hold financial stakes in the for-profit company, and the for-profit’s founding documents require it to give priority to public benefits over maximizing profits.

The revised structure unlocked a torrent of funding to OpenAI, in particular from Microsoft, ultimately allowing OpenAI to marshal the cloud computing power needed to create ChatGPT.

Among the new directors overseeing this unusual structure was Shivon Zilis, a longtime associate of Elon Musk who later had twins with the entrepreneur; she joined in 2019 after serving as an adviser. Will Hurd, a former Republican congressman, signed up in 2021.

Concentration of Power

In 2023, OpenAI’s board started to shrink, narrowing its bench of experience and setting up the conditions for Altman’s ouster. Hoffman left in January, according to his LinkedIn profile, and he later cited potential conflicts of interest with other AI investments. Zilis resigned in March, and Hurd in July to focus on an unsuccessful run for US president.

Those departures shrank OpenAI’s board to just six directors, one fewer than the maximum allowed in its original bylaws. With Brockman, Sutskever, and Altman still members, the group was evenly split between executives and people from outside OpenAI, meaning it was no longer majority independent, as Altman had testified to US senators just weeks earlier.

The dramatic turn came Friday when, according to Brockman, chief scientist Sutskever informed him and Altman about their removals from the board shortly before a public announcement of the changes, which also included Altman’s firing as CEO because “he was not consistently candid in his communications with the board.” Brockman subsequently resigned from his role as OpenAI’s president. Sutskever reportedly had been concerned about his diminished role inside OpenAI and Altman’s fast-paced commercialization of its technologies.

The leadership upheaval threw OpenAI into crisis, but arguably the board functioned as intended—as an entity independent of the for-profit company and empowered to act as it sees necessary to accomplish the project’s overall mission. Sutskever and the three independent directors would form the majority needed to make changes without notice under the initial bylaws. Those rules allow for removals of any director, including the chair, at any time by fellow directors with or without cause.

Besides Sutskever, the remaining directors include Adam D’Angelo, an early Facebook employee who has served since 2018 and is CEO of Q&A forum Quora, which licenses technology from OpenAI and AI rivals; entrepreneur Tasha McCauley, who took her seat in 2018; and Helen Toner, an AI safety researcher at Georgetown University who joined the board in 2021. Toner previously worked at the effective-altruism group Open Philanthropy, and McCauley is on the UK board of Effective Ventures, another effective-altruism-focused group.

During an interview in July for WIRED’s October cover story on OpenAI, D’Angelo said that he had joined and remained on the board to help steer the development of artificial general intelligence toward “better outcomes.” He described the for-profit entity as a benefit to the nonprofit’s mission, not one in contention with it. “Having to actually make the economics work, I think is a good force on an organization,” D’Angelo said.

The drama of the past few days has led OpenAI leaders, staff, and investors to question the governance structure of the project.

Amending the rules of OpenAI’s board isn’t easy—the initial bylaws place the power to do so exclusively in the hands of a board majority. As OpenAI investors encourage the board to bring Altman back, he has reportedly said he would not return without changes to the governance structure he helped create. That would require the board to reach a consensus with the man it just fired.

OpenAI’s structure, once celebrated for charting a brave course, is now drawing condemnation across Silicon Valley. Marissa Mayer, previously a Google executive and later Yahoo CEO, dissected OpenAI’s governance in a series of posts on X. The seats that went vacant this year should have been filled quickly, she said. “Most companies of OpenAI’s size and consequence have boards of 8-15 directors, most of whom are independent and all of whom have more board experience at this scale than the 4 independent directors at OpenAI,” she wrote. “AI is too important to get this wrong.”

Anthropic, a rival AI firm founded in 2021 by ex-OpenAI employees, has undertaken its own experiment in devising a corporate structure to keep future AI on the rails. It was founded as a public-benefit corporation legally pledged to prioritize helping humanity alongside maximizing profit. Its board is overseen by a trust with five independent trustees chosen for experience beyond business and AI, who will ultimately have the power to select a majority of Anthropic’s board seats.

Anthropic’s announcement of that structure says it consulted with corporate experts and tried to identify potential weaknesses but acknowledged that novel corporate structures will be judged by their results. “We’re not yet ready to hold this out as an example to emulate; we are empiricists and want to see how it works,” the company’s announcement said. OpenAI is now scrambling to reset its own experiment in designing corporate governance resilient to both superintelligent AI and ordinary human squabbles.

Additional reporting by Will Knight and Steven Levy.

Updated 11-19-2023, 5:30 pm EST: This article was updated with a past comment by Adam D’Angelo.

Paresh Dave