“OpenAI is nothing without its people.” That was the sentence echoed by dozens of employees on social media in November to pressure the board that had fired chief executive Sam Altman and convince it to reinstate him.
Those words were repeated on Wednesday as its high-profile chief technology officer, Mira Murati, announced her departure, along with two others: Bob McGrew, chief research officer, and Barret Zoph, vice-president of research.
Murati’s decision shocked staff and pointed to a new direction for the nine-year-old company, which has pivoted from a scrappy AI research organisation to a commercial behemoth. Altman was notified only that morning, just hours before Murati sent a company-wide message.
Altman said on X that he “won’t pretend it’s natural for this . . . to be so abrupt”, as the exits made it apparent that the company had not healed from the fractures caused by the failed autumn coup.
In the months after the bruising board battle, Altman has surrounded himself with allies as the fast-growing start-up pushes ahead with plans to restructure as a for-profit company.
It also emerged this week that Altman had discussed with the board the possibility of taking an equity stake, at a time when the San Francisco-based company is seeking to raise more than $6bn at a $150bn valuation.
Those talks come after Altman, who is already a billionaire from his previous tech ventures and investments, said he had chosen not to take any equity in OpenAI in order to remain neutral within the company.
This account of how Altman consolidated his power and loyalties at the ChatGPT maker is based on conversations with seven former and current employees, as well as advisers and executives close to the company’s leadership.
They said that OpenAI planned to rely on existing technical talent and recent hires to take on Murati’s responsibilities and use her exit to “flatten” the organisation. Altman is to have greater technical involvement as the company looks to retain its lead over Google and other rivals.
Despite its dramas, OpenAI remains a leading player in AI, with the start-up revealing the o1 model earlier this month, which it said was capable of reasoning, an ability that rivals Meta and Anthropic are also racing to achieve.
“Mira is focused on a successful transition with her teams before turning her full energy and attention to what comes next,” said a person familiar with her thinking.
With Murati’s departure, Altman promoted Mark Chen to head up research with Jakub Pachocki, who took over from Ilya Sutskever as chief scientist in May.
In an interview with the Financial Times earlier this month, where Murati introduced Chen as the primary lead on the o1 project, he said the ability of AI systems to reason would “improve our offerings [and] help power improvements across all of our programs”.
There will probably be further changes in the coming days as Altman interrupts a trip to Europe this week to return to the company’s headquarters in San Francisco.
Executives who remain at OpenAI include Brad Lightcap, the company’s chief operating officer who leads on its enterprise deals, and Jason Kwon, chief strategy officer, both longtime allies of Altman who worked under him at start-up incubator Y Combinator.
In June, Altman hired Kevin Weil, chief product officer, who previously worked at Twitter, Instagram and Facebook, and Sarah Friar, chief financial officer, the former chief executive of Nextdoor, a neighbourhood-based social network. Both come from consumer tech companies, focusing on product and user growth rather than science or engineering.
Their jobs are new for OpenAI but familiar at most Silicon Valley start-ups, marking the company’s shift towards becoming a more traditional tech group focused on building products that appeal to consumers and generate revenue. OpenAI said these efforts are not at odds with ensuring AI benefits everyone.
“As we’ve evolved from a research lab into a global company delivering advanced AI research to hundreds of millions, we’ve stayed true to our mission and are proud to release the most capable and safest models in the industry to help people solve hard problems,” an OpenAI spokesperson said.
Friar sought to boost morale this week, telling staff that the $6bn funding round, which was expected to close by next week, was oversubscribed, and arguing that its high valuation was a testament to their hard work.
Another prominent newcomer is Chris Lehane, a former aide to then-US president Bill Clinton and Airbnb vice-president, who worked for Altman as an adviser during the coup and joined the company earlier this year. He recently took over as vice-president of global affairs from Anna Makanju, OpenAI’s first policy hire, who has moved into a newly created role as vice-president of global impact.
With the latest departures, Altman has said goodbye to two of the senior executives who had raised concerns about him to the board last October: Sutskever and Murati. Murati has said she was approached by the board and was perplexed by its decision to oust him.
Those concerns included a leadership style in which Altman undermined people and pitted them against one another, creating a toxic environment, according to multiple people with knowledge of the decision to fire him.
Within a day, as investors and employees backed Altman, Murati and Sutskever joined calls for his return and remained at the company, wishing to steady the ship and keep it sailing towards the mission: building artificial general intelligence — systems that could rival or surpass human intelligence — to benefit humanity.
This was the mantra under which OpenAI was founded in 2015 by Elon Musk, Altman and nine others. It was initially a non-profit, then transitioned to a capped-profit entity in 2019.
Now, as it seeks to close its latest multibillion-dollar funding round, the company is rethinking its corporate structure in order to attract investors and generate greater returns. Only two co-founders, Altman and Wojciech Zaremba, remain at the company. President Greg Brockman is on sabbatical until the end of the year.
For many of OpenAI’s staff there is a desire to work on AGI and reach that goal before competitors such as Meta or Musk’s xAI. They buy into the so-called cult of Sam and believe he will lead them to this breakthrough. Yet several staff have expressed concern about how the company is pursuing this goal, suggesting that the creation of products is being prioritised over safety.
Daniel Kokotajlo, a former AI governance researcher, said that when he left the company in March, the closest OpenAI had come to a plan for keeping AGI safe was the final appendix of a December paper written by Jan Leike, a safety researcher, alongside Sutskever.
“You might expect a company of more than 1,000 people building this to have a comprehensive written plan for how to ensure AGI is safe, which would be published so it could be critiqued and iterated,” he said. “OpenAI knows any such detail would not stand up to scrutiny, but this is the bare minimum that is acceptable for an institution building the most powerful and dangerous technology ever.”
OpenAI pointed to its preparedness framework as an example of its transparency and planning, adding that the technology could also bring many positives.
“OpenAI continues to invest significantly in safety research, security measures, and third-party collaborations, and we will continue to oversee and assess their efforts,” said Zico Kolter and Paul Nakasone, members of the independent board’s Safety and Security oversight committee.