A former Facebook executive, an AI researcher, a tech entrepreneur and a computer scientist are the four OpenAI kingmakers who plunged the start-up into crisis last week when they fired its chief executive.
The abrupt decision by board members Adam D’Angelo, Helen Toner, Tasha McCauley and Ilya Sutskever to oust Sam Altman set off a dramatic chain of events and has fuelled speculation about their motives and competency to manage what has become the world’s most high-profile AI start-up.
OpenAI is unconventionally structured as a partnership between a research non-profit and a for-profit subsidiary. The board oversees both, but its core mandate is to pursue artificial intelligence “that is safe and benefits all of humanity” rather than to look after the interests of investors.
How the four came to hold the keys to the future direction of the leading AI company remains unclear. Neither investors nor staff could explain how the slimmed-down board, which is half the size it was in 2021, is appointed.
Among OpenAI employees, shock at Altman’s dramatic firing has curdled into frustration, with the board offering no specific reason for its decision beyond saying he had not been “consistently candid”.
Elon Musk, the outspoken X owner and former OpenAI board member who helped launch the start-up in 2015, has called on one of the four to “say something” to explain, while Vinod Khosla, an early investor, said the board had “set back the promise of artificial intelligence”, in an opinion piece in The Information on Monday.
Some people who know the board members said they were intelligent, thoughtful and well-placed to fulfil their mandate to serve humanity. Others pointed to their relative lack of corporate experience, the poor handling of Friday’s announcement and the subsequent fallout.
One person who worked with D’Angelo at the Quora question-and-answer site he runs as chief executive said he was a poor communicator and that the board’s lack of communication was “not surprising”.
D’Angelo has expressed concerns in the past about the dangers of new technologies. When he joined the OpenAI board in 2018, he said work on AI “with safety in mind” was “both important and under-appreciated”.
Writing in 2017 while at Y Combinator, which invested in Quora, Altman said D’Angelo was “one of the few names that people consistently mention when discussing the smartest CEOs in Silicon Valley”, while Yishan Wong, the former chief of Reddit, said D’Angelo was “ridiculously rational”.
Jeffrey Ding, an AI researcher at George Washington University, said Toner, whom he has known since 2018, was clear-eyed about the risks and opportunities of generative AI.
She has “really good judgment” and the “rare ability to speak to both sides of debates about AI and AI governance”, he said, adding that Toner was “very clear-minded” and open to “new ideas, revising her opinions and being receptive to feedback”.
Toner and Ding co-authored a paper in June that said avoiding the regulation of AI because tighter rules would “let China pull ahead” was “not a good argument”.
Toner in May cautioned against over-relying on AI chatbots, saying there was “still a lot we don’t know” about them, and said in October that the US government should “take action to protect citizens from AI’s harms and risks, while also promoting innovation and capturing the technology’s benefits”.
Less is known about the low-profile McCauley, who, like Toner, is a supporter of effective altruism — an intellectual movement that has warned of the risks that AI could pose to humanity.
Toby Ord, who sits on the advisory board of the Centre for the Governance of AI research group alongside Toner and McCauley, said both were “highly intelligent, thoughtful and morally serious, with a deep knowledge about AI risk and governance”.
They were “exactly the kind of people one would want to have on the board of a non-profit tasked with the mission of supervising a for-profit subsidiary that is trying to develop artificial general intelligence”, he added.
McCauley was “one of the most thoughtful people I’ve ever worked with. Even during a crisis, she’s remarkably level-headed and calm”, said one person who has worked closely with her. “I find it very hard to picture her acting rashly or recklessly.”
Opinions are split on Sutskever, who is an OpenAI co-founder and co-author of the formative paper that launched the deep-learning era.
Critics rounded on the computer scientist for his role in the coup against Altman. Others pointed to Sutskever’s focus on AI safety as head of a team dedicated to controlling increasingly advanced AI. That focus had jarred with the culture of restless innovation embodied by Altman, according to people familiar with the matter.
“If you value intelligence above all other human qualities, you’re gonna have a bad time,” Sutskever wrote in an X post in October.
Musk wrote on X that Sutskever had “a good moral compass”, adding that he “would not take such drastic action unless he felt it was absolutely necessary”. Sutskever has since realigned himself with Altman, saying he “deeply [regretted] my participation in the board’s actions”.
Some major investors in the for-profit company have attempted to push the board into reinstating Altman, with legal action a possibility, according to multiple people with knowledge of the matter.
With Sutskever now behind Altman, the investors are considering which of the remaining three directors is most likely to flip. Multiple investors and staff have alighted on Quora founder D’Angelo as the most probable.
“Adam D’Angelo makes the most sense to go after first,” said one person at a venture fund invested in OpenAI. “He has a reputation in Silicon Valley, he doesn’t need this.”