Book award winner Parmy Olson says it is not too late to control AI

“While [Sam] Altman measured success with numbers, whether for investments or people using a product, [Demis] Hassabis chased awards,” writes Parmy Olson in her book Supremacy, about the co-founders of DeepMind and OpenAI. “[Hassabis] often told staff that he wanted DeepMind to win between three and five Nobel Prizes over the next decade.”

Just a few hours after Olson won the 2024 Financial Times and Schroders Business Book Award for Supremacy this week, Hassabis rendered the first edition out of date by accepting the Nobel Prize for chemistry in Stockholm for his work on an AI system that can predict the structure of all known proteins.

That Hassabis is already one Nobel closer to his ambitious target illustrates the speed at which AI is transforming the world. Olson’s difficult task was to produce a book on this fast-moving technology that would stand the test of time. In an interview this week, she says she wanted to describe “the battle for control [of AI], and also . . . for proper oversight”. FT editor Roula Khalaf, chair of the book award judges, says Olson “brilliantly frames the development of artificial intelligence as a thrilling race” between the spiritually minded, games-obsessive Hassabis and the number-crunching Altman.

Despite Hassabis and Altman’s differences, Olson says she was also intrigued by their similarities. Both believed AGI — the point at which AI surpasses humans’ cognitive abilities — “would solve many of our current social ills and problems”. Both shared concerns about lack of regulation and an excess of corporate control of AI. “Both tried to put in governance structures to separate the technology a little bit and give it proper oversight,” she says, “and both of them failed to do it.”

Google now controls DeepMind, while Microsoft is backing OpenAI, whose public launch of ChatGPT two years ago accelerated usage of generative AI. The founders’ “quite utopian, almost humanitarian ideals kind of faded into the background as they aligned themselves more and more with two very large technology companies”, says Olson. She intended Supremacy in part as a warning about the need for proper regulation of the emerging tech oligopoly.

But is it too late to impose regulatory and ethical restraints on AI, given how rapidly it is evolving? Olson, a technology columnist at Bloomberg, thinks not. There is still time to influence “how tech companies design their algorithms”, she says, to ensure they are safer and less biased.

Legislation, such as the EU’s AI Act, which lays out a tough regulatory regime, will impose guardrails. But Olson also notes that companies procuring generative AI systems will themselves act as a restraint on the technology suppliers. “There’s a lot of experimentation happening, but actually not that much spending on putting these AI systems into practice, because there really is concern about hallucinations . . . and bias,” she says. “Companies like banks and healthcare systems have their own regulatory regimes [that] they need to make sure they’re following” before rolling AI-fuelled products and services out to customers.

When this year’s book award launched, previous winners were asked what they would add to their books if they had the chance to write a new chapter. Supremacy was published only in September but Olson recognises future editions may have to acknowledge the importance of Donald Trump’s election last month, and the proximity of entrepreneur Elon Musk to the US president-elect.

Musk was a co-founder and early backer of OpenAI with Altman, before splitting from the company and founding his own start-up xAI, which is training Grok, a rival to ChatGPT and Google’s Gemini. Olson says she is “honestly surprised at how quickly Grok has grown in terms of its ability to raise money”. However, she cautions that Musk’s presence in a Trump administration, far from accelerating a light-touch regulatory regime for AI, could “throw a spanner in the works”. “You have to remember that Musk is an AI doomer, and he started OpenAI in part because he was so worried about Google having control of AGI”.

Olson’s concern about the weaponisation of AI purely for commercial returns is clear. Some AI chatbot start-ups are already encouraging an emotional connection between users and bots. She worries about advertising-backed models that could fuel an addictive cycle of chatbot use. It will “become harder for the vendor to think about the wellbeing of the user because their business model depends on that person being on their app for as long as possible”.

Yet Olson says she is not predicting an AI dystopia. When the FT prompted ChatGPT to generate an “unexpected question” for her, the bot asked: “If AI were to write a definitive history of humanity 100 years from now, what perspective or biases might it have? And how would this history differ from one written by humans today?”

Olson responds: “If humans really make an effort”, and a more equitable society evolves, “maybe whatever AI writes . . . will be a reflection of that. I do look at the future with optimism, so I would hope that it would actually be quite an inspiring read.”
