We need a political Alan Turing to design AI safeguards

The writer is founder of Sifted, an FT-backed site about European start-ups

When Alan Turing worked at Bletchley Park during the second world war, he helped solve a diabolical puzzle: cracking Nazi Germany’s “unbreakable” Enigma code. Next month, the British government is hosting an international conference at the same Buckinghamshire country house to explore a similarly mind-bending problem: minimising the potentially catastrophic risks of artificial intelligence. Even a mathematician as ingenious as Turing, though, would be tested by that challenge. 

Whereas the electro-mechanical device that Turing built could perform just one code-cracking function well, today’s frontier AI models are approaching the “universal” computers he could only imagine, capable of vastly more functions. The dilemma is that the same technology that can boost economic productivity and scientific research can also escalate cyberwarfare and bioterrorism.

As has become clear from the ferocious public debate that has erupted since OpenAI’s release of its ChatGPT chatbot last November, the spectrum of concerns aroused by AI is expanding fast.

At one end, “safety” champions extrapolate from the recent advances in AI technology and focus on extreme risks. An open letter signed earlier this year by dozens of the world’s leading AI researchers — including the chief executives of OpenAI, Anthropic and Google DeepMind, which are developing the most powerful models — even declared: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” 

At the other end, “ethics” advocates are agitated by the here-and-now concerns of algorithmic bias, discrimination, disinformation, copyright, workers’ rights and the concentration of corporate power. Some researchers, such as Emily Bender, a professor at the University of Washington, argue that the debate over the existential risks of AI is science fiction fantasy designed to distract from today’s concerns.

Several civil society groups and smaller tech companies, which feel excluded from Bletchley Park's official proceedings, are organising fringe events to discuss the issues they think are being ignored.

Matt Clifford, the British tech investor who is helping to set the agenda for the AI safety summit, accepts that it will only address one set of concerns. But he argues that other forums and institutions are already grappling with many other issues. “We’ve chosen a narrow focus, not because we don’t care about all the other things, but because it’s the bit that feels urgent, important and neglected,” he tells me.

In particular, he says the conference will explore the possibilities and dangers of next-generation frontier models, likely to be released within the next 18 months. Even the creators of these models struggle to predict their capabilities. But they are certain the new models will be significantly more powerful than today's and, by default, available to many millions of people.

As Dario Amodei, chief executive of Anthropic, outlined in chilling testimony to the US Congress in July, the development of more powerful AI models might revolutionise scientific discovery but it would also “greatly widen the set of people who can wreak havoc”. Without appropriate guardrails, there might be a substantial risk of a “large-scale biological attack”, he said.

Although the industry is resistant, it is hard to escape the conclusion that the precautionary principle must now apply to frontier AI models, given the unknowability of their capabilities and the speed at which they are being developed. That is the view of Yoshua Bengio, a pioneering AI researcher and winner of the Turing Award for computer science, who is attending the Bletchley Park conference.

Bengio suggests that frontier AI models could be regulated in the same way that the US Food and Drug Administration controls the release of drugs to stop garbage cures being sold. That may slow the pace of innovation and cost the tech companies more money, but "that's the price of safety and we should not hesitate to do it", he says in an interview for the FT's forthcoming Tech Tonic podcast series.

It is commendable that the British government is starting a global conversation on AI safety and is itself building expert state capacity to grapple with frontier models. But Bletchley Park will mean little unless it leads to meaningful co-ordinated action. And in a world distracted by so many dangers, that will require a political, rather than a technological, Turing to crack the code. 
