Mustafa Suleyman and Eric Schmidt: We need an AI equivalent of the IPCC


The writers are, respectively, the co-founder of Inflection and DeepMind, and the former CEO of Google

AI is here. Now comes the hard part: learning how to manage and govern it. As large language models have exploded in popularity and capability over the past year, safety concerns have become dominant in political discussion. For the first time, artificial intelligence is top of the in-tray for policymakers the world over. 

Even for those of us working in the field, the rate of progress has been electrifying. And yet it’s been equally eye-opening to see the extraordinary public, business and now political response gathering pace. There’s a growing consensus this really is a turning point as consequential as the internet.

Clarity about what should be done about this burgeoning technology is a different matter. Actionable suggestions are in short supply. What’s more, national measures can only go so far given its inherently global nature. Calls to “just regulate” are as loud, and as simplistic, as calls to simply press on.

Before we charge headfirst into over-regulation, we must first address lawmakers’ basic lack of understanding of what AI is, how fast it is developing and where the most significant risks lie. Before it can be properly managed, politicians (and the public) need to know what they are regulating, and why. Right now, confusion and uncertainty reign.

What’s missing is an independent, expert-led body empowered to objectively inform governments about the current state of AI capabilities and make evidence-based predictions about what’s coming. Policymakers are looking for impartial, technically reliable and timely assessments about its speed of progress and impact.

We believe the right approach here is to take inspiration from the Intergovernmental Panel on Climate Change (IPCC). Its mandate is to provide policymakers with “regular assessments of the scientific basis of climate change, its impacts and future risks, and options for adaptation and mitigation”.

A body that does the same for AI, one rigorously focused on a science-led collection of data, would provide not just a long-term monitoring and early-warning function, but would shape the protocols and norms about reporting on AI in a consistent, global fashion. What models are out there? What can they do? What are their technical specifications? Their risks? Where might they be in three years? What is being deployed where and by whom? What does the latest R&D say about the future?

The UK’s forthcoming AI safety summit will be a first-of-its-kind gathering of global leaders to discuss the technology’s safety. To support the discussions and to build towards a practical outcome, we propose an International Panel on AI Safety (IPAIS), an IPCC for AI. This necessary, measured and above all achievable next step would provide much-needed structure to today’s AI safety debate.

The IPAIS would regularly and impartially evaluate the state of AI, its risks, potential impacts and estimated timelines. It would keep tabs on both technical and policy solutions to alleviate risks and enhance outcomes. Significantly, the IPCC doesn’t do its own fundamental research, but acts as a central hub that gathers the science on climate change, crystallising what the world does and doesn’t know in authoritative and independent form. An IPAIS would work in the same way, staffed and led by computer scientists and researchers rather than political appointees or diplomats.

This is what makes it such a promising model — by staying out of primary research or policy proposals, it can avoid the conflicts of interest that inevitably come with a more active role. With a scope narrowly focused on establishing a deep technical understanding of current capabilities and their improvement trajectories, it would be cheap to run, impartial and independent, built on a broad international membership.

Given that much of the most cutting-edge work in AI is undertaken by businesses, ensuring sufficient transparency from leading companies is essential. An IPAIS will help here even before legal mechanisms come into play, establishing a trusted body to report into, creating expectations and standards around sharing to provide space for maximal openness in a tight commercial market. Where full access isn’t possible, it would still aggregate all publicly available information in the most comprehensive and reliable form.

Trust, knowledge, expertise, impartiality. These are what effective, sensible AI regulation and safety will be built on. Currently they are lacking. We believe that establishing an independent, scientific consensus about what capabilities have been developed, and what’s coming, is essential in developing safe AI. This is an idea whose time has come.

*This proposal has been developed jointly by Mustafa Suleyman, Eric Schmidt, Dario Amodei, Ian Bremmer, Tino Cuéllar, Reid Hoffman, Jason Matheny and Philip Zelikow
