Sunak’s AI summit scores ‘diplomatic coup’ but exposes global tensions


At Rishi Sunak’s summit on artificial intelligence this week, one delegate said the British Prime Minister had scored a “diplomatic coup” by getting US and Chinese officials to stand together on the need to control AI.

Amid heightened trade and technological tensions between Washington and Beijing, the Chinese delegate Dr Wu Zhaohui surprised some at the two-day event at Bletchley Park, England, by agreeing that they were united by the common values of “democracy” and “freedom” in the fight against the malicious uses of AI.

Broad commitments from 28 nations to work together to tackle the existential risks stemming from advanced AI, along with the attendance of leading tech figures from Tesla and X chief Elon Musk to OpenAI’s Sam Altman, led many to say that Sunak’s AI summit had been a success.

“It’s a great example of the convening power of the UK, because you got all of these people under one roof,” said Jean Carberry, assistant secretary for digital policy from the Irish Department of Enterprise, Trade and Employment. “It has been quite catalytic.”

But the Bletchley Park summit has also exposed underlying tensions about the development of AI. US vice-president Kamala Harris held a press conference in London during the summit, where she spelt out her country’s intent to write its own rules of the game.

“Let us be clear: when it comes to AI, America is a global leader. It is American companies that lead the world in AI innovation. It is America that can catalyse global action and build global consensus in a way that no other country can,” Harris said, pointedly. 

Earlier in the week, president Joe Biden issued an executive order compelling AI companies whose models could threaten US national security to share how they are ensuring the safety of their tools.

At Bletchley, the US also announced a plan to create its own AI Safety Institute, staffed by experts to monitor risks, which would work closely with a similar body created by the UK, an institution with ambitions for a greater global role in researching AI-related risks.

One politician present said that the US land grab was “inevitable”. “AI is the next big thing, and no one is going to roll over and play dead and say you take over,” they said. “China’s not going to let that happen, and certainly the US won’t.”

The person added that one of the reasons the Chinese had been so pliant in the development of a joint position on global AI governance was that “playing nice” and acting as a “responsible partner” could help foster conversations about the relaxation of US-imposed trade barriers further down the line.

Another source of debate was whether sophisticated AI models should be “open” or “closed”. Companies such as Meta and start-ups including Hugging Face, Mistral and Stability AI are building open-source AI systems, meaning that the technical details of their models are released publicly.

This contrasts with the approach of competitors such as Microsoft-backed OpenAI and Google, which create a so-called black box in which the data and code used to build AI models are not available to third parties.

Advocates of closed models say the approach makes it easier to comply with strict regulations, to control who has access and to prevent the models falling into the hands of malicious actors. But others argue that open models will help ensure less well-resourced countries and academic groups can develop their own AI systems.

“In these discussions, there are conflicting goals. And I see countries . . . who want to make sure that they can catch up to the state of the art . . . saying things like, ‘we should share everything’,” said Yoshua Bengio, scientific director of the Montreal Institute for Learning Algorithms and a pioneer of deep learning, the technology behind today’s sophisticated AI models.

But he added: “Open-sourcing the strongest models is . . . dangerous from a security perspective. Once you can have access to the weights in the system, you can easily kind of shift them into something malicious.”

Emad Mostaque, founder and chief executive of image AI company Stability AI, which builds on open-source technology, said France had been particularly vocal in its support of increasing access to sophisticated AI models, partly because of its homegrown AI start-up Mistral, which is building open-source software.

“But ultimately just about every government official I talk to realises that governments have to run on open. In regulated industries you can’t have black boxes,” he said. “There is this question of national security — can you really rely on someone else to do this technology?” 

Musk, who has created his own company X.AI and who has previously advocated for a moratorium on developing more advanced AI systems, told Sunak that he had “a slight bias towards open source” because it tends to lag closed-source by six to 12 months and “at least you can see what’s going on”.

“This focus on a pause and the catastrophic risks of AI suits Musk because he is behind [on developing AI],” said one person who attended a session at the summit with the billionaire. They added that when “some of the leading companies are emphasising catastrophic outcomes there are risks of regulatory capture”, where lawmakers are influenced by commercial interests above concerns of the wider public.

Developing nations, having been excluded from the bounties of previous technological revolutions, sought commitments that this would not happen again.

“A lot of the countries, especially in the global south, find it unaffordable to participate in the digital economy of the world,” said Rajeev Chandrasekhar, Indian minister of electronics and IT. “That time is over. The benefits of tech should be available to . . . every country in the world.”

Vidushi Marda, a co-founder of the Indian non-profit REAL ML, who attended, said “it was encouraging to see that the focus on regulation at the domestic level first was an emerging key priority” as a way of avoiding a repeat of historical mistakes.

The next editions of the summit will be held in South Korea in six months and in France a year from now. While the Bletchley event focused on alignment around what needs to be done, participants said the next two would focus on action: concrete regulation and research proposals for how AI models could be evaluated.

Marija Pejčinović Burić, secretary-general of the Council of Europe, said: “The challenge of AI is so big and the development is exponential, which means there is no luxury to put it aside and focus on something else.” 
