The UK is planning to announce an international advisory group on artificial intelligence at a summit next month, as ministers seek to carve out a global approach to tackling the risks associated with the technology.
People briefed on the government’s thinking said Britain was aiming to launch an international group at the UK AI summit that would advance knowledge of the technology’s capabilities and risks.
The new group would be loosely modelled on the UN Intergovernmental Panel on Climate Change, added the people.
Prime Minister Rishi Sunak hopes to position Britain as a leader in AI regulation and an integral part of the fast-evolving global technology sector, where the UK has been losing ground to the US and China.
One of the people familiar with the government’s plans said the advisory group would comprise a rotating cast of academics and geographically diverse experts, who would be likely to write an annual report on cutting-edge developments in AI.
The group would be distinct from a planned UK AI safety institute, which would evaluate national security risks associated with machine learning models, and whose creation is expected to be announced in the coming weeks.
The government said discussions at the AI summit would “involve exploring a forward process for international collaboration on frontier AI safety, including how best to support national and international frameworks”.
Frontier AI is a sophisticated form of the technology that includes large language models, which can generate humanlike text, images and code.
Products from big tech groups, such as OpenAI’s ChatGPT and Google’s Bard, incorporate this technology.
Writing in the Financial Times on Thursday, a group of US corporate AI leaders and policy experts, including former Google chief Eric Schmidt, called for an “independent, expert-led body” to be set up to oversee development of the technology, inspired by the IPCC.
A proposed International Panel on AI Safety would act as a repository for existing research and implementation, including technical specifications of AI models and who was deploying them and where, said Schmidt and Mustafa Suleyman, co-founder of Google-owned AI company DeepMind.
The IPAIS would also act as an “impartial” evaluator of AI risks and future trajectories, they added, saying it would not perform any fundamental research of its own but would act as a hub for existing expertise.
It is unclear whether the UK government would base its planned international advisory body on the idea of the IPAIS.
Deputy Prime Minister Oliver Dowden told the Financial Times last month that he was confident the government’s “frontier AI” task force, created in June with £100mn in funding, could evolve to become “a permanent institutional structure, with an international offer”.
The two-day UK AI summit, which Downing Street announced in June, begins on November 1 at Bletchley Park, the second world war codebreakers’ hub outside London.
Attendees at the event will primarily include AI company executives, researchers and government leaders.
On Wednesday, British and Chinese officials confirmed that China was planning to attend the summit, alongside the US, to participate in setting out an international approach to governance of the technology.
The UK has focused much of the summit’s agenda on AI’s potential threats to national security, such as cyber attacks or bad actors’ ability to use it to design bioweapons.
But government insiders said Sunak had asked cabinet ministers to try to talk up the potential benefits of AI in multiple areas, including health and defence.
Discussions at the summit will also touch on election disruption, the spread of misinformation, the future direction of AI development and opportunities for the technology across the economy.