The UK government will provide businesses with a new platform to help assess and mitigate the risks posed by artificial intelligence, as it seeks to be the global leader in testing the safety of the novel technology.
The platform, launched on Wednesday, will bring together guidance and practical resources for businesses to use to carry out impact assessments and evaluations of new AI technologies, and review the data underpinning machine learning algorithms to check for bias.
Science and tech secretary Peter Kyle said these resources would give “businesses the support and clarity they need to use AI safely and responsibly while also making the UK a true hub of AI assurance expertise”.
The minister was speaking ahead of the Financial Times’ Future of AI Summit on Wednesday, where he will outline his vision for the AI sector in Britain.
Kyle has previously vowed to place AI at the heart of the government’s growth agenda, and argued that if it were fully integrated into the economy it would increase productivity by 5 per cent and create £28bn of fiscal headroom.
His government sees AI safety — including so-called assurance technology — as an area where the UK could carve out a competitive niche, building on the expertise of Britain’s pioneering AI Safety Institute, launched by former Conservative prime minister Rishi Sunak.
Assurance technologies, akin to cyber security for the web, are essentially tools that can help businesses verify, scrutinise and trust the machine learning products they are working with. Companies already producing this technology in the UK include Holistic AI, Enzai and Advai.
The new Labour government believes this market could grow six-fold in Britain to be valued at £6.5bn by 2035.
However, the UK faces stiff competition in developing assurance technology, with other nations also seeking to lead the way on AI safety.
The US launched its own AI safety institute last year, while the EU has enacted an AI Act that is considered among the toughest regulatory regimes for the new technology.
As part of the new platform, the UK government will be rolling out a self-assessment tool to help small businesses check whether they are using AI systems safely.
It is also announcing a new partnership on AI safety with Singapore that will allow the safety institutes from both countries to work closely together to conduct research, develop standards and industry guidance.
Dominic Hallas, executive director of The Startup Coalition, said there “definitely is a huge opportunity” in the UK market for AI assurance technologies, adding that “the biggest gap to adoption of AI at the moment is trust of the models”.
He noted, however, that many AI start-ups still face huge challenges around how to access enough compute power, and how to attract talent — areas where greater investment and interventions from government would be welcome.
Earlier this year, a report by the Social Market Foundation think-tank recommended that the UK government mobilise the public and private sector to “supercharge” the UK’s AI assurance tech industry.
It said that the global AI assurance tech market was estimated to reach $276bn by 2030, and argued that the UK could become a global leader. It also called on the government to invest up to £60mn in companies developing these technologies.