The world’s biggest artificial intelligence companies are pushing the UK government to speed up its safety tests for AI systems, in a clash over Britain’s desire to take a leading role in regulating the fast-developing technology.
OpenAI, Google DeepMind, Microsoft and Meta are among the tech groups that signed voluntary commitments in November to open up their latest generative AI models for review by Britain’s new AI Safety Institute (AISI). At the time, the companies pledged they would adjust their models if the institute found flaws in the technology.
According to multiple people familiar with the process, the AI companies are seeking clarity over the tests the AISI is conducting, how long they will take and what the feedback process is if any risks are found.
People close to the tech companies said the groups were not legally obliged to change or delay their product releases based on the outcomes of the AISI’s safety tests.
However, in a LinkedIn post on Monday, Ian Hogarth, chair of the AISI, said: “Companies agreed that governments should test their models before they are released: the AI Safety Institute is putting that into practice.”
“Testing of models is already under way, working closely with developers,” the UK government told the Financial Times. “We welcome ongoing access to the most capable AI models for pre-deployment testing — one of the key agreements companies signed up to at the AI Safety Summit,” which took place in November at Bletchley Park.
“We will share findings with developers as appropriate. However, where risks are found, we would expect them to take any relevant action ahead of launching.”
The dispute with tech companies reveals the limitations of relying on voluntary agreements to set the parameters of fast-paced tech development. On Tuesday, the government outlined the need for “future binding requirements” for leading AI developers to ensure they were accountable for keeping systems safe.
The government-backed AI Safety Institute is key to Prime Minister Rishi Sunak’s ambition for the UK to have a central role in tackling the existential risks stemming from the rise of AI, such as the technology’s use in damaging cyber attacks or designing bioweapons.
According to people with direct knowledge of the matter, the AISI has begun testing existing AI models and has access to as-yet-unreleased models, including Google’s Gemini Ultra.
Testing has focused on the risks associated with the misuse of AI, including cyber security, leaning on expertise from the National Cyber Security Centre within Government Communications Headquarters (GCHQ), one person said.
Recently published government contracts show the AISI has spent £1mn procuring capabilities to test for “jailbreaking”, the formulation of prompts designed to coax AI chatbots into bypassing their guardrails, and “spear-phishing”, in which individuals and organisations are targeted, commonly via email, to steal sensitive information or spread malware.
Another contract relates to the development of “reverse engineering automation”, the process by which source code is broken down in an automated way to identify its functionality, structure and design.
“The UK AI Safety Institute has access to some of our most capable models for research and safety purposes to build expertise and capability for the long term,” Google DeepMind said.
“We value our collaboration with the institute and are actively working together to build more robust evaluations for AI models, as well as seek consensus on best practices as the sector advances.”
OpenAI and Meta declined to comment.