Step aside world, the US wants to write the AI rules

The writer is founder of Sifted, an FT-backed site about European start-ups

Kamala Harris scores high marks for honesty even if she may not graduate top in diplomacy. Standing alongside US president Joe Biden before he signed Monday’s executive order regulating artificial intelligence, the vice-president spelled out her country’s intent to remain the world’s technological hegemon and write its own rules of the game.

“Let us be clear: when it comes to AI, America is a global leader. It is American companies that lead the world in AI innovation. It is America that can catalyse global action and build global consensus in a way that no other country can,” Harris said, pointedly. Then, she flew off to the UK government’s Bletchley Park summit on AI safety.

The executive order, which focuses on current harms such as privacy, security, discrimination and disinformation, involves more than 25 government agencies. It is the most comprehensive attempt to date to regulate the world’s biggest AI companies. It will prove far more consequential than the worthy but toothless Bletchley Declaration, agreed this week by 28 countries and the EU.

This executive order was “imperfect, but comes far closer to laying out real policy”, wrote Gary Marcus, chief executive of the Centre for the Advancement of Trustworthy AI.

America’s domination of AI research and development is hard to overstate. According to the 2023 State of AI report, the US produced more than 70 per cent of the most cited AI research papers over the past three years, followed by China and the UK. Led by Google, Meta and Microsoft, US-based companies and universities account for nine of the top 10 research institutions. The one exception, London-based DeepMind, was bought by Google in 2014.

The wealth, power and ambition of the three US tech giants remain unmatched. In 2022 the corporate trio generated revenues of almost $600bn; they have a combined market value of $5tn, more than double the value of the top 100 UK-listed companies ($2.4tn). The most significant generative AI start-ups are also based on the west coast, notably OpenAI, which launched ChatGPT last year, and Anthropic.

For the moment, there are few constraints on how they develop AI, other than their own codes of ethics. But Rishi Sunak, UK prime minister, said that international safety institutes would now test frontier models. The US confirmed it would emulate the British example by setting up its own safety institute.

Speaking at the US embassy in London, Harris, who has taken the lead in formulating the administration’s AI policy, explained how Washington would scrutinise the big companies far more intrusively.

“History has shown that in the absence of regulation and strong government oversight, some tech companies choose to prioritise profit over the wellbeing of their customers, the safety of their communities and the stability of our democracies,” she said.

Yet the administration will find it tough to turn its executive order into legislation, given the partisan divide in Washington. Meanwhile, China and the EU are adopting full-blown legislation of their own.

Even so, many participants hailed the Bletchley summit, with its focus on AI’s extreme risks, as a success. The UK government did a remarkable job of convening many of the world’s top researchers alongside political leaders. US and Chinese officials shared a stage.

And the summit’s main goal was to start a global conversation about frontier AI models: here, it certainly succeeded. Whereas few politicians were talking about AI a few years ago, it is now hard to shut them up on the subject. The UK is keen for the Bletchley summit to leave a lasting legacy. South Korea and France will host two further safety summits over the next year.

The biggest unanswered question from Bletchley, though, was whether profit-seeking companies are the best organisations to pursue artificial general intelligence, the point at which computers may supersede human intelligence across every domain. To pursue that mission, some experts have called for a collaborative international research agency akin to Cern.

It was striking that Demis Hassabis, co-founder of DeepMind, expressed doubts about whether the Silicon Valley ethos, typified by the mantra “move fast and break things”, was the best approach. “AI is too important a technology, I would say, too transformative a technology to do it in that way,” he told the BBC. “We should be looking at the scientific method.”

The agenda for the next AI safety summit should start right there.
