The route to AI regulation is fraught but it’s the only way to avoid harm


The writer is international policy director at Stanford University’s Cyber Policy Center and special adviser to the European Commission

The OpenAI saga made it abundantly clear that democratic institutions ought to govern powerful AI systems. But contrary to superficial statements suggesting one can simply be pro- or anti-regulation, the devil is in the details. There are battles over the direction AI regulation should take and over which trade-offs are acceptable. The political discussion over what to include in or exclude from the EU AI Act is unfolding as we speak.

This week, EU negotiators will hammer out the details in what is scheduled to be the final round of negotiations on the landmark Act governing AI, the first comprehensive legislation of its kind among democracies. But despite initial political alignment around the Act, the entente seems to be unravelling at the last minute.

Originally, the Act was designed to ensure proportionate risk mitigation across a range of AI functions. For instance, a company offering an AI service to screen job or university applicants would have to take steps to prevent its system from unduly harming individuals' access. Facial recognition systems would be subject to more stringent checks because of privacy concerns, and some of the most harmful uses of AI, such as social credit scoring, would be banned altogether.

But when ChatGPT was released and public understanding of general-purpose AI grew, so did calls to regulate such systems through the AI Act. Foundation models are the technology that can both perform standalone tasks and serve as the basis for many other AI applications. They can generate text, images and sound, or be trained to perform anything from facial recognition to content moderation. Given this boundless potential, leaving foundation models out of the law would be a missed opportunity to prevent harm. The White House Executive Order on AI and the G7 Code of Conduct both address foundation models.

However, France, Italy and Germany are pushing back on this idea: they worry that if foundation models are regulated, their nascent domestic AI companies will never catch up with US giants. Given the models' far-reaching impact, it is understandable that countries such as Germany and France want to develop sovereign AI capabilities, but they are still wrong to oppose regulating foundation models.

Imagine how an undetected flaw in a foundation model could ripple out to the thousands of downstream users building apps on top of it. Is it fair to hold one business to account for using the product of another? No wonder associations of small and medium-sized enterprises are worried about the compliance costs falling on the smallest actors in the value chain.

It makes better sense for the most powerful to be the most responsible. If companies like OpenAI, Anthropic and Google DeepMind are left to their own devices, market concentration, privacy erosion and safety risks will never be independently scrutinised. Why not tackle the problem at its root and build an ecosystem of trust? Not only is it much cheaper overall for the most powerful companies to be subject to oversight, it is also almost impossible to retrospectively untangle the spaghetti of data, models and algorithmic adjustments further down the chain.

Companies, too, should benefit from verified oversight in a sector that remains plagued by problems of safety, discrimination, fakes and hallucinations (a nice way of saying AI lies).

EU negotiators involved in the AI Act are suggesting a tiered approach to curbs on foundation models, so that the level of regulation scales with the impact of the models. That means only a small number of companies would initially fall within its scope, allowing others to grow before being subject to regulation. And as foundation models are still evolving, the new rules should leave room for adjustment as the technology changes.

It is not surprising that the final phases of the negotiations are the trickiest; this is always the case. Those worrying that the tensions may spell the end of the AI Act seem unaware of how legislative battles unfold. A lot is at stake for all EU stakeholders, and the world is watching to see what law the EU ends up voting for. Let's hope political leaders manage to agree on a future-proof set of rules. With big tech power should come big responsibility, no matter whether French, American or German engineers built the system.
