How to tame AI’s wild frontier

In late 1943, Colossus Mark 1, the world’s first programmable computer, arrived at Bletchley Park, where Alan Turing and others were engaged in a secret mission to break German ciphers. Eighty years on, politicians and tech executives will descend on the Buckinghamshire estate next week to discuss another world-changing innovation, of which Turing was an early theorist: artificial intelligence. They will focus on some of the technology’s most cutting-edge forms, and the threats they could pose — from triggering social unrest by increasing unemployment to helping devise lethal bioweapons.

Britain’s Conservative prime minister Rishi Sunak has seized on AI as an area where the UK can potentially project global influence, unveiling plans this week to create what he called the “world’s first AI safety institute” in Britain. This would investigate the capabilities and dangers of new types of AI, and share its work with the world.

Yet Sunak’s Bletchley Park summit is a worthwhile endeavour whose importance goes beyond any one country. The debate over AI safety to date has been a messy tug of war between private industry, civil society, government departments and regulators. The meeting — to which China has rightly been invited, despite misgivings, particularly from the US — is a chance to start building international collaboration and understanding around the potentially epochal technology.

The UK premier’s view that there should not be a “rush to regulate” the sector seems sensible. True, AI technology is developing so fast that new capabilities and applications — as well as dangers — are constantly emerging, and could make rules outdated almost as soon as they are adopted. But given the risks in frontier AI — which innovators themselves admit to — the Bletchley Park summit is a good place to begin the discussion on the most effective way to regulate.

The meeting is expected to launch a process by which governments, public servants, industry players and scientific researchers work together to understand better what is happening at the frontiers of AI technology, and the best approaches to safeguarding it. There is much groundwork to be laid, including defining key terms. “Existential risks,” for example, could refer to anything from mass misinformation to godlike AI taking over.

The summit needs to ensure that the tech industry plays its part in helping to flag and manage hazards — since its own experts are often best placed to do so. Sunak will propose a global panel of experts, nominated by countries and organisations attending the summit, to publish a report on the “state of AI”. Such a body, modelled on the UN’s Intergovernmental Panel on Climate Change, has also been backed by leading tech executives. The industry’s role, however, must not become a way for companies to escape responsibility for harms technology may cause.

Above all, the meeting should begin to establish an international framework for regulation that avoids a further “Balkanisation” of rules and regulatory capture. The EU has characteristically taken a prescriptive approach in its AI Act, expected to be approved by the end of the year, which divides potential uses into risk categories. China has adopted several national regulations on aspects of AI. The White House has secured voluntary commitments from tech companies on managing risks.

The discourse around AI can often feel highly polarised between full-blown techno-optimism and doom-filled proclamations about the end of humanity. As with every technology before it, the truth lies somewhere in the middle, even if the stakes are higher. It would be a shame to stifle the potential good, just as it would be dangerous to regulate without knowing the biggest risks. Overseeing AI may be an enigma, but it can be cracked through collaboration.
