The writer is a barrister and the author of ‘The Digital Republic: Taking Back Control of Technology’
Last month’s boardroom carnage at OpenAI — in which the chief executive, Sam Altman, was toppled and reinstated in less than a week — showed the company to be riven by the same question that is set to dominate politics in 2024: to what extent should increasingly capable AI systems be curtailed by government regulation?
This is ultimately a question for politicians rather than corporations. And the UK government has recently found its answer: safety first. The regulatory landmarks of 2023 were the Online Safety Act and the AI Safety Summit. “Safety” was the top priority for both. The act followed the suicide of 14-year-old Molly Russell, who had been served a torrent of online content related to self-harm. It aimed to make Britain “the safest place in the world to be online”. The summit got the regulatory ball rolling in a similar direction for AI. The Bletchley Declaration contained seven references to “safety”, and the UK’s AI Safety Institute is now actively recruiting.
It makes sense for the government to play the role of safety monitor. Powerful technologies pose risks, and these must be managed collectively rather than leaving people to fend for themselves. Just as we aren’t asked to inspect the wiring of an aeroplane before we board, we shouldn’t be invited to “consent” to dazzling technologies whose workings we scarcely understand.
But the safety paradigm will only take us so far. In the future, regulation will not be as simple as promoting the good bits of new technologies and mitigating the bad. That’s because it will be hard to agree about what is good or bad in the first place.
Take, for example, the dispute that recently rocked the US movie industry. Screenwriters and actors went on strike, partly to secure assurances that AI would not replace them. Their concerns were well-founded. AI systems can increasingly generate prose that rivals that of the most talented humans, and AI-generated effects will increasingly be able to supplant the faces, bodies and voices of human actors.
Below the surface, however, this was more than an industrial dispute. It revealed a deeper disagreement about the nature of film itself. Is the point of cinematic art to provide a living for people in the film industry? Or is it to provide stimulation and joy for consumers? When these aims collide, which matters more?
Imagine a future in which families can simply command their TV to generate a Hollywood-quality film. They could choose the genre and optimise for comedy, violence or sex. On the current technological trajectory, this is not far-fetched.
Some future connoisseurs might still prefer human-made productions, just as some today prefer the films of the 1960s to contemporary CGI. But many others would see this as a revolutionary democratisation of cinema. Instead of having to order from Hollywood’s menu, people could summon forth their own aesthetic universes. Our grandchildren may well marvel that, in 2023, films were still handcrafted by humans, just as we find it eccentric that, in the past, every wheel was made bespoke by a wheelwright.
A similar debate is playing out in the world of literature. Margaret Atwood and Stephen King are concerned that their works are being used to train AI systems, which can (in Atwood’s phrase) “glurp forth” prose on command. Their frustration is understandable, particularly when they receive no royalties.
But from humanity’s perspective, would it not be remarkable to be able to generate new novels in the prose of Atwood, masterpieces in the style of Mozart, or films in the manner of Hitchcock — or perhaps new artworks combining the talents of all three — long after those geniuses have departed the Earth? Is art’s purpose merely to venerate and compensate artists, or to provoke aesthetic stimulation and cultural advance?
These aren’t easy questions — and that’s the point. I rather doubt the board of OpenAI has the answers to them. And they can’t simply be answered by reference to “safety” either. These debates are about values. They ask us to choose, in Amos Oz’s words, between “right and right”.
As AI improves, no field of human activity will be untouched by this kind of controversy. We will disagree about the use of non-human “voices” in political debate. We will fret about machines that form emotional or erotic bonds with their users. In law, medicine, education and war, new ethical problems will erupt and demand resolution.
Regulating technology is about safety, but it is also about the kind of civilisation we wish to create for ourselves. We can’t leave these big moral questions for AI companies to decide.