When the South Korean political activist Kim Dae-jung was jailed for two years in the early 1980s, he powered his way through some 600 books in his prison cell, such was his thirst for knowledge. One book that left a lasting impression was The Third Wave by the renowned futurist Alvin Toffler, who argued that an imminent information revolution was about to transform the world as profoundly as the preceding agricultural and industrial revolutions.
“Yes, this is it!” Kim reportedly exclaimed. When later elected president, Kim referred to the book many times in his drive to turn South Korea into a technological powerhouse.
Forty-three years after the publication of Toffler’s book, another work of sweeping futurism has appeared with a similar theme and a similar name. Although the stock in trade of futurologists is to highlight the transformational and the unprecedented, it is remarkable how much of their output appears the same.
The chief difference is that The Coming Wave by Mustafa Suleyman focuses more narrowly on the twin revolutions of artificial intelligence and synthetic biology. But the author would surely be delighted if his book were to prove as influential as Toffler’s in prompting politicians to action.
As one of the three co-founders of DeepMind, the London-based AI research company founded in 2010, and now chief executive of the AI start-up Inflection, Suleyman has been at the forefront of the industry for more than a decade. The Coming Wave bristles with breathtaking excitement about the extraordinary possibilities that the revolutions in AI and synthetic biology could bring about.
AI, we are told, could unlock the secrets of the universe, cure diseases and stretch the bounds of imagination. Biotechnology can enable us to engineer life and transform agriculture. “Together they will usher in a new dawn for humanity, creating wealth and surplus unlike anything ever seen,” he writes.
But what is striking about Suleyman’s heavily promoted book is how the optimism of his will is overwhelmed by the pessimism of his intellect, to borrow a phrase from the Marxist philosopher Antonio Gramsci. For most of history, the challenge of technology has been to unleash its power, Suleyman writes. Now the challenge has flipped.
In the 21st century, the dilemma will be how to contain technology’s power, given that the capabilities of these new technologies have exploded and the costs of developing them have collapsed. “Containment is not, on the face of it, possible. And yet for all our sakes, containment must be possible,” he writes.
Suleyman’s worry list is long and partly interconnected. AI can empower disinformation and cyberwarfare on an industrial scale, direct swarms of killer drones against civilian targets and result in massive economic disruption and job churn. Synthetic biology can be exploited to create deadly low-cost pathogens. Governments have largely failed to understand these technological threats, let alone prepare their societies for the scale of this turmoil.
But Suleyman admits the tech industry has also failed to pay due care and attention to the collateral damage caused by its products, which has contributed to a widening of economic inequality and an erosion of trust in democracy. “I regard the often dismal picture painted in the following chapters as a titanic failure of technology and a failure of people like me who build it,” he writes. But having adopted the ethos of the West Coast, where he now lives, his failure is clearly regarded as a badge of honour rather than a disqualification to be taken seriously.
Given the technological wave that is about to crash over us, the ordinary reader might conclude, like many technologists in a recent open letter, that we should pause development of leading-edge generative AI models until we are better prepared. “Absolutely not,” Suleyman responds. “Make no mistake: standstill in itself spells disaster.” Solving climate change or raising living standards or improving education is apparently only going to happen if new technologies are part of the package, he asserts.
Filled with sweeping generalisations and extreme prognostications, parts of Suleyman’s book read like a ChatGPT rewrite of Yuval Noah Harari’s history of the future, Homo Deus.
And, in truth, most of the issues explored in the early sections of this book have been examined elsewhere with greater insight and in less extravagant prose. If you want a more nuanced view of the challenges of controlling AI, then try Stuart Russell’s Human Compatible (2019). For gene editing, read A Crack in Creation (2017) by the Nobel laureate Jennifer Doudna. And for the geopolitical implications of AI, the veteran diplomat Henry Kissinger’s The Age of AI (2021) is a more authoritative source.
Where Suleyman’s book is most valuable is in the concluding section, in which he outlines 10 steps towards possible containment. His suggestions would provide a useful primer for any official attending the British government’s forthcoming conference on AI governance at Bletchley Park, even if they marginalise some of the issues flagged by ethics researchers.
Some of Suleyman’s recommendations are technical, sensible and readily implementable. At present, fewer than 1 per cent of the world’s 30,000-plus AI researchers work on safety issues. Tech companies and universities should indeed invest more in this area, as Suleyman urges them to do. It would also help if greater efforts were made to scrub the data sets used to train AI models for inherent societal biases, and to focus more on the explainability and corrigibility (the ability to correct errors) of these models. If possible, it would make sense to build “bulletproof off switches” into synthetic biology, robotics and AI systems.
The industry would also benefit from external, expert auditors. Profit-seeking companies, which are driving the technology, might also take a broader view of their societal responsibilities if they were to reincorporate as “global interest companies” — a structure, he intriguingly notes, that DeepMind once considered.
Governments could play an important role by restricting access to the leading-edge chips that power state-of-the-art AI models and banning open-source AI models outright for fear they may be abused by bad actors. Unlike many US-based technologists, Suleyman welcomes the EU’s forthcoming AI Act, flawed as it is, for having the right focus and ambition in classifying risks to users. International treaties will be needed to write new rules of war. Civil society will be instrumental in holding the tech companies to account and shaping new norms for gene editing.
There is no shortage of activity in some of these areas. The OECD’s policy observatory counts no fewer than 800 AI policies across 60 countries in its database. But Suleyman is right to stress the importance of co-ordination and coherence. And that revolves around human agency and the messy art of politics.
How AI will itself transform politics is the subject of The Handover by David Runciman, professor of politics at Cambridge university. Runciman’s ingenious argument is that AI does not represent anything fundamentally new when viewed through the prism of political science. The process of ceding power to inhuman external entities has been going on for centuries. “For hundreds of years now we have been building artificial versions of ourselves, endowed with superhuman powers and designed to rescue us from our all-too-human limitations,” he writes. “The name for these strange creatures is states and corporations.” In this sense, he argues, these two institutions are the forerunners of AI.
In describing the Leviathan, the English philosopher Thomas Hobbes was the first to identify the state as a “mortal God” or an “artificial man”. Or, to use the modern parlance of AI, the state is an artificial neural network, suggests Runciman. States and corporations can be best viewed as social machines. Not for nothing do we talk about the machinery of government that outlasts elected governments.
In Runciman’s telling, these inhuman agencies have played an extraordinary role in helping humanity realise its collective goals, adjudicate between competing societal interests and satisfy consumer demands. It was these institutions that enabled the great acceleration of the industrial revolution. They have since become repeatable, mechanical and adaptable. “One way to sum up what changed is that we swapped an arbitrary existence for an artificial one,” he writes.
But, as we know, states and corporations can also be turned to terrible, or perverse, outcomes, especially when their survival comes under threat. Consider the murderous intent of the Nazi and Soviet states or the fact that 100 companies account for 71 per cent of global industrial greenhouse gas emissions generated since 1988.
There is much to enjoy about Runciman’s book. It is certainly a well-informed and provocative read about the essence of political power. But in parts it reads like a Cambridge seminar that has run out of control after a few too many sherries. Runciman argues that states and corporations brought about humanity’s First Singularity, borrowing the theoretical term used by technologists to describe a runaway superintelligence and a loss of human control.
AI might therefore be better described as the Second Singularity. However, the critical distinction, which Runciman does not fully explore, is that states and corporations are still animated by human actors and cannot exercise agency on their own. The fear about the Second Singularity, if it ever happens, is that an AI might assume so much agency as to render humans obsolete.
Both books, though, suffer from a massive blind spot in that China, one of the world’s two technology superpowers, is mostly offstage. It is clear that the Chinese Communist party has a very different model of development to western democracies and is intent on keeping the state, corporations and AI under its control.
That shortcoming is addressed in a thoroughly researched academic tome by Anu Bradford in Digital Empires, which compares and contrasts the three competing regulatory regimes in the US, Europe and China.
As Bradford describes it, the US has pioneered a largely market-driven model of technological development, encouraging the emergence of globally dominant companies. By 2021, the combined market capitalisation of Apple, Alphabet, Microsoft and Meta exceeded the value of the 2,000 companies listed on the Tokyo Stock Exchange. But Bradford, a professor at Columbia Law School, is unsparing in her criticisms of the failings of this approach and pleads for stronger privacy, data protection and antitrust laws.
She is also critical of China’s state-driven regulatory regime. In her view, China has succeeded in converting the internet from a tool for advancing democracy into one that services autocracy and has been exporting that model abroad. Beijing has now prioritised the development of deep tech, such as AI, quantum computing and synthetic biology, to spur its economic development. But it also uses facial recognition technology and data processing techniques to surveil its citizens at mass scale to maintain social control.
Between these overly permissive and overly oppressive regimes, the Finnish-American Bradford contends that the EU has mostly got it right in emphasising a rights-driven approach and setting the global norm for data privacy and, soon, AI regulation. She strongly rejects the argument that the EU’s regulation has stifled technological innovation. Europe’s poor record in building globally relevant tech companies owes more to the incompleteness of the single market, its lack of dynamic capital markets and a poverty of ambition, she argues with much merit.
Bradford’s big hope is that the US and Europe can blend their two approaches to produce a tech industry that is both dynamic and appropriately regulated. The global battle for the future will, she argues, be waged between techno-democracies and techno-autocracies. That is not a battle that the US or the EU — or any liberal democracy — can afford to lose.
The Coming Wave: AI, Power and the Twenty-First Century’s Greatest Dilemma by Mustafa Suleyman with Michael Bhaskar, Bodley Head £25 / Crown $32.50, 352 pages
The Handover: How We Gave Control of Our Lives to Corporations, States and AIs by David Runciman, Profile Books £20 / Liveright $30, 336 pages
Digital Empires: The Global Battle to Regulate Technology by Anu Bradford, Oxford University Press £30.99 / $39.99, 352 pages
John Thornhill is the FT’s innovation editor