On June 5 1944, a courier from Britain’s Bletchley Park code-breaking centre interrupted a D-Day planning session and handed a top-secret message to General Dwight Eisenhower. After reading the slip of paper, the Supreme Allied Commander in Europe declared: “We go tomorrow.”
The message contained a decrypted German radio transmission from Adolf Hitler telling his top commander in France that the imminent Allied invasion in Normandy was a feint; the subsequent delay in redeploying German troops proved crucial in allowing Allied forces to secure their beachheads. The technology that enabled the decryption was the world’s first electronic programmable computer. Called Colossus, it was designed by Tommy Flowers, an unassuming English Post Office engineer.
That episode was the first example of a computer having a decisive impact on world history, Nigel Toon suggests in his sparky new book on artificial intelligence, How AI Thinks. But it was only a foretaste of what followed; in the subsequent eight decades, computers have become exponentially more powerful and have extended their reach into almost every aspect of our lives.
A revolution in computer hardware has been followed by a similar revolution in software, most recently the rapid development of AI. Since the San Francisco start-up OpenAI launched its ChatGPT chatbot in November 2022, millions of users have experienced first-hand the near-magical powers of generative AI. At the click of a mouse, it is now possible to conjure plausible Shakespearean sonnets about a goldfish, generate fake photographs of the Pope in a puffer jacket, or translate code from one programming language into another. All three books reviewed here highlight the enormous promise of the technology, but also warn of the dangers of its misuse.
Toon is a card-carrying enthusiast, arguing for benefits in fields as varied as weather forecasting, drug discovery and nuclear fusion. “Artificial intelligence is the most powerful tool that we have ever created,” he writes. “Those who take the time to understand how artificial intelligence thinks will end up inheriting the Earth.”
As the co-founder of the British semiconductor start-up Graphcore, which designs chips for AI models, Toon works at the forefront of the technology, yet even he admits to being constantly surprised by how fast things have developed. How AI Thinks provides a brisk and well-grounded introduction to how AI has evolved since the term was coined in 1955, how it is used today and how it may (we hope) be controlled.
Modern semiconductor devices are the most advanced products that humans have ever made. Since the first integrated circuit was invented in 1960, there has been a 25bn-fold increase in the number of transistors that can fit on a single chip. “If your car had improved this much, you would now be able to easily travel around 200 times the speed of light,” Toon writes.
He is also adept at explaining the ensuing software revolution that enabled AI researchers to move beyond rules-based computing and expert systems to the pattern recognition and “learning” capabilities of neural networks that power our AI models today. When let loose on the vast mass of data that has been generated since the creation of the World Wide Web, these models can do wondrous things. By 2021 all our connected devices were generating about 150 times more digital information a year than had ever existed before 1993.
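To see that shift in miniature, consider a toy sketch (my illustration, not an example from Toon's book, with invented data): a hand-written rule sits alongside a single artificial neuron, a perceptron, that learns the same rule purely from labelled examples rather than being told it.

```python
# Toy contrast between rules-based computing and machine "learning".
# Everything here is illustrative and invented; it is not taken from
# How AI Thinks or from any production system.

# Rules-based approach: a programmer writes the decision logic by hand.
def rule_based(x1: float, x2: float) -> int:
    """Fire if the inputs sum to more than 1: an explicit, human-written rule."""
    return 1 if x1 + x2 > 1.0 else 0

# Learning approach: a one-neuron "perceptron" starts knowing nothing and
# adjusts its weights whenever it misclassifies a labelled example.
def train_perceptron(examples, epochs=50, lr=0.1):
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = label - pred            # 0 if right, +1/-1 if wrong
            w1 += lr * err * x1           # nudge the weights toward the answer
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

if __name__ == "__main__":
    # Labelled examples on a 5x5 grid over [0, 1] x [0, 1]; the labels come
    # from the rule, but the perceptron never sees the rule itself.
    data = [((i / 4, j / 4), rule_based(i / 4, j / 4))
            for i in range(5) for j in range(5)]
    w1, w2, b = train_perceptron(data)
    agree = sum((1 if w1 * x1 + w2 * x2 + b > 0 else 0) == label
                for (x1, x2), label in data)
    print(f"learned: w1={w1:.2f}, w2={w2:.2f}, b={b:.2f}")
    print(f"matches the hand-written rule on {agree} of {len(data)} points")
```

The point of the sketch is the division of labour: in the first function the intelligence lives in the programmer's head; in the second it is extracted, statistically, from the data, which is what lets neural networks loose on the post-web flood of information Toon describes.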
Yet no matter how powerful computers have become, they still struggle to match the extraordinary processing power of the 86bn neurons in the typical human brain. Humans have a phenomenal ability to generalise from fragments of data and contextualise seemingly random information. Toon recalls sitting in the back of a black London cab in 2021 when the driver said: “Did you hear about Ronaldo? They’ll play much better now, don’t you think? I heard that it was the governor that made it happen. City must be really upset.”
Intuitively, Toon understood that the driver was referring to the world-famous footballer Cristiano Ronaldo having just been lured back to Manchester United by the former manager Sir Alex Ferguson, to the annoyance of the club’s rivals Manchester City.
At least for the moment, an AI system would struggle to make sense of such banter. Driving, as Toon explains, is another good example of the flexibility of human intelligence. A learner driver takes about 20 hours of tuition to become consciously competent. By contrast, Waymo, Alphabet’s autonomous driving company, clocked up 2.9mn driving miles in California in 2022 and had still not achieved a comparable level of competence.
Where Toon is less sure-footed is in exploring the regulatory and policy debates that surround the use of AI. This is where Verity Harding picks up the baton in AI Needs You. A former special adviser to Nick Clegg when he was Britain’s deputy prime minister and an ex-head of policy at Google DeepMind, Harding is bilingual in politics and tech. Her aim is to examine how important technologies have been regulated in the past, as a guide to how we might best control AI in the future.
The three high-stakes international examples she chooses — the cold war space race, in vitro fertilisation and the diffusion of the internet — all contain important lessons, shedding light on how we should approach AI. Harding hails the United Nations Outer Space Treaty of 1967, which established space as a “province of all mankind”, as a remarkable example of international co-operation. Signed when tensions between the US and the Soviet Union were near their height, the treaty, dubbed “the Magna Carta of space”, prevented the militarisation of celestial bodies and ensured that no nation could claim sovereignty over them.
For Harding, the treaty holds three lessons. Political leadership matters, and courageous politicians can forge mutually beneficial international agreements, even during times of geopolitical stress. Rivalrous powers can set limits on the worst excesses of warfare. And science can — and should — be used to encourage international co-operation. In that sense, AI researchers should work on projects that benefit all humanity and not just pursue “techno-nationalistic fence-building”.
The debates about embryo research and in vitro fertilisation in the 1970s and 1980s raised very different issues. But in many respects, Harding argues, they anticipated a lot of the ethical, moral and technical issues that surround AI. The philosopher Mary Warnock, who chaired a committee to consider these dilemmas, did a remarkable job in delineating clear moral lines and practical avenues for regulation in her report published in 1984. These rules have since enabled some 400,000 IVF babies to be born in the UK, and encouraged the development of a vibrant life sciences industry. Contrary to the familiar trope that regulation kills innovation, Harding argues that in fact the political, moral and legal clarity provided by the Warnock commission spurred investment and economic growth.
Harding’s third example is the extraordinarily influential but little-known technocratic institution called the Internet Corporation for Assigned Names and Numbers (Icann). In maintaining the “plumbing” of the internet and resisting the intrusions of nation states and powerful private companies, Icann has preserved the internet as an open and dynamic space. “It’s a trust-based, consensus-based, global organisation with limited but absolute power. In an age of cynicism and bitter, divisive politics, it’s a marvel,” she writes.
Describing her book as a “love letter” to the unglamorous, painstaking work of policymaking in a democracy, Harding urges politicians — and civil society — to get involved in debates about the uses of AI and help to shape the future in a positive way. Martin Luther King Jr’s 1964 Nobel Peace Prize lecture on the need for moral intervention should be taped to every tech CEO’s wall: “When scientific power outruns moral power, we end up with guided missiles and misguided men.”
The authors of As If Human are also concerned about the human dimension of technology and ensuring that machines do our bidding and do not run out of control. Sir Nigel Shadbolt, a professor of computer science at Oxford university, and economist and former civil servant Roger Hampson explore the ethics of AI in their elegant and erudite book. Their contention is that we should always treat machines as if humans were attached and hold them to the same, if not higher, standards of accountability: “We should judge them morally as if they were humans.”
The pair argue that we need better technological tools to manage our personal data as well as new public institutions, such as data trusts and co-operatives, that can act as stewards of the common good. “It is outrageous that a possibly civilisation-changing technology has been launched at the behest of big corporations, with no consultation with the public, governments, or international agencies,” they write.
Concluding with seven “proverbs”, they suggest how we should approach our AI future, emphasising the need for transparency, respect and accountability. Their starting principle is that “a thing should say what it is and be what it says” and always remain accountable to humans. But one proverb in particular encapsulates the spirit of their book: “Decisions that affect a lot of humans should involve a lot of humans.”
Although these three books differ in focus, tone and emphasis, they come to similar conclusions. All these authors stress the benefits that AI can bring if used wisely, but worry about the societal stresses that will result from rapid or reckless deployment of the technology. They all downplay, if not dismiss, fears over existential risk that some AI researchers have flagged, for the moment viewing them as a speculative concern rather than a here-and-now worry. Of more concern to them is the excessive, and unprecedented, concentration of corporate power in the hands of a small coterie of West Coast executives.
The overwhelming message that emerges from these books, ironic as it may seem, is a newfound appreciation of the collective powers of human creativity. We rightly marvel at the wonders of AI, but still more astonishing are the capabilities of the human brain, which weighs 1.4kg and consumes just 25 watts of power. For good reason, it has been called the most complex object in the known universe.
As the authors admit, humans are also deeply flawed and capable of great stupidity and perverse cruelty. For that reason, the technologically evangelical wing of Silicon Valley actively welcomes the ascent of AI, believing that machine intelligence will soon supersede the human kind and lead to a more rational and harmonious universe. But fallibility may, paradoxically, be inextricably intertwined with intelligence. As the computer pioneer Alan Turing noted, “If a machine is expected to be infallible, it cannot also be intelligent.” How intelligent do we want our machines to be?
How AI Thinks: How We Built It, How It Can Help Us, and How We Can Control It, by Nigel Toon, Penguin £22, 320 pages
AI Needs You: How We Can Change AI’s Future and Save Our Own, by Verity Harding, Princeton University Press £20, 288 pages
As If Human: Ethics and Artificial Intelligence, by Nigel Shadbolt and Roger Hampson, Yale University Press £20, 272 pages
John Thornhill is the FT’s innovation editor