Two topics dominated the World Economic Forum annual meeting in Davos last week: Donald Trump and artificial intelligence. Of the two, the latter was the more interesting and almost certainly the more significant. Much of the discussion was devoted to DeepSeek, the surprise Chinese upstart. Yet we have merely learned that knowledge spreads: no country is going to monopolise these new technologies. This has surprised markets. With new technologies, such “surprises” are not surprising. But it does not change the big question, which is what advancing machine intelligence means for us all.
Human beings are both social and intelligent. This combination is their “killer app”. It has allowed them to dominate the planet. Human intelligence invented the general-purpose technologies that shaped the world, from the taming of fire to the creation of computers. But, with computers that think, this might change. Blaise Pascal, the 17th-century French mathematician and philosopher, said that “Man is but a reed, the most feeble thing in nature, but he is a thinking reed.” Is that uniqueness now coming to an end?
In Davos, I attended two fascinating discussions of the rewards and risks of advances in AI. One was an interview with Sir Demis Hassabis, co-founder of Google DeepMind and joint recipient of the Nobel Prize in chemistry, conducted by Roula Khalaf, editor of the FT. The other was an interview with Dario Amodei, co-founder and chief executive of Anthropic and author of Machines of Loving Grace, conducted by Zanny Minton Beddoes, editor of The Economist.
The interview with Hassabis underlined the recent advances in our ability to do scientific analysis, especially in biology. More than 2mn researchers use AlphaFold, the programme DeepMind developed, he said. “We folded all proteins known to science, all 200mn . . . [T]he rule of thumb is it takes a PhD student their entire PhD to find the structure of one protein. So 200mn would have taken a billion years of PhD time. And we’ve just given that all to the world, for free.” This, he elaborated, is “science at digital speed”. The possibility that has opened before us, then, is of a huge acceleration in medical progress. Indeed, we might see the next 50 to 100 years of normal progress in five to 10 years.
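That arithmetic is easy to check, if one assumes (my assumption, not his) that a typical PhD runs to about five years: 200mn structures × 5 years apiece ≈ 1bn PhD-years, which is indeed the “billion years of PhD time” Hassabis cites.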
Broadly, argued Amodei, we can envisage AI as “a country of geniuses in a datacentre”, one that the Chinese might just have made even cheaper than before. Yet are these truly geniuses? My test would be whether, given knowledge of all physics up to 1906, but nothing afterwards, AI would be able to produce Einstein’s general theory of relativity.
It seems plausible that the impact of such problem-solving capacity, whether “genius” level or not, should be remarkable. It could, among other things, accelerate improvements in knowledge and so productivity growth and the spread of prosperity. Both are desirable. In recent decades, increases in “total factor productivity” — the best measure of technical progress — have been modest. Moreover, huge numbers still live in extreme poverty and, depressingly, progress has slowed.
Yet it is also evident that accelerated progress could create difficulties. The structure of the labour market might change massively, for example, with a sharp fall in demand for workers whose main asset is trained, but largely routine, intelligence. Forecasts of such effects vary. A 2023 paper by Erik Brynjolfsson and Gabriel Unger notes that, as has been true throughout the computer revolution, the effects on productivity might be modest. Yet this time just might be different, with soaring productivity, but correspondingly large and disruptive economic and social changes. Again, depending on how society responds, successful AI might lead to “techno-feudalism”, with even greater concentrations of wealth. The invention of vast numbers of new treatments might greatly increase the costs of healthcare, as well as the costs of coping with much-extended lives, even if, on balance, they are healthier ones. Are people ready to live alongside their great-great-grandparents? Thus, apparently good things might create real challenges.
Beyond this, the development of the envisaged AI creates big risks. How does one control its use by rogue actors, including hostile states, terrorists and mass murderers? What moral judgments does one allow AI to make in warfare? How does one control the use of AI in surveillance? Will “big brother” be watching us forever more? Again, what do we do about the manufacture of fakes and fake news? How does freedom survive all these threats?
Hassabis is clear that we need effective global limits on the use of AI. In an age of broken international co-operation and scorn for the very idea of a “rules-based international order”, will China and the US work together on making AI safe? It seems unlikely, not least because they have different views on how such technologies should be used.
Back in 2015, I wrote a generally sceptical article on the (modest) likely impact on productivity of new technologies. The next few years might at last prove me wrong. Yet I also noted that if we were instead approaching “the singularity” — artificial intelligence surpassing all human intelligence — everything must change.
One of the great ideas in Frank Herbert’s Dune series is that in the distant past (our future) humanity waged a successful jihad against machines that think. Thereafter, humans had to become superhuman. A leading character explains that “Humans had set those machines to usurp our sense of beauty, our necessary selfdom, out of which we make living judgments. Naturally, the machines were destroyed.”
That concern might prove wise. But I am realistic: AI is out of Pandora’s Box.
martin.wolf@ft.com