Why it is too soon to call the hype on AI’s productivity promise


Even the smartest experts have a hard time predicting the future of technology. Consider the example of Bob Metcalfe, the inventor of Ethernet who, in 1995, boldly forecast that the internet would experience a catastrophic collapse — or a “gigalapse” — the following year.

But, when he got it wrong, Metcalfe literally ate his own words. To chants of “Eat, baby, eat!” at a tech industry event, Metcalfe ripped up a copy of his future-gazing InfoWorld column, fed it into a blender, and consumed the resultant pulp.

Metcalfe’s unhappy experience — accepted with good grace and humility — is one of dozens of examples of erroneous predictions contained in the illuminating online resource that is the Pessimists Archive. Spanning the invention of the camera, electricity, aeroplanes, television and the computer, the archive records the many fanciful ways in which successive generations of technological experts have been dead wrong.

It is worth browsing the archive when considering the torrent of predictions about the wonder technology of our age: artificial intelligence.

The only certain prediction is that the vast majority of these predictions will be overblown. Those optimists who forecast that AI will imminently usher in a glorious new era of radical abundance seem likely to be disappointed. But those pessimists who predict that AI will soon lead to human extinction are no less likely to be wrong. Then again, no one will be around to congratulate them if they are right.

With AI, it is perhaps easier to establish the direction of travel than the speed of the journey. Just as the industrial revolution magnified brawn, so the cognitive revolution is magnifying brain. AI is best viewed as the latest general-purpose technology that can be applied to an infinite number of uses, says Arkady Volozh, founder of the Amsterdam-based start-up Nebius, which builds and runs AI models for customers across a range of industries. 

“AI is like electricity or computers or the internet,” he says. “It is like a magic powder that can be used to improve everything. More and more functions will be automated more efficiently. Just as an excavator is more powerful than a person with a shovel, you can automate routine operations with AI.”

However, with previous general-purpose technologies, such as railways and electricity, it often took decades before they boosted productivity. New infrastructure has to be built. New ways of working have to be adopted. New products and services have to be launched.

In the meantime, the adoption of new technologies can actually suppress productivity for a while as companies and their employees adapt to new ways of working. Indeed, new technologies can even produce an increase in unproductive work: how many pointless emails have you read today?

Some economists have described this phenomenon as a J-curve: productivity first dips before it later surges.

“General-purpose technologies, such as AI, enable and require significant complementary investments, including co-invention of new processes, products, business models and human capital,” the economists Erik Brynjolfsson, Daniel Rock and Chad Syverson argue in a National Bureau of Economic Research paper. These complementary investments are often poorly captured in the official economic statistics and can take a long time to show up in higher productivity growth.

Zooming out even further, it may be wrong to talk about AI as a separate revolution rather than as a continuation of the information technology revolution that began in the 1970s. According to an essay this year by the economic historian Carlota Perez: “A revolutionary technology is not the same thing as a technological revolution.”

In her 2002 book Technological Revolutions and Financial Capital, Perez identified five great technological transformations, each beginning with a wave of creative destruction, followed by a mass diffusion of innovation and a golden age of economic growth. This pattern has periodically repeated itself: starting with the Industrial Revolution in the 1770s; followed by the steam and railway age of the 1830s; the electricity and engineering age of the 1870s; the mass production era of the 1910s; and our own current IT revolution.

All of these technological revolutions have been accompanied by transformations of government and society, resulting in the creation of new institutions, such as trade unions, regulatory agencies and welfare states, to help manage tumultuous change.

Now, in Perez’s view, we are only just beginning to imagine the institutions needed to deal with our current IT revolution and to counter economic inequality, autocratic populism and climate-related disasters. “Changing this broader political-economy context has become the most urgent task of our time,” she argued earlier this year.

Designing appropriate new institutions will be a serious challenge — even with the help of AI.
