A decade ago, three researchers at the University of Pennsylvania coined the term “algorithm aversion” to describe how people instantly distrusted a weather forecasting programme as soon as it made a mistake — even when it was patently more accurate than the human forecasters pitted against it.
This has remained at the back of FT Alphaville’s hivemind whenever we’ve written posts on whether ChatGPT can day trade; pass the CFA exam; obtain an economics degree; decipher central banking babble or get a job as a sellside analyst.
In other words, while we keep asking whether artificial intelligence can do all this stuff, we should also ask whether we actually trust it to. That’s pertinent given the squillions that companies are spending on AI infrastructure. Will it be wasted if people fundamentally don’t trust or like the results, even if they’re good?
Which is why this new paper by Gertjan Verdickt and Francesco Stradi is so interesting. Here’s the abstract:
Do investors trust an AI-based analyst forecast? We address this question through four incentivized experiments with 3,600 U.S. participants. Our findings highlight that, although investors update their return beliefs in response to the forecast, they are less responsive when an analyst incorporates AI. This reduced trust stems from a lower perceived credibility in AI-generated forecasts. We reveal other important nuances: women, Democrats, and higher AI literacy investors are more responsive to AI forecasts. In contrast, AI model complexity reduces the probability of return updating. Additional manipulations show that forecast providers do not amplify reactions to their content. Overall, our findings challenge prevailing notions about AI adoption in financial decision-making.
Here’s how it worked: Verdickt and Stradi assigned a bunch of Americans to three different groups to see how much they trusted a purely human-produced Goldman Sachs stock market forecast, a purely “advanced AI model”-produced forecast, and a forecast made by “analysts of Goldman Sachs incorporating an advanced AI model”. Otherwise, the reports were identical.
The researchers then examined how far the forecasts moved participants away from their own prior expectations. And lo, AI-written or AI-assisted reports turned out to be less influential than those authored by humans.
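To make "less influential" concrete: responsiveness here can be read as a weight on the gap between an investor's prior belief and the forecast they're shown. Below is a minimal sketch of that updating rule — our own illustration with made-up numbers, not the paper's exact specification.

```python
# A stylised belief-updating rule (our illustration, not the paper's model):
# the investor moves some fraction ("weight") of the way from their prior
# belief toward the forecast. A weight near 1 means full trust, near 0 means
# the forecast is ignored, and a negative weight means moving away from it.

def updated_belief(prior: float, signal: float, weight: float) -> float:
    """Posterior return belief after seeing a forecast."""
    return prior + weight * (signal - prior)

# Hypothetical numbers: the investor expects 4% returns, the report says 8%.
print(updated_belief(prior=4.0, signal=8.0, weight=0.5))   # 6.0 -- strong updating
print(updated_belief(prior=4.0, signal=8.0, weight=0.2))   # 4.8 -- weaker updating
print(updated_belief(prior=4.0, signal=8.0, weight=-0.1))  # 3.6 -- moving away from the signal
```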
There were some interesting nuances, though, as the abstract nods to.
Specifically: women, people who gave their political affiliation as Democrat, and those with greater AI familiarity were more likely to update their own forecasts when those forecasts were markedly out of whack with what the supposedly machine-written or machine-assisted report said:
. . . On average, women exhibit a greater propensity to positively update their return belief to AI-generated forecasts, particularly when they have larger initial misperceptions. Indeed, while the average investor moves away from the signal from AI sources, female investors seem to update their beliefs toward the signal.
. . . Democrats are more likely to update their return beliefs in line with AI forecasts, particularly when there is a larger gap between their prior beliefs and the forecast. In other words, Democrats are more receptive to AI-generated forecasts.
. . . Finally, higher AI literacy corresponds with a significantly larger return belief update toward the signal when receiving a Man + Machine forecast.
There are probably a lot of people's priors confirmed here.
It’s also notable how people became more distrustful the more complex the method sounded. So an ordinary least-squares regression was more influential than deep learning techniques or a “best linear unbiased estimator” (which, under standard assumptions, is just OLS by a fancier name).
In other words, if you’re going to use AI to help produce your sellside research, don’t trumpet the fact too loudly, and call the model something like Ye Olde AdaBoost or Homespun Learning.