We bet most CEOs secretly love doing earnings calls. Who doesn’t like the idea of a dozen smart but access-seeking analysts saying “great quarter guys!” every three months?
But they can be stressful as well. You don’t just want to avoid saying anything dumb; ideally you want to avoid saying anything meaningful at all. Which is why this paper by John Bai, Nicole Boyson, Yi Cao, Miao Liu and Chi Wan is so delightful:
A significant portion of information shared in earnings calls is conveyed through verbal communication by corporate managers. However, quantifying the extent of new information provided by managers poses challenges due to the unstructured nature of human language and the difficulty in gauging the market’s existing knowledge. In this study, we introduce a novel measure of information content (Human-AI Differences, HAID) by exploiting the discrepancy between answers to questions at earnings calls provided by corporate executives and those given by several context-preserving Large Language Models (LLM) such as ChatGPT, Google Bard, and an open source LLM. HAID strongly predicts stock liquidity, abnormal returns, number of analysts’ forecast revisions, analyst forecast accuracy following these calls, and propensity of managers to provide management guidance, consistent with HAID capturing new information conveyed by managers. Overall, our results highlight the importance of using LLM as a tool to help investors unveil the veiled — penetrating the information layers and unearthing hidden insights.
OK this is a bit of a waffle. Luckily, Matt Levine has already summed up the findings better than we could. Here’s his take:
- Some earnings calls were pretty close to what ChatGPT would come up with, that is, not a lot of new information in the Q&A.
- Other earnings calls were not: Executives gave answers to analyst questions that the chatbot would not have predicted.
- The non-robotic earnings calls were more informative than the robotic ones: The stock moved more (up or down) after the earnings call, and analysts’ future earnings forecasts were more accurate, when executives said stuff that chatbots would not have predicted.
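Mechanically, the HAID idea boils down to a comparison: feed an LLM the same question and context the executive got, then score how far the real answer sits from the model’s predicted one. Here is a minimal sketch of that scoring step, assuming sentence-transformer embeddings and cosine distance; the authors’ exact construction may well differ:

```python
# Hypothetical sketch of a HAID-style score: how far is the executive's
# actual answer from what an LLM would have said? This is NOT the paper's
# exact method; it assumes sentence-transformers embeddings and treats
# cosine distance as the "new information" measure.
from sentence_transformers import SentenceTransformer
from numpy import dot
from numpy.linalg import norm

model = SentenceTransformer("all-MiniLM-L6-v2")

def haid_score(executive_answer: str, llm_answer: str) -> float:
    """Return a distance score: near 0 means the executive sounded like the bot."""
    v1, v2 = model.encode([executive_answer, llm_answer])
    cosine_sim = dot(v1, v2) / (norm(v1) * norm(v2))
    return 1.0 - cosine_sim  # higher = more surprising, i.e. more new information

# Toy example: the LLM's boilerplate prediction vs what the CEO actually said.
predicted = "We remain focused on disciplined execution and long-term value."
actual = "Candidly, Q4 demand fell off a cliff and we are cutting guidance."
print(f"HAID-ish score: {haid_score(actual, predicted):.3f}")
```

On this toy construction, a bland, bot-like answer scores near zero and a genuinely surprising one scores higher; the paper then correlates its version of the score with liquidity, abnormal returns and analyst forecast revisions.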
But this is bad! CEOs don’t want to be informative. They don’t want accurate forecasts. And they certainly don’t want their company’s stock moving around a lot based on some earnings-call brain fart.
Yes, the theory behind regular earnings calls with analysts or “investor days” is that senior executives can better inform the investment world about their fascinating company and its vibrant prospects — going deeper than what they can glean from the numbers and other public information etc etc. They’re an integral part of the theatre of being a public company.
But in practice you don’t really want to give away anything too revealing either, whether good or bad. Even if you say something true and positive that lifts your stock it just makes your job harder by raising expectations. Better to smash forecasts when the results are in. As anyone of a certain age knows, the secret to happiness is low expectations. And investors certainly don’t want to find out any bad news as an aside on the conference call.
Therefore, no CEO actually wants their stock to move during the call, and (Michael O’Leary excepted) they are coached rigorously to be as bland as possible. Add a few “at the end of the day”s, take away the management bullshit and excise the vague chatter about “economic uncertainty”, and most of them sound like footballers after a boring 0-0.
Worse, these days earnings calls aren’t just listened to by a bunch of analysts, investors and the occasional journalist. In fact, the humans are vastly outnumbered by a horde of trading algorithms that will buy or sell your stock based solely on things like your overuse of the word “but”.
Here is a chart showing the explosion in machine downloads of US filings over 2003-16; by that final year machines accounted for 78 per cent of all downloads. Since then the trend has likely gone parabolic.
If you thought we were exaggerating when we said the word “but” can trigger a stock market wobble, that example actually comes from Luke Ellis, the outgoing CEO of Man Group, one of the world’s biggest quant funds. From a mainFT story a few years ago:
“There’s always been a game of cat and mouse, in CEOs trying to be clever in their choice of words,” Mr Ellis says. “But the machines can pick up a verbal tic that a human might not even realise is a thing.”
Alphaville gathers that many companies are already using language-AI systems to judge how trading algorithms might respond to their prepared remarks (and prepared answers to obvious questions), and adjusting accordingly.
As a result, usage of certain trigger words identified as negative in a popular AI language training data set has fallen sharply, as this paper detailed. Some executives are even tweaking their tone to avoid triggering the algos:
Managers of firms with higher expected machine readership exhibit more positivity and excitement in their vocal tones, justifying the anecdotal evidence that managers increasingly seek professional coaching to improve their vocal performances along the quantifiable metrics.
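The screening itself needn’t be exotic. Here is a deliberately crude sketch of how an IR team (or an algo) might flag trigger words in prepared remarks; the tiny word lists below are our own stand-ins, not whatever dictionary or model the real systems use:

```python
import re
from collections import Counter

# Stand-in word lists for illustration only; real systems use large
# sentiment dictionaries (and increasingly full language models).
NEGATIVE_WORDS = {
    "decline", "loss", "impairment", "weak", "adverse",
    "restructuring", "litigation", "uncertainty", "headwind",
}
HEDGE_WORDS = {"but", "however", "although"}  # the Luke Ellis tells

def screen_remarks(transcript: str) -> dict:
    """Count words an algo might trade on, normalised per 1,000 words."""
    words = re.findall(r"[a-z']+", transcript.lower())
    counts = Counter(words)
    per_k = 1000 / max(len(words), 1)
    return {
        "negative_per_1k": sum(counts[w] for w in NEGATIVE_WORDS) * per_k,
        "hedges_per_1k": sum(counts[w] for w in HEDGE_WORDS) * per_k,
    }

remarks = ("Revenue grew nicely, but we see some uncertainty ahead "
           "and a modest headwind in Europe.")
print(screen_remarks(remarks))
```

The production versions are presumably far more sophisticated, but the cat-and-mouse logic is the same: count what the machines count, then say less of it.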
Given the findings of the Executives vs. Chatbots paper, perhaps it is time to go further?
Levine suggested that ChatGPT should perhaps do the earnings call when results are bad. We’d argue that perhaps ChatGPT — or your LLM of choice — should do all earnings calls. Except O’Leary’s, obviously.