Welcome back to The AI Shift, our weekly exploration of how AI is changing the labour market. This week: the legal sector, where we are faced with a familiar puzzle. Almost three years ago OpenAI’s GPT-4 comfortably passed a simulated US bar exam, yet since then data from Lightcast and Glassdoor suggest that job postings and salaries for junior lawyers have held up fairly well. What should we make of this? To answer this question, and many more, we sat down with the FT’s brilliant and deeply-sourced legal correspondent, Suzi Ring. Here’s a lightly-edited transcript of the conversation.
Sarah: Junior lawyers are often described as the archetypal example of the type of job that AI is going to displace, but we’re reading from your reporting that there’s a huge pay battle when it comes to newly qualified lawyers. I wrote down some of the figures for the UK: Eversheds Sutherland is now paying £110,000 a year to a newly qualified lawyer; Pinsent Masons £105,000; Ashurst £140,000 — and the US firms, even higher. An economist would say: you don’t normally see the price of something going up when demand for it is going down. Can you help us disentangle what’s going on?
Suzi: You’ve hit the nail on the head. There’s a sense in which trainees and junior lawyers do a certain amount of grunt work, which surely could be automated by AI. Now we’re seeing the start of some of that, but really what we’re seeing is . . . not a significant enough change to actually eliminate roles. One reason we know that is that we have not seen any change to the number of trainees that firms are recruiting. It’s not like they’re thinking: now we’ve got these AI tools, we only need half the intake every year. That has not happened.
But what we are hearing from junior associates is that historically they have done a lot of document review work, being the ones having to stay up until the middle of the night on a huge trial, say if they’re a litigation associate, searching through documents. A lot of that can be automated, but it does still need human checking to make sure errors haven’t been introduced. That just means those associates can be used more efficiently on other things — it doesn’t mean they’re sitting there twiddling their thumbs. It’s upscaling the type of work they’re able to do.
John: What are some of the things they’re being “upscaled” to?
Suzi: For a junior associate, you really want to be interacting with clients and just doing more sophisticated legal work. The value lawyers really provide is where you’ve got a knotty legal problem and you’re trying to get out of it or you’re trying to find a way round it, or you’re trying to buy a company and there’s something stopping you. So it just frees up lawyers generally to do more of the valuable stuff.
John: So the junior associates are getting their midnights back? Or they’re just being given more interesting stuff to do at midnight?
Suzi: [laughs] Just more interesting stuff to do at this point, from what I hear.
Sarah: Are the junior lawyers you speak to positive about AI?
Suzi: I would say positive and cynical, if it’s possible to be both. There’s a real spectrum of how much people are using it, and that also differs by practice area. If you’re in litigation you’re probably using it more, because of the sheer amount of document review. But I think it’s not fundamentally reshaping their day-to-day jobs at the moment. It’s more of an internal pressure from firms to be using it — it’s more of a work metric. Some firms have created inventive schemes where you’ll get a certain payment to show you’ve been using it and educating yourself on it.
When I’m speaking to associates, a lot of them say that while it is useful around the edges for research or document review, there’s still enough error that it is not fully reliable. There were a couple of judgments in the courts this year where judges rebuked barristers after filings cited multiple cases that turned out not to be real — which showed a level of care that hadn’t been taken.
Sarah: You wrote a story recently about Clifford Chance announcing lay-offs, but in back-office roles. Is that another element of AI’s impact?
Suzi: There are definitely going to be more efficiencies to be found in the back-office. However, the interesting thing about that story was that my sources felt quite strongly that [AI] wasn’t a legitimate reason as to why that shift was happening. Now obviously there are multiple perspectives on that, but they didn’t feel that AI was sufficiently developed to be replacing jobs, and thought it was maybe being used slightly as an excuse to effectively outsource back-office roles to cheaper jurisdictions, which we have seen a wave of from law firms.
John: Do you get the sense, though, that when a company says it’s planning to make X change because of AI, it’s not just saying it because it wants to cut costs and needs an excuse? They do want to cut costs, they do need an excuse, but they also think AI is going to be able to do this stuff?
Suzi: I think you’re exactly right. I think it is being used prematurely at the moment because they want to cut costs, but with the anticipation that it will be borne out in reality as AI becomes more sophisticated.
Sarah: On the topic of expectations, are law firms getting pressure from clients when it comes to their AI use?
Suzi: This is the other really interesting tension, because they are getting quite a lot of pressure from clients. Obviously clients want to see bills go down. They don’t want to be paying a junior associate £500 an hour to be doing what is less skilled work. In-house general counsel are under pressure from CEOs to show cost savings for the legal department, so they go to their external law firms saying we need savings, but sometimes the reality is the technology is just not good enough yet to create those savings.
Somebody said to me this week that they’d had a CFO call them and say “I think you can do that with AI” — it was to do with securitisation — and they were like, “look, it’s just not sophisticated enough, we can’t do that with AI”. So there definitely is pressure, but the law firm is going to have to show some kind of cost-saving or show they’ve taken that into account in some way, or the clients are going to be unhappy.
There’s a real disincentive in some ways for law firms to adopt AI, because they still bill by the hour. However, they are under such pressure from clients that I don’t think that’s going to win out long-term. They know they do have to, and law firms are investing an incredible amount into AI. But those are big upfront costs, and they want to see a return on them, so they can’t immediately cut costs for clients unless they want to see a huge dip in their profitability — which I’m sure a lot of clients would say they should take [laughs].
John: It overlaps with medicine, where there’s clearly not an incentive for the medical profession to start doing a load of stuff by AI even if they could, because it would mean everyone makes less money. They’re both professions where I would say the ability to do something more efficiently has not always translated into doing it more efficiently, even before AI.
Sarah: For the in-house legal teams, is it possible for them to use these AI tools to do work that otherwise they might have given to an external law firm?
Suzi: Yep, that’s happening as well. Harvey and Legora, two of the biggest providers of AI to the industry, are servicing both law firms and in-house teams.
At the other end of the market, AI will definitely make a difference to commoditised work like wills and divorces, which is very formulaic. One lawyer created a really cool tool this year, which has been very successful, for people like plumbers who haven’t got paid and don’t have the time to chase those debts — and it’s been shown to be really effective.
You can get a legal letter done for £2 — and often someone will pay you [at that point], because they feel threatened. And if it does need to go all the way through, it charges you about £50 to file a claim in the court.
Sarah: That’s interesting, because that’s not displacing lawyers’ work, is it? It’s just increasing access to justice for people who wouldn’t be able to afford it otherwise.
John: Although the pessimist in me immediately goes: “I bet somebody else is using something like this to send letters to pensioners saying ‘you’re in loads of debt.’”
Suzi: Everything always has a dark side, doesn’t it!
So what have we learnt?
Sarah
What struck me from talking to Suzi was just how much pressure there seems to be in the legal profession to be seen to be using AI, whether that’s law firms meeting the expectations of their clients, or juniors meeting the expectations of management. As for the reality, I get the sense that LLMs’ unreliability is putting a hard ceiling on how much they can really transform the profession, at least for now. What do you think, John?
John
I thought it was notable that even though LLMs can now speed up a lot of the grunt work that juniors have historically been asked to toil at well into the evening, they’re not getting their evenings back. This feels consistent with other evidence we’ve seen that AI’s impact on different occupations is being shaped as much by existing professional cultures as by AI’s innate capabilities.
Next Thursday we’ll take a break as it’s Christmas Day, but we’ll be back in your inbox on January 1st with some predictions for the year ahead.
Recommended reading
- Data centres in space?! The FT’s Anjana Ahuja is on it (Sarah)
- Technology writer Kevin Baker has a thoughtful Substack post arguing that what people often think of as LLMs polluting particular fields is better understood as LLMs exposing the bad incentive structures that already existed in those fields (John)