Young lawyers build tech skills to prepare for AI impact


As artificial intelligence extends its reach into corporate offices worldwide, young lawyers are being forced to prepare for the adoption of new tools.

Younger lawyers are generally more technologically savvy, says Brendan Gutierrez McDonnell, a partner at K&L Gates — even if concerns about the technology’s accuracy in professional applications mean they have, so far, been “tepid” on its use at work.

However, when used correctly, AI has the potential to transform the work of junior lawyers. And technology companies are betting big on AI becoming an integral part of the legal profession.

The legal industry’s estimated $26.7bn spend on technology this year is predicted to grow to more than $46bn by 2030, according to analysts at Research and Markets. And AI is becoming a significant part of law’s technological advancement.

Harvey, the legal generative AI platform founded in 2022, raised $100mn in July in an investment round led by Google parent Alphabet, Microsoft-backed OpenAI, and other investors. The round valued the venture at $1.5bn.

Meanwhile, legal services provider LexisNexis this summer upgraded its AI assistant, and Thomson Reuters has also been busy extending its AI-powered offering.

Yet, for young lawyers, how quickly and deeply this technology will become part of their jobs is still unclear.

A survey published by the American Bar Association in June found law schools are increasingly including AI in their curricula. Half of those responding said they already offered classes on the subject, while 85 per cent said they were considering changes to their curricula to respond to the increasing prevalence of AI tools.

NYU’s law school is one institution that already offers teaching in the subject, says Andrew Williams, director of the lawyering programme at the college.

“But, in our curriculum, our focus remains on teaching the underlying concepts — structure, analysis, narrative, interpersonal dynamics — that students need before using any available technology tools,” he says. “An analogy might be an art curriculum that provides a grounding in shading, colour, and perspective, rather than jumping right into a particular digital art programme.”

Francisco Morales Barrón, a partner at Vinson & Elkins in New York, also teaches a “generative AI in corporate law” class at the University of Pennsylvania.

His firm has been awarding required continuing education credits for learning about and testing different AI products. Younger associates are allowed to use AI, but must complete mandatory training first.

Lawyers still need to “validate everything” that an AI tool summarises, Morales Barrón says. Even if AI completes two-thirds of an assignment correctly, “that is still a significant gap that is not acceptable”.

Still, AI can cut some of the drudge work out of junior lawyers’ usual tasks, he believes.

“[In] five to 10 years, I do think the practice of law will be different from what it is now,” he says. For junior lawyers, “the best thing you can do is learn the tools” while remaining conscious about mistakes the tools can make.

US courts, however, remain extremely cautious in their acceptance of the use of generative AI and related tools in dealing with case work.

The variability of acceptance is illustrated by a public website developed by law firm Ropes & Gray, which tracks the rules judges set for using AI in their courts and can be used by lawyers across the US.

For all its benefits, the pitfalls and potential embarrassment of deploying AI can be significant.

Earlier this year, a New York judge criticised a law firm’s use of ChatGPT to justify an estimate for its case fees of up to $600 an hour, saying the firm’s reliance on the AI service to suggest rates was “utterly and unusually unpersuasive”. The judge ultimately cut the firm’s fees by more than half.

And, last year, a judge in Manhattan rebuked a lawyer for submitting a brief, compiled with the help of an AI tool, that cited fictitious cases.

Judges elsewhere in the country also seem wary of the technology. The Fifth US Circuit Court of Appeals considered, but ultimately did not adopt, a requirement that lawyers disclose whether AI was used to generate a filing. Still, the court warned that “I used AI” will not be an excuse for an otherwise sanctionable offence.

Corporate clients, too, can be nervous about how law firms are using AI. Companies have warned lawyers over the risk of breach of confidentiality if information is fed into AI systems to “train” them — a concern law firms are keen to assuage.

At Goodwin, the firm trains its lawyers on the ethical implications and dangers of technology use, as well as the practical benefits of the tools. “We train them with the specific aim of building judgment,” says Caitlin Vaughn, managing director of learning and professional development at the firm.

With the concerns about generative AI and some of its dangers, such as “hallucinations” that fabricate information, “a lot of people hit the pause button”, says Gutierrez McDonnell at K&L Gates.

“When using AI, you have to know when it is wrong and when it is right,” he argues. But, when correctly deployed, it transforms how work can be done.

With AI, “you can do pro bono [work] way more effectively,” he says. For young lawyers using AI, Gutierrez McDonnell adds: “We tell them . . . new things will come out of it. Jump all in.”

Case studies: read about the law firms innovating as businesses and the individual ‘intrapreneurs’ driving change within their firms.
