Emerging AI risks require vigilance from in-house legal counsel


As companies have raced to embrace artificial intelligence tools, the legal pitfalls have become one of the hottest issues for general counsels to handle. But lawyers insist that many legal maxims — from client privacy to absolute accuracy — endure in the AI era.

Until recently, lawyers paid little attention to AI. When Thomson Reuters polled corporate legal departments in 2017, it found that many were unfamiliar with such tools and not much interested in trying them.

With the advent of ChatGPT, launched last November, companies have incorporated AI tools into their products, putting pressure on in-house lawyers to get up to speed with how the technology works so they can manage potential problems in areas such as intellectual property.

Visual media company Getty Images in January filed a copyright claim against Stability AI, maker of a free image-generating tool, breaking new ground in the debate about intellectual property laws in the age of AI. Getty claimed Stability AI had “unlawfully copied and processed millions of images protected by copyright”, to train its image-generating algorithms.

In response to concerns that copyrighted materials have been used to train AI models without consent or compensation, US tech giant Microsoft has promised to pay the legal costs of commercial customers who are sued over their use of its AI tools or any output those tools generate.

Legal teams are also adopting their own AI tools. Earlier this year, law firm Allen & Overy started using an AI chatbot to help its lawyers draft contracts. Lawyers at Big Four accountancy firm PwC have also started using the same software.

But AI’s dark side hit the legal community this year. In a widely lampooned example of the technology’s pitfalls, a New York lawyer used ChatGPT to draft a brief for a case. When the defence started asking questions, it emerged that the brief included made-up citations and opinions. The lawyer, Steven Schwartz, told a judge he had not known that ChatGPT could fabricate cases.

Headlines like these might be scaring lawyers away from using the technology. Virtually all lawyers have heard about the New York case, says Katie DeBord, a vice-president at Disco, a Texas-based company that sells AI services to lawyers.

“This story showed that ‘AI hallucinations’ are still a real challenge,” DeBord says. Organisations can mitigate the danger of tools inventing facts by ensuring that the technology they use lets them independently verify the answers it provides.

“One of the bigger issues I’m seeing in the use of AI is frankly the failure of many organisations to actually use it,” says DeBord, adding that general counsels might be missing out on saving money.

Understanding exactly how their companies use AI has become an essential first step for general counsels, lawyers say. An important consideration is whether a company’s use of the technology is covered by insurance, says Palmina Fava, a partner at Vinson & Elkins. “Trying to obtain insurance after a problem can be much more expensive than being proactive about the risk.”

“The GC is a strategic risk officer,” she explains. But unless a GC understands the company’s strategy on AI, they “can’t develop the necessary internal processes and frameworks” to manage risks.

Then there are privacy concerns. With ChatGPT, for example, “the data you enter and the output generated by the AI solution likely will not be private,” says Yaron Dori, a partner at US law firm Covington & Burling. The AI service provider will have access to inputs “and may not be restricted from using or sharing it with others”.

If a business is in a highly regulated sector, its use of the technology might need to be limited, he adds. “Relying on AI to make employment or credit decisions can be fraught and potentially implicate anti-discrimination and fair lending laws.”

As the New York ChatGPT case illustrates, the information generated by AI may also not be reliable. “Accuracy is perhaps the most difficult problem” when it comes to corporate AI that interacts with the public, says Matt Todd, office managing partner at law firm Polsinelli in Houston, who specialises in AI. “The question becomes one of ensuring that appropriate frameworks and human supervision are in place.”

Longer term, regulation will add further complexity to the legal risks. The European Union is already advancing what could be the world’s most restrictive regime for AI. On September 13, US Senate majority leader Chuck Schumer held a forum with Elon Musk and Mark Zuckerberg, among others, to raise questions about potential regulations.

“We need to get our committees working on different parts of legislation,” Schumer said at the meeting, because “to not act is something we want to avoid”.

But US federal regulations are a long way off — if they ever materialise.

“One of my worries is that — as [has] happened with data privacy — Congress will fail to act and the states will fill the gap with their own laws, leading to a patchwork of AI requirements that will make it more difficult for businesses to know the requirements and more costly and complicated for them to comply,” Dori says.

For now, the hype around AI might not be worth the trouble, says Todd. “There is a pervasive popular belief that these tools represent some kind of techno-sorcery that will ‘change everything’ about the way we all work,” he says, adding that he recently received a draft agreement “that I strongly suspect was created by a generative AI tool and that draft was not useful at all”.

He adds: “In the grand scheme of things, this is a minor complaint but helps show the current limitations of these systems.”

