Senior judges have warned the judiciary in England and Wales to restrict their use of artificial intelligence in conducting legal research and to avoid divulging information about cases to online chatbots.
Official guidance published on Tuesday for magistrates, tribunal panel members and judges highlighted the risk that AI tools would make factual errors or draw on law from foreign jurisdictions if asked to help with cases.
Sir Geoffrey Vos, the country’s second most senior judge, said AI offered “great opportunities for the justice system, but because it’s so new we need to make sure that judges at all levels understand [it properly]”.
The use of the technology by the judiciary in England and Wales has attracted little attention, partly because judges are not required to disclose the preparatory work they may have undertaken to produce a judgment.
The guidance made clear that judges might find AI useful for some administrative or repetitive tasks. But its use for legal research was “not recommended”, except to remind judges of material with which they were already familiar.
“Information provided by AI tools may be inaccurate, incomplete, misleading or out of date,” the guidance said, noting it was often based heavily on law in the US. “Even if it purports to represent English law, it may not do so.”
AI has started to upend the broader legal profession, with some firms using the technology to help draft contracts.
In one widely noted example of the dangers of courtroom use, a lawyer in New York was sanctioned after he admitted using ChatGPT to create a brief for a case that contained invented citations and opinions.
The guidance issued on Tuesday also warned of privacy risks. Judges were told to assume that information entered into a public AI chatbot “should be seen as being published to all the world”.
Vos, the Master of the Rolls, said there was no suggestion that any judicial office holder had asked a chatbot about sensitive case-specific information, and that the guidance was issued for the avoidance of doubt.
He added that in the long term AI offered “significant opportunities in developing a better, quicker and more cost-effective digital justice system”.
AI would not be used to aid decision-making until the judiciary was “absolutely sure that the people we serve would have confidence in that approach — and we’re miles away from that”.
Lord Justice Birss, deputy head of civil justice, said it might be possible to use AI to help judges to determine provisional assessments of costs — a data-heavy and time-consuming task.
The document also said some litigants unrepresented by lawyers were relying on AI tools to guide them because they lacked professional advice.
The guidance advised judges to scrutinise submissions that may have used a chatbot, and said they should also be aware of dangers posed by “deepfake” technology.