By MIKE SCARCELLA Reuters

As people increasingly turn to artificial intelligence for advice, some U.S. lawyers are telling their clients not to treat AI chatbots like trusted confidants when their freedom or legal liability is on the line.

These warnings became more urgent after a federal judge in New York ruled this year that the former CEO of a bankrupt financial services company could not shield his AI chats from prosecutors pursuing securities fraud charges against him.

In the wake of the ruling, attorneys have been advising that conversations with chatbots like Anthropic’s Claude and OpenAI’s ChatGPT could be demanded by prosecutors in criminal cases or by litigation adversaries in civil cases.

“We are telling our clients: You should proceed with caution here,” said Alexandria Gutiérrez Swette, a lawyer at New York-based law firm Kobre & Kim.

People’s discussions with their lawyers are almost always deemed confidential under U.S. law. But AI chatbots are not lawyers, and attorneys are instructing clients to take steps that could keep their communications with AI tools more private.

In emails to clients and advisories posted on their websites, more than a dozen major U.S. law firms have outlined advice for people and companies to decrease the chances of AI chats winding up in court.

Similar warnings are also appearing in engagement agreements between some firms and their clients. For instance, New York-based firm Sher Tremonte stated in a recent client contract that sharing a lawyer’s advice or communications with a chatbot could erase the legal protection known as attorney-client privilege, which usually shields communications between lawyers and their clients.

The case that helped set off the alarm bells involved Bradley Heppner, the former chair of bankrupt financial services company GWG Holdings and founder of alternative asset firm Beneficient. Heppner was charged by federal prosecutors last November with securities and wire fraud, and pleaded not guilty.

Heppner had used Anthropic’s chatbot Claude to prepare reports about his case to share with his attorneys, who later argued that his AI exchanges should be withheld because they contained details from the lawyers related to his defense.

Prosecutors argued that they had a right to demand material that Heppner created with Claude because his defense lawyers were not directly involved, and because attorney-client privilege does not apply to chatbots.

Voluntarily revealing information from a lawyer to any third party can jeopardize the customary legal protections for those attorney communications. Manhattan-based U.S. District Judge Jed Rakoff ruled in February that Heppner must hand over 31 documents generated by Claude related to the case.

No attorney-client relationship exists “or could exist, between an AI user and a platform such as Claude,” Rakoff wrote.