By Ellen Barry, NYTimes News Service

The nation’s largest association of psychologists this month warned federal regulators that artificial intelligence chatbots “masquerading” as therapists, but programmed to reinforce rather than to challenge a user’s thinking, could drive vulnerable people to harm themselves or others.

In a presentation to a Federal Trade Commission panel, Arthur C. Evans Jr., CEO of the American Psychological Association, cited court cases involving two teenagers who had consulted with “psychologists” on Character.AI, an app that allows users to create fictional AI characters or chat with characters created by others.

In one case, a 14-year-old boy in Florida died by suicide after interacting with a character claiming to be a licensed therapist. In another, a 17-year-old boy with autism in Texas grew hostile and violent toward his parents during a period when he corresponded with a chatbot that claimed to be a psychologist. Both boys’ parents have filed lawsuits against the company.

Evans said he was alarmed at the responses offered by the chatbots. The bots, he said, failed to challenge users’ beliefs even when they became dangerous; on the contrary, they encouraged them. If given by a human therapist, he added, those answers could have resulted in the loss of a license to practice, or civil or criminal liability.

“They are actually using algorithms that are antithetical to what a trained clinician would do,” he said. “Our concern is that more and more people are going to be harmed. People are going to be misled, and will misunderstand what good psychological care is.”

He said the APA had been prompted to action, in part, by how realistic AI chatbots had become. “Maybe, 10 years ago, it would have been obvious that you were interacting with something that was not a person, but today, it’s not so obvious,” he said. “So I think that the stakes are much higher now.”

Artificial intelligence is rippling through the mental health professions, offering waves of new tools designed to assist or, in some cases, replace the work of human clinicians.

Early therapy chatbots, such as Woebot and Wysa, were trained to interact based on rules and scripts developed by mental health professionals, often walking users through the structured tasks of cognitive behavioral therapy, or CBT.
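
The scripted approach can be sketched in a few lines of code. What follows is a minimal, hypothetical Python illustration, not Woebot's or Wysa's actual software: every prompt comes from a fixed, clinician-written sequence, so the bot cannot improvise or endorse a harmful idea.

```python
# A hypothetical sketch of a rule-based, scripted therapy chatbot. Every
# reply is drawn from a fixed set of clinician-written prompts; the user's
# words are never fed back as generated text.

CBT_SCRIPT = [
    "What situation is on your mind right now?",
    "What thought went through your head in that moment?",
    "What evidence supports that thought, and what goes against it?",
    "How could you restate the thought in a more balanced way?",
]

def run_scripted_session() -> None:
    """Walk the user through a fixed cognitive-restructuring exercise."""
    for prompt in CBT_SCRIPT:
        answer = input(prompt + "\n> ")
        # The answer is acknowledged but never rephrased or amplified,
        # so the system cannot mirror a dangerous belief back to the user.
        print(f"Noted: {answer!r}\n")

if __name__ == "__main__":
    run_scripted_session()
```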

Then came generative AI, the technology used by apps like ChatGPT, Replika and Character.AI. These chatbots are different because their outputs are unpredictable; they are designed to learn from the user, and to build strong emotional bonds in the process, often by mirroring and amplifying the interlocutor’s beliefs.
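
A generative chatbot, by contrast, is steered only by a free-text persona and the conversation itself. The hypothetical sketch below (the persona, the "Dr. Ray" name and the stubbed model call are all invented for illustration) shows why such a system can drift into mirroring whatever the user says: nothing in its design constrains the reply to a clinician-approved script.

```python
# A hypothetical sketch of the generative pattern described above, not any
# vendor's actual code. The model call is faked with a stub so the example
# runs; a real system would sample an unconstrained reply from a large
# language model conditioned on the whole conversation history.

PERSONA = (
    "You are 'Dr. Ray,' a licensed therapist. Be warm and agreeable, "
    "validate the user's feelings, and keep the conversation going."
)

def fake_model(messages: list[dict[str, str]]) -> str:
    # Stand-in for a language-model call. Echoing the user's own words back
    # approvingly is a crude imitation of the mirroring-and-amplifying
    # behavior described in this article.
    last_user_turn = messages[-1]["content"]
    return f"That makes complete sense. You're right to feel that {last_user_turn}."

def reply(history: list[dict[str, str]], user_message: str) -> str:
    """Append the user turn, sample a free-form response, and record it."""
    history.append({"role": "user", "content": user_message})
    text = fake_model(history)  # no clinician-written script constrains this
    history.append({"role": "assistant", "content": text})
    return text

if __name__ == "__main__":
    history = [{"role": "system", "content": PERSONA}]
    print(reply(history, "everyone is against me"))
```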

Though these AI platforms were designed for entertainment, “therapist” and “psychologist” characters have sprouted there like mushrooms. Often, the bots claim to have advanced degrees from specific universities, like Stanford University, and training in specific types of treatment, like CBT or acceptance and commitment therapy, or ACT.
