
Chatbots Are a Challenge for Children’s Mental Health

The market for "AI companions" has been growing rapidly, allowing users to customize virtual partners and communicate with them in a way that simulates a close relationship.

by Forbes

As artificial intelligence chatbots gain popularity among those seeking companionship online, youth advocacy groups are stepping up legal efforts to protect children from harmful and dangerous relationships with human-like creations.

Chatbot apps like Replika and Character.AI are part of the rapidly growing AI companion market. With the apps, users can customize their virtual partners with personality nuances and communicate with them, simulating a close relationship.

Developers claim that AI companions can help combat loneliness and improve users’ social experiences in a safe environment. Critics, however, have sued the developers, alleging that the chatbots have caused children to harm themselves and others.

Matthew Bergman, founder of the Social Media Victims Law Center (SMVLC), is representing families in two lawsuits against the startup Character.AI. One of his clients, Megan Garcia, said her 14-year-old son died by suicide in part because of his romantic relationship with a chatbot.

In another lawsuit, SMVLC is representing two Texas families who sued Character.AI in December, alleging the chatbot encouraged a 17-year-old autistic boy to kill his parents and exposed an 11-year-old girl to hypersexualized content.

Bergman says he hopes the threat of lawsuits will put financial pressure on companies to develop safer chatbots.

“The costs of these dangerous apps do not fall on the companies. They fall on the consumers who are harmed by them, who have to bury their children,” he added.

Bergman argues that such chatbots are flawed products designed to exploit immature children. Character.AI declined to comment on the litigation, but a spokesperson said in a written statement that the company has implemented safety measures, including “improvements to our detection and intervention systems for human behavior and model responses, as well as additional features that empower teens and their parents.”

Because AI companions have become popular only in recent years, there is little data to inform legislation and little evidence showing that chatbots encourage violence or self-harm.

However, according to the American Psychological Association, studies on post-pandemic youth loneliness suggest that chatbots are poised to attract a large population of vulnerable minors.

In a December letter to the US Federal Trade Commission, the association wrote: “It is no surprise that many Americans, including the youngest and most vulnerable, are seeking social connection, with some turning to AI chatbots to fulfill that need.”

Youth advocacy groups are trying to leverage bipartisan support to pass stricter regulation of chatbots.
