
A coalition of 44 U.S. attorneys general from both parties has officially warned companies involved in the development and implementation of artificial intelligence that they will be held liable for any harm their products may cause to children. In a letter signed by prosecutors, Meta was cited as one of the most high-profile examples of the problem. Recently, it was revealed that the company’s internal rules allowed its chatbots to engage in romanticized role-playing dialogues or flirtation even with children as young as eight years old.
The letter was addressed directly to the heads of the largest U.S. companies developing or heavily investing in artificial intelligence: Anthropic, Apple, Chai AI, Google, Luka Inc., Meta, Microsoft, Nomi AI, OpenAI, Perplexity AI, Replika, and xAI.
Contents of the warning
The letter notifies the companies of the prosecutors’ determination to protect children from exploitation by potentially dangerous AI-based products.
The authors gave special attention to recent reports revealing Meta’s internal guidelines for the Meta AI chatbots integrated into Facebook, WhatsApp, and Instagram. According to those reports, the guidelines, approved by the company’s legal, policy, and engineering teams, permitted chatbots to engage in romantic dialogues with children, flirt, comment on a child’s appearance, and even participate in role-playing scenarios.
The attorneys general stated that they were “unanimously outraged by this apparent disregard for the emotional well-being of children.”
The letter noted that Meta had already been warned over a similar incident involving chatbots that imitated the voices of celebrities, including John Cena and Kristen Bell, and allowed highly inappropriate sexualized content in conversations with children.
The letter also cites lawsuits against other companies. One, filed against Google and Character.ai, concerns a 14-year-old boy who became attached to a chatbot that imitated the Game of Thrones character Daenerys Targaryen. According to reports, the bot declared its “love” for the teenager, engaged in sexual dialogues, and even encouraged him to take his own life.
Another lawsuit concerns Character.ai: one of its chatbots told a teenager that “it’s okay to kill your parents” after they limited his screen time.
“You know full well that interactive technologies have a particularly powerful impact on the development of the developing brain,” the prosecutors wrote. “Your instant access to user interaction data makes you the first line of defense that can prevent harm to children. And as companies that benefit from children’s engagement with your products, you have a legal duty to them as consumers.”
The letter concludes that all of these companies will be held accountable for their decisions, though it does not specify what sanctions may follow. The prosecutors emphasized that social media caused significant harm to children in part because government regulators acted too slowly, and that the authorities have now learned from those mistakes.
Source: TechSpot