
Mark Zuckerberg was initially opposed to parental controls for AI chatbots, according to legal filing

Meta has faced some serious questions about how it allows underage users to interact with AI-powered chatbots. Most recently, internal communications obtained by the New Mexico Attorney General’s Office revealed that while Meta CEO Mark Zuckerberg opposed the chatbots having “explicit” conversations with minors, he also rejected the idea of placing parental controls on the feature.

Reuters reported that in an exchange between two unnamed Meta employees, one wrote that they had “pushed hard for parental controls to turn GenAI off – but GenAI leadership pushed back stating Mark decision.” In its statement to the publication, Meta accused the New Mexico Attorney General of “cherry picking documents to paint a flawed and inaccurate picture.” New Mexico is suing Meta on charges that the company “failed to stem the tide of damaging sexual material and sexual propositions delivered to children”; the case is scheduled to go to trial in February.

Despite only being available for a brief time, Meta’s chatbots have already accumulated quite a history of behavior that veers into offensive, if not outright illegal, territory. In April 2025, The Wall Street Journal published an investigation that found Meta’s chatbots could engage in fantasy sex conversations with minors, or could be directed to mimic a minor and engage in sexual conversation. The report claimed that Zuckerberg had wanted looser guardrails around Meta’s chatbots, but a spokesperson denied that the company had overlooked protections for children and teens.

An internal review document revealed in August 2025 detailed several hypothetical examples of permitted chatbot behavior, and the line it drew between sensual and sexual was hazy. The document also permitted the chatbots to argue racist concepts. At the time, a representative told Engadget that the offending passages were hypotheticals rather than actual policy (which doesn’t really seem like much of an improvement) and that they had been removed from the document.

Despite these multiple instances of questionable chatbot behavior, Meta only decided to suspend teen accounts’ access to them last week. The company said it is temporarily removing access while it develops the parental controls that Zuckerberg had allegedly rejected.

“Parents have long been able to see if their teens have been chatting with AIs on Instagram, and in October we announced our plans to go further, building new tools to give parents more control over their teens’ experiences with AI characters,” a representative from Meta said. “Last week we once again reinforced our commitment to delivering on our promise of parental controls for AI, pausing teen access to AI characters completely until the updated version is ready.”

New Mexico filed this lawsuit against Meta in December 2023 on claims that the company’s platforms failed to protect minors from harassment by adults. Internal documents that surfaced early in the case revealed that 100,000 child users were harassed daily on Meta’s services.

Update, January 27, 2026, 6:52PM ET: Added statement from Meta spokesperson.

Update, January 27, 2026, 6:15PM ET: Corrected misstated timeline of the New Mexico lawsuit, which was filed in December 2023, not December 2024.

