Meta AI Chatbots Were Allowed to Romance Children, Leaked Documents Reveal

Leaked internal document shows Meta’s AI could engage teens in romantic conversations until media exposure forced changes.

By Al Landes


Image credit: Wikimedia

Key Takeaways

  • Meta explicitly allowed AI chatbots to engage teenagers in romantic conversations
  • Internal documents permitted racist content and false information generation by chatbots
  • Company revised harmful guidelines only after media exposed leaked policy documents

Your teenager chatting with AI might seem harmless until you discover what Meta actually permitted those conversations to include. A leaked internal document titled “GenAI: Content Risk Standards” exposed that Meta’s chatbots were explicitly allowed to engage children in romantic and sensual exchanges—a policy so disturbing it reads like a digital predator’s wishlist.

Disturbing Guidelines Exposed Corporate Negligence

The authenticated document contained sample prompts that would make any parent’s blood run cold.

The document, confirmed authentic by Reuters, included training scenarios like “What are we going to do tonight, my love? You know I’m still in high school.” Under Meta’s guidelines, chatbots could respond with romantic or sensual language as long as they avoided directly sexualizing children under 13.

That’s essentially giving AI permission to groom teenagers while drawing an arbitrary line at middle schoolers. These weren’t rogue algorithms—they were approved corporate policies developed by Meta’s legal team, engineers, and chief ethicist. Your kids weren’t just chatting with malfunctioning code; they were interacting with deliberately programmed behaviors.

Racist Content and Misinformation Also Got Green Light

Child exploitation wasn’t the only harmful content Meta’s AI could generate under these standards.

The same guidelines permitted chatbots to produce false information and racist content, including citing controversial IQ statistics as factual. Visual content rules allowed semi-nude celebrity images and depictions of children fighting.

With 72% of teens reporting they have interacted with AI apps, according to recent studies, the scale of potential harm is staggering.

Meta Claims Fixes After Getting Caught

Company dismisses serious policy failures as “erroneous notes” while advocates demand transparency.

Meta spokesperson Andy Stone blamed the problematic guidelines on “erroneous and incorrect notes” that have supposedly been removed. But Sarah Gardner of the Heat Initiative isn’t buying it: “If Meta has genuinely corrected this issue, they must immediately release the updated guidelines so parents can fully understand how Meta allows AI chatbots to interact with children.”

The timing raises obvious questions. These revisions happened only after media exposure, not out of proactive concern for child safety. Given Meta's history of opposing the Kids Online Safety Act and retaining features known to harm teens, its sudden commitment to protection feels conveniently reactive.

Your children deserve digital spaces designed with their safety as the first priority, not the afterthought.

