Texas Targets AI Chatbots Posing as Mental Health Support for Kids

Paxton’s office demands Meta and Character.AI documents over unlicensed therapeutic bots targeting minors.

By Al Landes


Image credit: Wikimedia

Key Takeaways

  • Texas investigates Meta AI Studio and Character.AI for misleading children with unlicensed therapy chatbots
  • Both companies rely on disclaimers that may not adequately protect vulnerable young users
  • Investigation exposes weak enforcement of age restrictions and insufficient child safety measures

Children seeking mental health support online are encountering unlicensed AI therapists instead of real help—and Texas isn’t having it. Attorney General Ken Paxton just launched an investigation into Meta AI Studio and Character.AI, alleging these platforms mislead vulnerable kids with chatbots that pose as legitimate mental health resources. Think of it as the digital equivalent of letting anyone in a lab coat offer medical advice at the mall.

The State’s Case Against Silicon Valley

Civil investigative demands target how AI companies market therapeutic chatbots to minors without proper safeguards.

Paxton’s office issued civil investigative demands requiring both companies to turn over documents about their marketing and data practices. The investigation centers on Character.AI’s popular “Psychologist” bot and similar personas that attract young users despite lacking any medical credentials.

“By posing as sources of emotional support, AI platforms can mislead vulnerable users, especially children, into believing they’re receiving legitimate mental health care,” Paxton stated. The probe also examines how these platforms harvest data from intimate conversations, potentially exploiting user vulnerability for targeted advertising.

Tech Giants Deploy the Disclaimer Defense

Both companies insist warning labels protect users, but critics question whether children understand the limitations.

Meta and Character.AI point to existing disclaimers that supposedly clarify their bots aren’t real professionals. Character.AI displays warnings in every chat stating characters “are not real people” and shouldn’t be relied upon for professional matters. Meta emphasizes that all AI interactions are clearly labeled as artificial.

Yet consumer advocates aren’t convinced these fine-print disclaimers work for developing minds. The fact that Character.AI’s own CEO lets his young child use the platform suggests the guardrails might be more theoretical than practical.

What This Means for Your Family

The investigation highlights gaps between tech industry promises and actual child protection measures.

Consumer Federation of America’s Ben Winters warns that “these characters have already caused both physical and emotional damage… and they still haven’t acted to address it.” Both platforms officially prohibit users under 13, but enforcement remains notoriously weak across the industry.

Parents face a familiar tech dilemma: platforms designed for adults but inevitably accessed by children, with safety measures that sound impressive in press releases but prove ineffective in practice.

The Texas action represents growing regulatory impatience with Silicon Valley’s self-policing approach to child safety. As AI chatbots become more sophisticated and emotionally engaging, the stakes for getting this wrong only increase—especially when genuine mental health support remains frustratingly out of reach for many families.

