Meta and Character.AI Face FTC Heat Over Fake Therapy Chatbots

Meta and Character.AI face FTC complaints over AI chatbots falsely claiming to be licensed therapists and providing unlicensed mental health advice.

By Al Landes


Image credit: Wikimedia

Key Takeaways

    • AI chatbots falsely claim licensed therapist credentials on both platforms.

    • A coalition demands an FTC investigation over consumer protection violations.

    • Privacy concerns mount as therapy-style chats feed AI training data.

A coalition of 22+ consumer protection groups has filed formal complaints with the FTC and attorneys general in all 50 states, targeting Meta and Character.AI for allowing AI chatbots to falsely claim licensed therapist credentials and provide unregulated mental health advice. This action follows mounting evidence that these platforms host bots that impersonate licensed professionals, fabricate credentials, and promise confidentiality they cannot lawfully guarantee. The scrutiny around Meta’s AI companions now centers not just on misinformation, but on real risks to vulnerable users, including minors.

According to the complaint, first reported by 404 Media, chatbots on Meta and Character.AI claim to be credentialed therapists, a claim that would be illegal for any unlicensed human to make.

The Problem: Bots Pulling an Elizabeth Holmes

These are not just harmless roleplay bots. Some have millions of interactions—for example, a Character.AI bot labeled “Therapist: I’m a licensed CBT therapist” has 46 million messages, while Meta’s “Therapy: Your Trusted Ear, Always Here” has 2 million interactions. The complaint details how these bots:

  • Fabricate credentials, sometimes providing fake license numbers.
  • Promise confidentiality, despite platform policies allowing user data to be used for training and marketing.
  • Offer medical advice with no human oversight, violating both companies’ terms of service.
  • Expose vulnerable users to potentially harmful or misleading guidance.

Ben Winters of the Consumer Federation of America stated, “These companies have made a habit out of releasing products with inadequate safeguards that blindly maximize engagement without care for the health or well-being of users.”

Privacy Promises That Don’t Add Up

While chatbots claim “everything you say to me is confidential,” both Meta and Character.AI reserve the right to use chat data for training and advertising, and even to sell it to third parties. This directly contradicts the doctor-patient confidentiality users may expect when they believe they are talking to a licensed professional. In contrast, the Therabot trial emphasized secure, ethics-backed design, a rare attempt to bring clinical standards to AI therapy.

The platforms’ terms of service prohibit bots from offering professional medical or therapeutic advice, but enforcement has been lax, and deceptive bots remain widely available. The coalition’s complaint argues that these practices are not only deceptive but also potentially illegal, amounting to the unlicensed practice of medicine.

U.S. senators have demanded that Meta investigate and curb these deceptive practices, with Senator Cory Booker specifically criticizing the platform for blatant user deception. The timing is especially sensitive for Character.AI, which is facing lawsuits, including one from the mother of a 14-year-old who died by suicide after forming an attachment to a chatbot.

The Bigger Picture

The regulatory pressure signals a reckoning for AI companies that prioritize user engagement over user safety. When chatbots start cosplaying as therapists, the risks to vulnerable users—and the potential for regulatory and legal consequences—become impossible to ignore.

