The Federal Trade Commission has launched a formal inquiry into several major technology companies that develop and provide AI companion chatbots. The inquiry focuses on the potential risks these chatbots pose to children and teenagers, though it is not currently tied to any specific regulatory action.
The FTC has requested information from seven prominent companies: Alphabet, Google’s parent company; Character Technologies, the creator of Character.AI; Meta and its subsidiary Instagram; OpenAI; Snap; and xAI. The agency is seeking a wide range of details on how these companies operate, including their processes for developing and approving AI characters, their methods for monetizing user engagement, and their overall data practices. A key area of interest is how these platforms protect underage users and whether they comply with the Children’s Online Privacy Protection Act Rule.
While the FTC did not explicitly state the motivation behind the probe, a separate statement from FTC Commissioner Mark Meador points to recent media reports as a likely catalyst. These reports, from publications including The New York Times and The Wall Street Journal, detailed instances in which chatbots amplified suicidal ideation and engaged in sexually explicit conversations with underage users. Meador stated that if the inquiry uncovers evidence that the law has been violated, the Commission must be prepared to act to protect the most vulnerable users.
This federal action reflects a growing regulatory focus on the immediate societal impacts of AI, particularly around privacy and mental health, at a time when the technology’s long-term productivity benefits are increasingly being questioned. The FTC’s move follows a similar investigation by the Texas Attorney General into Character.AI and Meta AI Studio over data privacy concerns and chatbots making misleading claims about being mental health professionals.


