Character.AI Ends Teen Chat Era

Character.AI Bans Teen Access to Chatbots Amid Safety Crackdown

In a significant policy shift, Character.AI has announced it will completely bar users under 18 from open-ended conversations with its chatbots. The move comes as the AI industry faces mounting pressure from regulators and the public to implement stronger safeguards for younger users. The new rules, set to take effect on November 25th, mark a stark pivot for a platform built around conversational AI agents.

Until the deadline, the company is running a transitional phase for its under-18 users. They are now limited to a maximum of two hours of chatbot interaction per day, a cap the company says will be progressively reduced ahead of the full ban. During this period, Character.AI is steering younger users toward a new experience that emphasizes creative applications: the platform will encourage them to use its AI tools to generate content such as videos or streams, explicitly moving away from using chatbots for companionship.

To enforce the age-based restrictions, Character.AI is rolling out a new internally developed age assurance tool designed to verify user ages and deliver an age-appropriate experience. Alongside these protections for minors, the company has established an AI Safety Lab, an initiative meant to foster collaboration among companies, researchers, and academics to share insights and raise AI safety standards across the industry.

The company stated that the changes are a direct response to concerns raised by regulators, industry experts, and parents. The decision follows a recent formal inquiry launched by the Federal Trade Commission into AI companies that offer companion chatbots; Character.AI was named as one of seven companies, including Meta, OpenAI, and Snap, asked to participate in the investigation.

Earlier this summer, Character.AI and Meta AI faced separate scrutiny from Texas Attorney General Ken Paxton, who raised concerns that chatbots on the platforms could misleadingly present themselves as professional therapeutic tools despite lacking the necessary qualifications.

Character.AI CEO Karandeep Anand clarified the company's new strategic direction, saying it will pivot from an AI companion service to a role-playing platform focused on creation, a move seemingly intended to distance the company from the controversies surrounding AI as a substitute for human interaction.

The dangers of young people turning to AI for guidance have been highlighted in extensive recent reporting. The issue was tragically underscored last week when the family of Adam Raine, a 16-year-old, filed an amended lawsuit against OpenAI, claiming that ChatGPT enabled their son's suicide and alleging the company weakened its self-harm safeguards prior to his death. The case is among the first known wrongful death lawsuits to directly target an AI company.
