OpenAI has announced a significant update aimed at improving safety for teenage users of its AI chatbot, ChatGPT. In a blog post published Tuesday, CEO Sam Altman unveiled a new **age-prediction system** along with enhanced **parental control features** designed to better protect minors amid increasing scrutiny over the impact of AI on young people.
The newly developed age-prediction technology estimates users’ ages based on their interactions with ChatGPT. If the system is uncertain about a user's age, it defaults their experience to one designed specifically for users **under 18 years old**, enforcing stricter content safeguards and moderation. In some regions, OpenAI may also request an ID to verify age — a measure that balances privacy concerns with the need for more robust protection for minors.
The dedicated under-18 ChatGPT experience introduces limitations that block graphic and sexual content and restrict engagement with topics that could be harmful or triggering, such as discussions involving self-harm or suicidal ideation. If the AI detects serious distress or suicidal thoughts in teenage users, it is programmed to take proactive safety steps, including notifying parents or, if necessary, contacting law enforcement to help prevent imminent harm. This represents a deliberate shift in OpenAI's approach, prioritizing **safety over privacy and freedom** for minors, as Altman emphasized: "This is a new and powerful technology, and we believe minors need significant protection."
Complementing these protections, OpenAI is rolling out a suite of parental controls designed to give caregivers greater oversight and involvement. Parents will be able to link their ChatGPT accounts with their child’s, allowing them to set "blackout hours" to control when the chatbot can be accessed, manage feature availability, and receive notifications if their teen exhibits signs of emotional distress. These tools aim to foster a safer online environment without completely sacrificing user autonomy.
The move comes amid broader regulatory attention, including an ongoing Federal Trade Commission inquiry into how AI chatbots like ChatGPT affect children and teenagers. As AI’s role in everyday communication grows rapidly, stakeholders and experts have repeatedly raised concerns about unmoderated access by minors to powerful language models that might expose them to inappropriate content or emotional risks.
OpenAI’s efforts reflect a wider industry trend toward implementing **age-specific AI experiences** and developing safeguards that reconcile the tension between user safety, freedom of expression, and privacy protections. The company acknowledges these are complex choices and welcomes ongoing dialogue with the public and experts to refine its approach.
In addition to teen safety features, OpenAI continues to enhance ChatGPT’s ability to detect emotional distress and guide sensitive conversations appropriately. Its latest updates include routing critical moments toward specialized reasoning models and enabling rapid connection to emergency services or trusted contacts for users in crisis.
The parental control system and age-prediction functionalities are expected to be available by the end of the month, marking a major step forward in AI safety governance as technologies become more deeply entwined with vulnerable populations' daily lives.
By proactively addressing these challenges, OpenAI aims to set a precedent for responsible AI usage, ensuring that while AI’s benefits remain accessible, the unique needs and protections for teenagers are respected and enforced through thoughtful design and transparent policy.