TL;DR:
- OpenAI introduced parental controls in ChatGPT, allowing families to set limits and filters for teen users.
- Parents can restrict features like voice and image generation, but teens’ conversations remain private.
- The update follows increased pressure after safety concerns and lawsuits involving teen chatbot use.
- New distress alerts notify parents of possible risks while still respecting teen autonomy.
OpenAI has unveiled a new set of parental controls for ChatGPT, giving families more oversight of how teenagers use the AI chatbot. The update, announced Monday, is rolling out globally on the web with mobile support coming soon.
The new system allows parents and teens to link accounts, making it possible for adults to set boundaries around when and how their children interact with ChatGPT. Features include content filtering, limits on specific modes such as voice and image generation, and time-based restrictions.
Parents gain new control tools
By linking accounts, parents can tailor ChatGPT’s behavior to align with their family’s needs. The controls include enhanced content filters that reduce exposure to sensitive topics like dieting, hate speech, and viral challenges—filters that are automatically enabled when a teen account is active.
Parents can also decide whether their teenager’s past conversations should be remembered by the chatbot, and whether advanced features such as voice responses are available. If needed, they can restrict access to a simplified version of ChatGPT that is designed to be safer for younger audiences.
Introducing parental controls in ChatGPT.
Now parents and teens can link accounts to automatically get stronger safeguards for teens. Parents also gain tools to adjust features & set limits that work for their family.
Rolling out to all ChatGPT users today on web, mobile soon.
— OpenAI (@OpenAI) September 29, 2025
One crucial detail concerns privacy: parents will not be able to read their teen’s ChatGPT conversations. OpenAI stressed that privacy remains central, though parents may be alerted in rare cases where trained reviewers identify signs of serious safety risks.
A response to growing concerns
The launch comes after increasing scrutiny over how young people engage with AI tools. ChatGPT, which has grown to hundreds of millions of users since its 2022 debut, has drawn both praise for its educational potential and criticism for possible misuse.
Earlier this year, a lawsuit against OpenAI claimed that a California teenager relied heavily on ChatGPT before taking his own life. The case amplified calls from advocacy groups, educators, and policymakers for stronger protections. In response, OpenAI accelerated development of parental controls, saying the company felt a sense of urgency to better safeguard its youngest users.
Lauren Jonas, OpenAI’s head of youth wellbeing, emphasized the company’s balanced approach.
“We want to provide parents with tools to guide their teen’s experience while respecting young people’s privacy and independence,” Jonas said.
Alerts for potential distress
In addition to filtering and usage limits, the parental controls include a new safety mechanism that can trigger alerts if ChatGPT detects behavior that suggests a teen may be in emotional distress. Such cases are reviewed by human moderators before notifying parents. Alerts can be delivered by email, text, or through the ChatGPT app.
The company hopes these alerts will encourage early conversations between parents and teens rather than serve as constant monitoring. Jonas noted that the goal is “to empower families, not to replace parental care with surveillance.”
Beyond parental controls, OpenAI is also working on technology to better identify user age, which would allow it to automatically apply safeguards for minors without relying solely on manual account linking. The company said it will continue refining the parental control system in collaboration with experts, educators, and child advocacy organizations.