OpenAI races to block kids from ‘spicy’ AI interactions with a new age prediction tool rolling out globally as of January 20, 2026.
If the age prediction model suggests a user is under 18, ChatGPT will automatically apply protections designed to reduce exposure to “sensitive content,” like depictions of self-harm, according to the company’s announcement.
The model looks at a combination of behavioral and account-level signals, including how long an account has existed, typical times of day when someone is active, usage patterns over time, and a user’s stated age. When those signals flag a likely under-18 user, ChatGPT restricts content such as graphic violence, depictions of self-harm, and sexual roleplay.
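OpenAI has not published implementation details, but a signal-based classifier of this kind can be pictured as a weighted scoring function over account features. The sketch below is a minimal illustration in Python; every signal name, weight, and threshold here is a hypothetical stand-in, not OpenAI’s actual model.

```python
from dataclasses import dataclass

@dataclass
class AccountSignals:
    """Hypothetical account-level and behavioral signals of the kind
    OpenAI describes (names and units are illustrative only)."""
    account_age_days: int         # how long the account has existed
    stated_age: int               # self-reported age at signup
    late_night_ratio: float       # fraction of activity between 22:00 and 06:00
    school_hours_ratio: float     # fraction of activity during weekday daytime

def likely_minor(s: AccountSignals, threshold: float = 0.5) -> bool:
    """Combine signals into a rough under-18 score.

    Weights and threshold are invented for illustration; a production
    system would use a trained model, not hand-tuned rules.
    """
    score = 0.0
    if s.stated_age < 18:
        score += 0.6              # stated age is strong but easily spoofed
    if s.account_age_days < 90:
        score += 0.1              # newer accounts carry less history to judge
    if s.school_hours_ratio < 0.2:
        score += 0.2              # absent during school hours hints at a student
    if s.late_night_ratio > 0.4:
        score += 0.1
    return score >= threshold

if __name__ == "__main__":
    user = AccountSignals(account_age_days=30, stated_age=16,
                          late_night_ratio=0.5, school_hours_ratio=0.1)
    print(likely_minor(user))     # True -> apply teen content protections
```

In practice such a system would be a trained classifier with its threshold tuned to trade false teen-labels against missed minors, exactly the trade-off that tripped up Roblox, as discussed below.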
A Risky Track Record
According to a recent global survey, only 6% of education organizations have implemented AI red-teaming. The Kiteworks 2026 Data Security and Compliance Risk forecast identifies this gap as one of the most troubling in the study: AI systems affecting students, including minors, are being deployed without the adversarial testing needed to catch flaws before attackers or unintended behaviors cause damage.
Roblox’s similar selfie- and ID-based age checks stumbled just weeks into their 2026 rollout: adults, including a 23-year-old, were labeled as teens, while some parents inadvertently verified their kids as adults.
Reports of AI-generated child abuse imagery to the National Center for Missing and Exploited Children exploded by roughly 1,325%, from about 4,700 logged in 2023 to more than 67,000 in 2024.
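The percentage tracks with the raw counts; a quick check, using the figures as reported:

```python
# Sanity-check on the cited growth figures (~4,700 in 2023, ~67,000 in 2024).
reports_2023 = 4_700
reports_2024 = 67_000
pct_increase = (reports_2024 - reports_2023) / reports_2023 * 100
print(f"{pct_increase:.0f}%")  # ~1,326%, consistent with the cited ~1,325% jump
```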
Outlook Ahead
Expect relentless circumvention attempts, and lawsuits if errors persist, along with pressure for hybrid AI-plus-ID checks that approach 99% reliability, as in Australia’s trials. As OpenAI’s age-prediction system rolls out worldwide, the real test won’t be how many minors it blocks but whether AI can ever reliably police age without harming users, or itself.