OpenAI has updated its policies for how ChatGPT interacts with underage users.
For those under 18, the AI will not engage in flirtatious exchanges or discuss topics related to suicide. If a minor asks about suicide, the system will notify parents through parental control features.
In more serious cases—such as when a teenager expresses suicidal ideation—ChatGPT will escalate beyond parental alerts and contact law enforcement:
"If an under-18 user is having suicidal ideation, we will attempt to contact the user's parents and, if unable, will contact the authorities in case of imminent harm."
OpenAI also announced the rollout of age-prediction tools and, in some cases, ID verification:
"We have to separate users who are under 18 from those who aren’t (ChatGPT is intended for people 13 and up). We’re building an age-prediction system to estimate age based on how people use ChatGPT.
If there is doubt, we’ll play it safe and default to the under-18 experience. In some cases or countries we may also ask for an ID. We know this is a privacy compromise for adults but believe it is a worthy tradeoff."