AI Firm to Implement 3 Key Parental Controls for ChatGPT

Following a lawsuit from American parents who allege their son was encouraged to commit suicide by the AI chatbot, OpenAI is introducing new parental controls for ChatGPT. The US-based company made the announcement on Tuesday, September 2, stating that the measures would be rolled out within the next month.

Parental Oversight and Safety Features

The new system will allow parents to link their accounts with their teenagers' accounts and set "model behaviour rules" to control how ChatGPT responds to them.

In a blog post, OpenAI also said that parents would be alerted if the platform detected “acute distress” in their child’s conversations and would have control over their account settings.

This latest move comes after a previous blog post in late August, where the company first mentioned its plans for parental controls.

The day before that announcement, the parents of a 16-year-old from California who died by suicide filed a lawsuit against OpenAI.

They accused the chatbot of providing their son with detailed instructions on how to end his life and encouraging the act.

Skepticism and Future Measures

Melodi Dincer, a lawyer representing the parents, reacted to Tuesday’s announcement, calling it “the bare minimum.”

Speaking to the AFP news agency, she expressed skepticism about OpenAI’s ability to implement the changes and argued that such measures should have been in place already.

OpenAI stated in its blog post that it is “continuing to improve how our models recognise and respond to signs of mental and emotional distress.”

The company also indicated that it would be taking further steps within the next 120 days.

These include redirecting some “sensitive conversations” to more advanced “reasoning models” like GPT-5-thinking, which it says are more consistent in applying safety guidelines.

Source: Defi Media
