OpenAI launches expert council on mental well-being

by Alan North

OpenAI has formed an advisory council to monitor user well-being and AI safety, the company announced this week. The eight-person group will be tasked with defining standards for healthy AI interactions across age groups.

The announcement came alongside an X post by CEO Sam Altman stating that the company had successfully mitigated the “serious mental health issues” posed by use of its products. Altman went on to explain that ChatGPT would begin allowing more adult content, including erotica, in chats. OpenAI is currently facing its first wrongful death lawsuit, following allegations that ChatGPT played a role in the death by suicide of a young teen.

Council members include academics from Boston Children’s Hospital’s Digital Wellness Lab and Stanford’s Digital Mental Health Clinic, as well as experts in psychology, psychiatry, and human-computer interaction.

“We remain responsible for the decisions we make, but we’ll continue learning from this council, the Global Physician Network, policymakers, and more, as we build advanced AI systems in ways that support people’s well-being,” the company wrote in a blog post.

Last week, YouGov published a survey of 1,500 Americans that found just 11 percent were open to using AI to improve their mental health. Only 8 percent of respondents said they trusted the technology to be used in this space.

Broadly, generative AI companions have raised serious concerns among mental health experts, including the rise of what has been coined “AI psychosis” among chronic users of chatbot companions. AI companies have nonetheless continued to launch mental health products as more and more Americans turn to AI to answer mental health questions and receive support from digital stand-ins, despite a dearth of evidence of these tools’ efficacy.

Federal regulators are also investigating the role of generative AI and chatbot companions in the growing mental health crisis, especially among teens. Several states have banned AI-powered chatbots advertised as therapeutic assistants. In the last month, California Governor Gavin Newsom signed a series of bills that attempt to regulate AI and its societal impacts, including mandating safety reporting for AI companies and protocols that protect teen users from exposure to sexual content. The latter law, SB 243, also requires companies to institute a system for addressing suicidal ideation, suicide, and self-harm.
