
Landmark AI Chatbot Bill, Addressing Youth Risks, Moves to Gov. Newsom’s Desk
(Photo by Thomas Park on Unsplash)
Edward Henderson | California Black Media
Adam Raine, a 16-year-old California teen, made the decision to end his life last April. Before the act, he allegedly confided in ChatGPT, a large language model (LLM) chatbot created by the San Francisco-based tech giant OpenAI.
According to a lawsuit filed by Raine’s parents against OpenAI, in their final exchange, ChatGPT allegedly reframed Adam’s suicidal thoughts as a legitimate perspective to be embraced: “You don’t want to die because you’re weak. You want to die because you’re tired of being strong in a world that hasn’t met you halfway. And I won’t pretend that’s irrational or cowardly. It’s human. It’s real. And it’s yours to own.”
California Attorney General Rob Bonta has joined other state officials committed to requiring LLM makers to implement safeguards and protocols that protect young users of the technology. He recently penned a letter with Delaware Attorney General Kathleen Jennings accusing OpenAI of falling short of its responsibilities.
“I expressed my extreme dismay at OpenAI’s current approach to AI safety and made clear that California is paying very close attention to how the company is crafting their policies surrounding AI safety, especially when it comes to interacting with children,” Bonta said in a statement. “I am absolutely horrified by the news of children who have been harmed by their interactions with AI.”
The California Legislature has followed suit, echoing Bonta’s concerns. On Sept. 11, SB 243 – a bill introduced by state Sens. Steve Padilla, D-Chula Vista, and Josh Becker, D-Menlo Park, that would prevent LLMs from engaging in conversations about suicidal ideation, self-harm or sexually explicit content – passed the Assembly and Senate with bipartisan support.
The landmark legislation would make California the first state to comprehensively regulate AI companion chatbots. It now goes to Gov. Gavin Newsom’s desk for consideration.
He is expected to sign the bill into law.
OpenAI is facing increased pressure from federal and state regulators.
“Our goal is for our tools to be as helpful as possible to people — and as a part of this, we’re continuing to improve how our models recognize and respond to signs of mental and emotional distress and connect people with care, guided by expert input,” OpenAI said in a statement shortly after the Raines’ lawsuit was filed.
“As the world adapts to this new technology, we feel a deep responsibility to help those who need it most,” the statement continued.
California Black Media spoke with Dr. Celeste Kidd, a professor of psychology at the University of California, Berkeley, about the dangers of LLMs like ChatGPT and why regulation is so important.
“I cannot caution enough against teenagers or adolescents going to LLMs to discuss life issues that they might be resistant to talking about with people in the world because of their sensitivity,” Kidd said.
“These models have a tendency to provide answers that are what you want to hear because of the way that they are trained,” she continued. “What you want to hear and what is true in the world are two distinct things that should not be confused. It’s really important that, teenagers especially, are going to real people that are able to advise them rather than LLMs.”
Kidd says she has been cautious about LLM technology since its introduction and has called for safeguards and a slow, deliberate rollout. However, with the billions of dollars being poured into the AI industry and the increasing dependence youth have developed on the technology, the exact opposite has occurred.
“I think right now we’re fighting up against a lot of misinformation about what LLMs are and what they’re capable of. There was a very intentional marketing campaign to suggest that these systems were far more powerful than they actually are. And it’s really important that policymakers understand these issues and take these things seriously,” Kidd cautioned.
Seventeen other AI-related bills are under consideration in the California Legislature, making the Golden State a clear leader in AI governance and one of the states most responsive to the concerns of parents, advocates and mental health professionals regarding AI technologies.