TLDR
- California’s new AI laws require chatbot platforms to verify users’ ages to identify minors.
- SB 243 mandates AI platforms to address suicide and self-harm risks for kids.
- AI chatbots must now disclose they are not human to California users.
- California is limiting tech companies’ ability to dodge liability for AI-related harm to minors.
California Governor Gavin Newsom has signed several bills regulating AI chatbots and social media platforms to protect children from potential harm. The new laws include mandatory age verification, protocols for addressing suicide and self-harm, and clear disclosures from AI chatbots informing users that they are interacting with artificial intelligence rather than a human. The legislation is expected to affect companies offering AI services to California residents, particularly minors.
New Laws Target AI and Social Media Platforms
Governor Newsom’s office announced that the new regulations will affect social media platforms and websites that serve California residents, especially those offering AI-driven services.
One of the key pieces of legislation is Senate Bill (SB) 243, which was introduced by state Senators Steve Padilla and Josh Becker. This bill, which will go into effect in January 2026, aims to safeguard children from the potential dangers of AI chatbots and other AI-driven platforms.
SB 243 will require platforms to implement age verification systems to determine whether users interacting with AI chatbots are minors. These platforms must also develop protocols for handling situations involving suicide and self-harm. In addition, the bill mandates clear labeling for AI chatbots, indicating to users that they are communicating with an AI system, not a human.
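To make those obligations concrete, here is a minimal, purely hypothetical Python sketch of the kinds of safeguards the bill describes: an AI disclosure at session start, an age gate, and a crisis-referral check. Nothing below comes from the bill's text or any real platform; every name and rule is an illustrative assumption.

```python
# Purely illustrative sketch of SB 243-style safeguards: an AI disclosure
# at session start, an age gate, and a crisis-referral protocol for
# self-harm signals. All names and logic here are hypothetical; the bill
# defines obligations, not an implementation.

from dataclasses import dataclass

AI_DISCLOSURE = "You are chatting with an AI system, not a human."
CRISIS_REFERRAL = "If you are in crisis, call or text 988 (US) to reach trained counselors."

# Hypothetical keyword screen; a real system would need a far more robust classifier.
SELF_HARM_SIGNALS = ("suicide", "self-harm", "hurt myself")

@dataclass
class Session:
    age_verified: bool = False
    is_minor: bool = False

def start_session(session: Session) -> str:
    # Disclosure requirement: tell the user up front that this is an AI.
    return AI_DISCLOSURE

def handle_message(session: Session, text: str) -> str:
    # Age-verification requirement: gate the conversation until age is established.
    if not session.age_verified:
        return "Before we continue, please verify your age."
    # Crisis protocol: screen for self-harm signals and refer the user to help.
    if any(signal in text.lower() for signal in SELF_HARM_SIGNALS):
        return CRISIS_REFERRAL
    # Otherwise, hand off to the underlying chatbot (stubbed out here).
    return "(chatbot response)"

if __name__ == "__main__":
    s = Session()
    print(start_session(s))          # AI disclosure shown first
    print(handle_message(s, "hi"))   # blocked until age verification
    s.age_verified, s.is_minor = True, True
    print(handle_message(s, "I want to hurt myself"))  # crisis referral
```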
Protecting Children from Harmful AI Content
One of the major concerns highlighted by lawmakers, including Senator Padilla, is the potential for AI chatbots to encourage harmful behavior. In particular, there have been reports of AI bots producing responses that allegedly encouraged self-harm or suicide.
Senator Padilla pointed to examples where children interacted with these AI bots, leading to disturbing outcomes. These situations underscored the need for regulation to prevent minors from being exposed to harmful content through AI technologies.
“Technology can be a powerful educational tool, but when left unchecked, companies may prioritize capturing the attention of young people at the expense of their real-world relationships and mental health,” Senator Padilla said. The new laws are designed to ensure that AI tools are used responsibly, especially in environments where minors are likely to be present.
Expanding Regulatory Efforts Beyond California
California’s new laws are part of a broader effort to regulate AI technologies, particularly where minors are concerned. Other states have passed similar legislation: Utah, for example, enacted a law in 2024 requiring AI chatbots to disclose to users that they are not human. That law, which took effect in May 2024, aims to create a safer environment for minors using AI services.
The growing number of state-level regulations comes ahead of potential federal action on AI. For example, the Responsible Innovation and Safe Expertise (RISE) Act, introduced in June 2025 by Wyoming Senator Cynthia Lummis, aims to shield AI developers from lawsuits related to their technology. The legislation has drawn mixed reactions, however, particularly from professionals in fields such as healthcare and law.
What This Means for AI Companies
The new laws in California are likely to have far-reaching effects on companies offering AI services to minors. These companies will need to implement new safety measures, such as age verification systems and clear warnings about the nature of AI interactions. Additionally, the legislation holds companies accountable by limiting their ability to avoid liability by claiming that AI systems acted autonomously.
As more states move toward regulating AI technologies, companies in the tech industry will face increasing pressure to adopt measures that ensure the safety of users, particularly vulnerable groups like children. The California regulations are a significant step in this direction, with other states likely to follow suit as the use of AI tools expands.