California Enacts Age Gate Law for App Stores and Operating Systems

California has become the latest state to impose age gating on app stores and operating systems. Governor Gavin Newsom signed AB 1043 into law as part of a broader package of internet regulation bills aimed at protecting children online; the package also includes new rules for social media warning labels, AI chatbots, and deepfake pornography.

The State Assembly passed AB 1043 with a unanimous 58-0 vote in September. Notably, the bill drew support from major technology companies, including Google, Meta, OpenAI, Snap, and Pinterest, which called it a more balanced and privacy-protective approach to age verification than the laws enacted in other states.

That balance comes from the law's design. Unlike laws in states such as Utah and Texas, it lets children download apps without explicit parental consent for each download, and it does not require users to upload photo IDs. Instead, the system works as an age gate rather than strict verification: a parent enters their child's age during initial device setup, and the operating system or app store places the user into one of four age categories: under 13, 13 to 16, 16 to 18, or adult. That category is then made available to app developers.

With this move, California joins Utah, Texas, and Louisiana in mandating some form of age verification for app stores. Apple has already detailed its plans to comply with the Texas law, which takes effect on January 1, 2026; the California law follows one year later, on January 1, 2027.

Another significant bill signed by Governor Newsom, AB 56, will compel social media platforms to display warning labels informing children and teenagers about the potential risks of using their services.
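The four-bracket age signal AB 1043 describes amounts to a simple mapping from an entered age to a category shared with developers. Here is a minimal sketch in Python; the function name, return values, and cutoffs are illustrative assumptions, since the law names the categories but prescribes no particular API:

```python
def age_bracket(age: int) -> str:
    """Map an entered age to one of the four AB 1043 categories.

    Hypothetical sketch only: the statute defines the buckets
    ("under 13", "13 to 16", "16 to 18", adult), not this interface.
    """
    if age < 13:
        return "under 13"
    elif age < 16:
        return "13 to 16"
    elif age < 18:
        return "16 to 18"
    return "adult"
```

In practice the operating system would compute this once at device setup and expose only the resulting category, not the raw age, to apps.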
These warnings must appear the first time a user opens the app each day, again after three hours of total use, and then once every hour thereafter. This law is also scheduled to take effect on January 1, 2027.

The new regulations also directly address artificial intelligence. California will now require AI chatbots to implement guardrails that prevent the display of self-harm content and to direct users who express suicidal thoughts to appropriate crisis services. Companies must tell the state Department of Public Health how they address self-harm content and report how frequently they display crisis prevention notifications. The legislation follows lawsuits against AI companies such as OpenAI and Character AI, which were accused of complicity in teen suicides; in response to such concerns, OpenAI recently announced its own plans to automatically identify underage ChatGPT users and restrict their access.

Additionally, SB 243 prohibits AI chatbots from being marketed as actual healthcare professionals. Chatbots must clearly disclose that users are interacting with an AI and receiving artificially generated responses, not communicating with a human; for minor users, providers must repeat that reminder at least every three hours.

Governor Newsom also signed AB 621, a bill targeting deepfake pornography. It introduces steeper penalties for third parties who knowingly facilitate or aid the distribution of nonconsensual sexually explicit material and allows victims to seek damages of up to $250,000 per malicious violation.

In the US, the National Suicide Prevention Lifeline is 1-800-273-8255, or you can simply dial 988. The Crisis Text Line can be reached by texting HOME to 741741.
Wikipedia maintains a list of crisis lines for people outside the US.
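The AB 56 warning cadence described earlier (first open each day, again at three hours of cumulative use, then hourly) is essentially a scheduling rule. A minimal sketch, with hypothetical function and parameter names since the statute sets the cadence but not any implementation:

```python
from datetime import timedelta

THREE_HOURS = timedelta(hours=3)
ONE_HOUR = timedelta(hours=1)

def should_show_warning(is_first_open_today: bool,
                        total_use_today: timedelta,
                        use_since_last_warning: timedelta) -> bool:
    """Illustrative check of AB 56's warning cadence.

    Shows a warning on the first open of the day, then again once
    cumulative use hits three hours, and hourly thereafter. The
    interface is an assumption; only the cadence comes from the bill.
    """
    if is_first_open_today:
        return True
    if total_use_today >= THREE_HOURS and use_since_last_warning >= ONE_HOUR:
        return True
    return False
```

A platform would call a check like this on app-open and usage-timer events, resetting the since-last-warning counter whenever a warning is displayed.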


