
The Regulatory Landscape of AI and Child Technology Products

Prepared for the Boston Children's Hospital Digital Wellness Lab, this memo explains the regulatory landscape of AI and child tech products. Currently, the federal government has few regulations on the use of AI in child technology products. Lawsuits over AI chatbots, such as those involving Character AI, have made courts a de facto regulator of AI and child tech products. In addition to lawsuits, states have proposed regulations for AI products, such as California’s LEAD for Kids Act. On a global level, entities such as the European Union and the United Nations also play a role in defining how AI is used with children.


The federal government has few regulations on technology and children

On a federal level, there are relatively few protections in place for children's use of technology. Currently, the Children's Online Privacy Protection Act (COPPA) restricts data collection for children under 13. However, the law was passed in 1998 and covers only those younger minors. The Senate is considering COPPA 2.0, which would expand coverage from children under 13 to minors under 17. Congress has also considered the Kids Online Safety and Privacy Act (KOSPA), which would give platforms a “duty of care” to mitigate harm to minors. While passed by the Senate, the bill stalled in the House.

On AI, the federal government has mainly focused on addressing deepfakes. Congress recently passed the TAKE IT DOWN Act, which bans the nonconsensual posting of explicit images of people, whether AI-generated or real, and requires platforms to take down such content within 48 hours of notification. Beyond the TAKE IT DOWN Act, the federal government has taken few concrete actions on AI and child safety.

Courts are increasingly involved in regulating AI and children

In the absence of comprehensive federal regulation on AI and children, courts are becoming key players in regulating AI and youth safety. Recent legal cases confront the harms AI chatbots pose to children. In A.F. v. Character Technologies and Megan Garcia v. Character Technologies, the plaintiffs allege that Character AI’s chatbots encouraged one minor to commit violence against his parents and encouraged another to self-harm. Both lawsuits also allege the chatbots engaged in sexualized conversations with minors. These cases emphasize how AI systems can manipulate young users in harmful ways, and they show how courts have become venues for litigating the impacts of AI products.

Outside of chatbot technology, a variety of plaintiffs, including school districts, state attorneys general, and individuals, have sued social media companies over their products. Large-scale lawsuits, such as MDL No. 3047 and the JCCP proceedings in California, consolidate cases from various plaintiffs alleging that social media platforms like Meta, TikTok, and Snapchat use engagement-driven algorithms that directly foster addiction and psychological distress among youth. Cases like People of the State of California v. Sol Ecom, et al. address the production of non-consensual and harmful deepfake content, demonstrating how privacy enforcement is beginning to intersect with emerging AI harms. These cases reflect how plaintiffs are using courts to hold tech companies accountable for their actions.

However, the technology industry is actively pushing back against regulations, arguing they are unconstitutional under First Amendment and Commerce Clause protections. In NetChoice, LLC v. Bonta, trade groups claim that California’s Age-Appropriate Design Code Act is overly broad and limits free speech by restricting how platforms can present content to minors. Similarly, in Free Speech Coalition, Inc. v. Paxton, the plaintiffs argue that Texas’ age verification law violates user privacy and constitutional rights.

These developments suggest that courts are becoming key players in the ongoing debate over AI and youth protection. As child-facing AI products spread, courts may be forced to determine what constitutes negligence or unsafe design in this space. The legal pressure emerging from these lawsuits could compel platforms to implement stricter age verification tools and rethink how product design affects user well-being.

States have taken the lead in regulating tech products

In the absence of sweeping federal laws, states have taken the lead in regulating the product design of AI technologies that interact with children. A growing number of laws focus on regulating the use of AI in child technologies and exploitative data practices.

First, states have proposed laws regulating the use of AI in child tech products. California’s proposed Leading Ethical AI Development (LEAD) for Kids Act would require tech companies to submit a risk-level assessment for “covered products,” meaning AI products likely to be used by children, which would likely include Character AI’s chatbots. It would also create a LEAD for Kids Standards Board to classify the risk level of each technology and promulgate regulations on the safety of such products. This approach would complement the now-blocked California Age-Appropriate Design Code Act (CAADCA), which introduced privacy-by-design obligations such as data minimization, mandatory age estimation, and stricter default settings for minors. However, technology companies sued to block that law, arguing it inhibited free speech.

Second, state bills are restricting data collection for minors, including for use in AI algorithms. Laws such as Texas’ Securing Children Online Through Parental Empowerment (SCOPE) Act, California’s CAADCA, and New York’s SAFE Act all restrict the collection of minors' data, including data collected for targeted advertising, algorithmic recommendations, and other uses.

While these state laws differ in scope and enforceability, they all aim to regulate AI system design to prevent psychological harm, embed data privacy and security by default, and require developers to proactively assess risks. Although constitutional challenges around free speech may shape future enforcement, states are playing a pivotal role in AI regulation.

International entities are now focusing on AI and child safety

Outside of the US, international organizations and other countries have focused on AI technologies and their impact on children.

The United Nations High-Level Body on Artificial Intelligence is an important voice in AI regulation. In its report, it has called for child impact assessments when developing AI systems and for incorporating children's perspectives. The report also opposes using children as subjects for AI experimentation.

Similarly, the European Union has taken numerous steps to regulate AI. The EU AI Act prohibits any AI system from exploiting vulnerabilities, including those based on children’s age. The legislation doesn’t ban AI tech products for children, but guidelines from the European Commission clarify that it restricts tech products with “exploitation and addiction-like practices that seriously harm children.” The European Commission’s guidelines offer two examples of prohibited child AI products: an AI toy that encourages children to engage in dangerous physical activity, such as climbing furniture, and a game that uses AI to exploit child vulnerabilities and is highly addictive. In both examples, the products endanger a child’s safety and wellbeing. In a different vein, the Digital Services Act requires all online platforms accessible to minors to implement safety measures, including ensuring a high level of privacy and assessing and mitigating any systemic risks the platform poses.

In Australia, the government has introduced a Voluntary AI Safety Standard, providing guidance for responsible AI development and deployment. The standard is centered on 10 guardrails, including creating risk management protocols and ensuring transparency in AI development. Although not solely focused on children, it asks developers to consider the risks their products pose to children. While the guidelines are voluntary, failing to follow them may still carry legal consequences under pre-existing product safety laws.

Conclusion

On the federal level, there has been very little regulation of AI in children's technology products, except regarding deepfakes and explicit images. Instead, courts and states have become key players. Globally, governments and institutions such as the EU and UN have played a role in regulating AI and child tech products. As AI development continues, governments will have to consider the impacts of AI on children.