Anthropic, an AI startup, announced in a blog post on its official site that it is revising its policies to permit minors to use its generative AI systems under specific conditions.
Under the new policy, Anthropic will allow teens and preteens to use third-party applications powered by its AI models, provided those apps' developers build in specific safety features and transparently disclose which Anthropic technologies they use. The policy does not necessarily extend to Anthropic's own apps.
The company outlines several safety measures that developers building AI-driven apps for minors should incorporate, including age verification systems, content moderation and filtering, and educational resources on "safe and responsible" AI use for younger audiences. Anthropic also plans to make available technical measures for tailoring AI experiences to minors, such as a "child-safety system prompt" that developers targeting this demographic will be required to implement.
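The post doesn't spell out implementation details, but a measure like this would presumably ride along with ordinary API calls. As a rough sketch only: assuming Anthropic distributes the child-safety prompt as a block of text, a developer using the official Python SDK might pass it through the system parameter of a Messages API call. The prompt text, constant name, and model choice below are placeholders for illustration, not Anthropic's actual materials.

```python
# Hypothetical sketch: attaching a child-safety system prompt to a
# Messages API call via Anthropic's official Python SDK (pip install anthropic).
import anthropic

# Placeholder text -- in practice, Anthropic would supply the mandated prompt.
CHILD_SAFETY_SYSTEM_PROMPT = (
    "You are assisting a minor. Keep responses age-appropriate, "
    "decline unsafe requests, and encourage consulting a trusted adult."
)

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-haiku-20240307",
    max_tokens=512,
    system=CHILD_SAFETY_SYSTEM_PROMPT,  # the required prompt would go here
    messages=[{"role": "user", "content": "Help me study for my algebra test."}],
)
print(response.content[0].text)
```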
Developers using Anthropic's AI models are also required to comply with applicable child safety and data privacy laws, including the Children's Online Privacy Protection Act (COPPA), which protects the online privacy of children under 13 in the U.S. Anthropic says it will periodically audit these apps for compliance, suspending or terminating the accounts of developers who repeatedly violate the requirements. Developers must also publicly declare their compliance on their websites or in their documentation.
“There are specific scenarios where AI tools can significantly benefit younger users, such as in test preparation or tutoring,” Anthropic says in the post. “With this in mind, our updated policy allows organizations to integrate our API into their products for minors.”
Anthropic’s policy shift comes amid a growing trend of kids and teens turning to generative AI tools for schoolwork and personal matters. Competitors such as Google and OpenAI are also exploring more child-focused applications of generative AI: OpenAI recently formed a team dedicated to child safety and announced a partnership with Common Sense Media to develop kid-friendly AI guidelines, while Google has made its chatbot Bard, since rebranded as Gemini, available to teens in English-speaking regions.
A poll conducted by the Center for Democracy and Technology reveals that 29% of children have used generative AI like OpenAI’s ChatGPT to manage anxiety or mental health issues, 22% for friendship-related problems, and 16% for family conflicts.
Last year, numerous educational institutions banned generative AI apps such as ChatGPT over concerns about plagiarism and misinformation. While some have since lifted those bans, skepticism about generative AI’s potential for good persists. Surveys such as one from the U.K. Safer Internet Centre found that over half of children (53%) have seen peers use generative AI in a negative way, for example by creating believable false information or upsetting images (including pornographic deepfakes).
The call for guidelines on children’s use of generative AI is growing more urgent. The United Nations Educational, Scientific and Cultural Organization (UNESCO) has urged governments to regulate the use of generative AI in education, including imposing age limits on users and putting guardrails in place for data protection and user privacy. “Generative AI can be a tremendous opportunity for human development, but it can also cause harm and prejudice,” said Audrey Azoulay, UNESCO’s director-general. “It cannot be integrated into education without public engagement and the necessary safeguards and regulations from governments.”