AI startup Anthropic is changing its policies to allow minors to use its generative AI tools — in certain circumstances, at least.
In a post published Friday on the company’s official blog, Anthropic announced it will begin letting teens and preteens use third-party apps — not necessarily its own apps — powered by its generative AI models, so long as the developers of those apps implement specific safety features and disclose to users which Anthropic technologies they’re leveraging.
In a support article, Anthropic lists several safety measures devs creating AI-powered apps for minors should include, like age verification systems, content moderation and filtering, and educational resources on “safe and responsible” AI use for minors. The company also says that it may make available “technical measures” intended to tailor AI product experiences for minors, like a “child-safety system prompt” that developers targeting minors would be required to implement.
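To make the idea concrete, here is a minimal sketch of what age gating combined with a child-safety system prompt might look like on the developer side. The age cutoff, prompt wording, and function name are all hypothetical illustrations, not Anthropic’s actual required implementation:

```python
# Hypothetical sketch of the safety measures described above: an age
# check plus a child-safety system prompt prepended for minor users.
# The cutoff, prompt text, and names here are illustrative only.

ADULT_AGE = 18  # assumed cutoff; real apps must follow COPPA and local law

CHILD_SAFETY_SYSTEM_PROMPT = (
    "You are assisting a minor. Keep responses age-appropriate, "
    "decline unsafe or adult topics, and encourage consulting a "
    "trusted adult for personal or medical questions."
)

def build_system_prompt(user_age: int, base_prompt: str) -> str:
    """Prepend a child-safety system prompt for users under the cutoff."""
    if user_age < ADULT_AGE:
        return CHILD_SAFETY_SYSTEM_PROMPT + "\n\n" + base_prompt
    return base_prompt
```

A developer would then pass the resulting string as the system prompt when calling the model, alongside whatever age verification their app performs.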
Devs using Anthropic’s AI models will also have to comply with “applicable” child safety and data privacy regulations such as the Children’s Online Privacy Protection Act (COPPA), the U.S. federal law that protects the privacy of children under 13, Anthropic says. Anthropic plans to “periodically” audit apps for compliance, suspending or terminating the accounts of those who repeatedly violate the compliance requirement, and to mandate that developers “clearly state” on public-facing websites or documentation that they’re in compliance.
“There are certain use cases where AI tools can offer significant benefits to younger users, such as test preparation or tutoring support,” Anthropic writes in the post. “With this in mind, our updated policy allows organizations to incorporate our API into their products for minors if they agree to implement certain safety features and disclose to their users that their product is leveraging an AI system.”
Anthropic’s change in policy comes as kids and teens are increasingly turning to generative AI tools for help not only with schoolwork but with personal issues, and as rival generative AI vendors — including Google and OpenAI — are exploring use cases aimed at children. This year, OpenAI formed a new team to study child safety and announced a partnership with Common Sense Media to collaborate on kid-friendly AI guidelines. Meanwhile, Google made its chatbot Bard (since rebranded as Gemini) available to teens in English in select countries.
According to a poll from the Center for Democracy and Technology, 29% of kids report having used generative AI like OpenAI’s ChatGPT to deal with anxiety or mental health issues, 22% for issues with friends and 16% for family conflicts.
Last summer, schools and colleges rushed to ban generative AI apps — in particular ChatGPT — over fears of plagiarism and misinformation. Since then, some have reversed their bans. But not all are convinced of generative AI’s potential for good, pointing to surveys like the U.K. Safer Internet Centre’s, which found that over half of kids (53%) report having seen people their age use generative AI in a negative way — for example, creating believable false information or images used to upset someone (including pornographic deepfakes).
Calls for guidelines on kid usage of generative AI are growing.
The UN Educational, Scientific and Cultural Organization (UNESCO) late last year pushed for governments to regulate the use of generative AI in education, including implementing age limits for users and guardrails on data protection and user privacy. “Generative AI can be a tremendous opportunity for human development, but it can also cause harm and prejudice,” Audrey Azoulay, UNESCO’s director-general, said in a press release. “It cannot be integrated into education without public engagement and the necessary safeguards and regulations from governments.”