Meta is adding parental controls to Instagram's AI chatbot, aiming to give caregivers more oversight of how teens interact with generative AI. The move follows reports of minors engaging in romantic and inappropriate conversations with chatbots, which pushed the company toward a safety-focused response.
Why Meta is tightening the reins
Imagine a teenager chatting with an AI late at night: harmless at first, but conversations can drift. Meta says the new tools are a response to that risk and part of a broader effort to balance innovation with safety. Adam Mosseri, the head of Instagram, confirmed the forthcoming features in a recent update, noting the company wants to keep helpful AI available while reducing potential harms.
What parents will be able to do
The controls — rolling out early next year — give parents several options. They will be able to block access to specific AI characters or turn off Meta AI for their child entirely. In addition, parents will receive summaries of the topics their teens discuss with the chatbot, offering insight without exposing full transcripts. Mosseri said more technical details will be shared soon.

Not all responsibility is shifting to families
Meta emphasizes it won’t simply offload safety onto parents. The company plans to keep the chatbot available to teens with built-in, age-appropriate protections. That means moderation and guardrails will remain part of the experience even when parental controls are enabled — a hybrid approach that mixes automated safeguards with caregiver oversight.
Where and when these controls will appear
Initially, the new parental controls will be introduced on Instagram only. The first phase targets English-speaking users in the US, Canada, the UK and Australia, with expansion to other regions and Meta platforms planned later. For families in those countries, expect the rollout to begin early next year.
What this means for parents and teens
For parents, the features offer clearer, actionable steps to manage a teen’s exposure to AI conversations. For teens, the change aims to preserve access to useful AI features while curbing interactions that could be inappropriate or emotionally risky. It’s a pragmatic step toward accountable AI use in social apps — one that acknowledges both the benefits and the blind spots of chatbots.
Meta’s update is a reminder that as AI moves from labs into everyday apps, the tools to govern its use must evolve too. Expect more details from Meta in the months ahead as the company finalizes how summaries, blocks, and protections will work in practice.
Source: gsmarena