Instagram will begin notifying parents if their teenagers repeatedly search for suicide or self-harm related content, marking the first time owner Meta has proactively flagged search behaviour rather than simply blocking it.
From next week, parents of teenagers enrolled in Instagram’s “Teen Accounts” supervision programme in the UK, US, Australia and Canada will receive alerts if a young user repeatedly searches for suicide or self-harm related terms within a short period of time. The feature will be rolled out globally at a later stage.
Previously, Instagram restricted access to certain harmful material and redirected users to support resources. The new measure goes further by directly alerting parents via email, text message, WhatsApp or within the Instagram app itself, depending on available contact details.
Meta said the alerts are designed to flag sudden changes in search patterns that may indicate distress. Notifications will be accompanied by guidance and expert-backed resources to help parents navigate what are likely to be sensitive conversations.
The move has been met with sharp criticism from the Molly Rose Foundation, established by the family of Molly Russell, who died in 2017 aged 14 after viewing self-harm and suicide content online.