
Instagram Will Now Warn Parents If Teens Search Self-Harm Content

Instagram has announced a new safety feature that will notify parents if their teenager repeatedly searches for terms related to suicide or self-harm on the platform.

The feature, which is expected to roll out in the coming weeks, is designed to give parents information that could help them support their child and start conversations about potentially sensitive issues.

Currently, when users attempt to search for suicide- or self-harm-related content, Instagram blocks the results and instead directs them to support resources and helplines.

How will the new alert work?

Under the new system, if a user with a Teen Account repeatedly searches for suicide- or self-harm-related terms within a short period of time, their parent or guardian will receive a notification.

These alerts may be delivered by email, text message, or WhatsApp—depending on the contact information available—as well as through an in-app notification.

When parents tap the notification, they will see a full-screen message explaining that their teen has repeatedly searched for terms linked to suicide or self-harm within a short time frame.

The alert will also provide links to expert resources aimed at helping parents approach difficult conversations with their children.

Search attempts that could trigger the alert include phrases promoting suicide or self-harm, statements suggesting a desire to harm oneself, and direct searches for terms such as “suicide” or “self-harm”.
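
Instagram has not said exactly how "repeatedly" or "a short period of time" will be defined. Purely for illustration, the sketch below shows one plausible way such a trigger could work: a sliding-window counter that fires once a set number of flagged searches land inside a fixed time window. The class name, threshold, window size, and term list are all hypothetical, not details Instagram has published.

```python
from collections import deque
import time

# Hypothetical placeholder list; Instagram's actual term matching
# is not public and is certainly more sophisticated than this.
FLAGGED_TERMS = {"suicide", "self-harm"}


class RepeatedSearchDetector:
    """Illustrative sliding-window trigger: N flagged searches
    within W seconds produce one alert. Not Instagram's implementation."""

    def __init__(self, threshold=3, window_seconds=600):
        self.threshold = threshold    # flagged searches needed to trigger
        self.window = window_seconds  # length of the sliding window
        self.events = deque()         # timestamps of flagged searches

    def record_search(self, query, now=None):
        """Return True if this search should trigger a parent alert."""
        if not any(term in query.lower() for term in FLAGGED_TERMS):
            return False
        now = time.time() if now is None else now
        self.events.append(now)
        # Drop searches that have fallen out of the window.
        while self.events and now - self.events[0] > self.window:
            self.events.popleft()
        return len(self.events) >= self.threshold


# Example: three flagged searches within ten minutes trigger an alert.
detector = RepeatedSearchDetector()
for t, query in [(0, "self-harm"), (60, "suicide"), (120, "self-harm")]:
    if detector.record_search(query, now=t):
        print("notify parent")  # would dispatch email/SMS/WhatsApp/in-app
```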

The notifications will initially roll out to parents using Instagram’s parental supervision tools in the United States, United Kingdom, Australia and Canada, with plans to expand the feature to other regions later this year.

Why is the feature being introduced?

The announcement comes shortly before the release of the Channel 4 documentary Molly Vs The Machines, which examines the death of 14-year-old Molly Russell.

Russell died in 2017 after months of viewing online content related to self-harm and suicide.

According to reports cited by The Standard, she had saved, liked or shared around 16,300 pieces of content on Instagram in the six months before her death, including more than 2,000 posts connected to self-harm, depression and suicide. She had also searched for similar material on Pinterest.

Both platforms now block such content from appearing in search results. Posts that encourage suicide, self-injury or eating disorders are also removed.

The move also comes amid increasing regulation of social media platforms. The UK's Online Safety Act became law in 2023, introducing rules designed to improve online protections for both children and adults.

Under the legislation, social media platforms and search services must take steps to prevent children from accessing harmful or age-inappropriate material and provide clear ways for users to report issues.

Companies that fail to comply could face fines of up to £18 million or 10% of their global revenue, whichever is higher.

Vicki Shotbolt, chief executive of the charity Parent Zone, welcomed the new feature.

“It’s vital that parents have the information they need to support their teens,” she said.

“This is an important step that could help give parents greater peace of mind. If their teen is actively searching for harmful content on Instagram, they will now be aware of it.”

Meta Platforms, which owns Instagram, said it is also working on similar parental notifications related to teenagers’ interactions with artificial intelligence tools on the platform.
