Instagram said it would notify parents if their teenager repeatedly searches for terms related to suicide or self-harm within a short period, as pressure grows on governments to follow Australia’s ban on social media use by people under 16.
Instagram, owned by Meta Platforms Inc., said on Thursday it would start alerting parents who are signed up to its optional supervision setting if their children try to access suicide or self-harm content. The alerts will begin next week for those signed up in Canada, the United States, Britain and Australia.
“These alerts build on our existing work to help protect teens from potentially harmful content on Instagram,” the platform said in a statement. “We have strict policies against content that promotes or glorifies suicide or self-harm.”
Its existing policy is to block such searches and redirect people to support resources, Instagram said.
Governments are increasingly seeking to protect children from harm online, particularly after worries over the AI chatbot Grok, which has generated non-consensual sexualized images.
Britain said in January it was considering restrictions to protect children online, after Australia’s move in December. Spain, Greece and Slovenia have in recent weeks said they are also looking at limiting access.
Instagram is rolling out new teen accounts with enhanced parental controls and privacy features, but some parents say Meta still needs to do more to make the platform safe for young users.
In Britain, measures designed to stop access to pornography sites for children have had implications for adults’ privacy, and have led to tension with the U.S. over limits on free speech and regulatory reach.
Instagram’s “teen accounts” for under-16s require a parent’s permission to change settings, and parents can add an extra layer of monitoring with the agreement of their teenager. The accounts also block teen users from seeing “sensitive content,” such as material that is sexually suggestive or depicts violence.
