Meta Platforms is tightening safety measures on Instagram and Facebook to protect teenagers from unwanted messages. The move follows accusations that Meta, formerly known as Facebook, knew teens were encountering harmful content on its platforms, and comes after recent discussions with regulators prompted the company, which also owns WhatsApp, to restrict certain content for teen users.
Scrutiny of Meta intensified when a former employee testified before the U.S. Senate, alleging that the company knew about harmful incidents on its platforms, such as harassment, but was not doing enough to stop them. In response, Meta is rolling out changes such as blocking teens on Instagram by default from receiving direct messages from people they don't follow and requiring parental approval before teens can change certain app settings, giving both teens and parents more control over who can contact teens.
On Messenger, stricter defaults apply to users under 16 (or under 18 in some regions), limiting incoming messages to Facebook friends and phone contacts. Notably, adults over 19 cannot message teens who don't follow them back. While these measures aim to make the platforms safer, how effectively they will be enforced remains to be seen. Still, the changes signal Meta's intent to address online safety concerns for its teenage users.