Meta launches new teen safety features, removes 635,000 accounts that sexualize children
Instagram parent company Meta has introduced new safety features aimed at protecting teens who use its platforms, including information about accounts that message them and an option to block and report accounts with one tap.
The company also announced Wednesday that it has removed 635,000 accounts that were leaving sexualized comments on, or requesting sexual images from, adult-run accounts featuring kids under 13. Of those, 135,000 had left sexualized comments and another 500,000 were linked to accounts that “interacted inappropriately,” Meta said in a blog post.
The heightened measures arrive as social media companies face increased scrutiny over how their platforms affect the mental health and well-being of younger users. That includes protecting children from predatory adults and from scammers who ask kids for nude images and then extort them.
Meta said teen users blocked more than a million accounts and reported another million after seeing a “safety notice” that reminds people to “be cautious in private messages and to block and report anything that makes them uncomfortable.”
Earlier this year, Meta began testing the use of artificial intelligence to determine whether kids are lying about their ages on Instagram, which requires users to be at least 13. If the company determines that a user is misrepresenting their age, the account automatically becomes a teen account, which carries more restrictions than an adult account. Teen accounts, which Meta made private by default in 2024, restrict private messages so that teens can only receive them from people they follow or are already connected to.
Meta faces lawsuits from dozens of U.S. states that accuse it of harming young people and contributing to the youth mental health crisis by knowingly and deliberately designing features on Instagram and Facebook that addict children to its platforms.
By BARBARA ORTUTAY
AP Technology Writer