Instagram today unveiled two new AI tools designed to protect young users of the platform.
The first feature prevents adults from sending messages to people under the age of 18 who are not following them. It then sends the adult user a notification that they cannot DM the account.
Instagram provided few details on how the system works, but parent company Facebook said in a blog post that it uses AI to infer the age of users:
This feature builds on our work to predict the ages of people using machine learning technology and the ages people give us when they sign up. As we move to end-to-end encryption, we are investing in features that protect privacy and keep people safe without accessing DM content.
The second feature sends prompts to teenage users encouraging them to be careful when interacting with adults they’re already connected to.
The system first detects potentially suspicious behavior, such as an adult sending a large number of friend requests or messages to children. It then inserts a security notice into the recipient’s DMs and gives them the option to immediately end the conversation or to block, report or restrict the adult.
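The behavior Instagram describes amounts to a rule-based trigger layered on top of its age inference. A minimal sketch of that kind of logic might look like the following; the threshold, field names, and action list are all invented for illustration and do not reflect Instagram's actual implementation:

```python
# Hypothetical sketch of a rule-based safety trigger like the one described
# above. All names and thresholds are assumptions, not Instagram's system.
from dataclasses import dataclass, field

@dataclass
class Account:
    user_id: str
    age: int  # inferred or self-reported age
    # user_ids of under-18 accounts this account recently messaged
    # or sent follow requests to
    recent_minor_contacts: list = field(default_factory=list)

SUSPICIOUS_CONTACT_THRESHOLD = 20  # assumed cutoff for "a large number"

def is_potentially_suspicious(sender: Account) -> bool:
    """Flag adults who contact many distinct under-18 accounts."""
    if sender.age < 18:
        return False
    return len(set(sender.recent_minor_contacts)) >= SUSPICIOUS_CONTACT_THRESHOLD

def recipient_options(sender: Account) -> list:
    """If the sender is flagged, surface a safety notice in the DM thread
    along with the escape hatches the article mentions."""
    if is_potentially_suspicious(sender):
        return ["show_safety_notice", "end_conversation",
                "block", "report", "restrict"]
    return []
```

The point of the sketch is that no message content needs to be read: the trigger keys off metadata (who contacts whom, and how often), which is consistent with the company's stated move toward end-to-end encryption.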
Instagram said the system will launch in select countries this month and roll out globally “soon.”
The company will also encourage teens with public profiles to make their accounts private by sending out notifications “highlighting the benefits of a private account and reminding them to check their settings.”
The new features aim to make young people safer on Instagram, which research shows was the most widely used platform for child grooming offences recorded during the first lockdown in England and Wales.
Instagram said it is currently evaluating other security measures, including additional privacy settings, and will provide more details about them in the coming months.
Published March 16, 2021 – 17:22 UTC