In a move to better protect its younger users, Instagram is ramping up its use of artificial intelligence (AI) to determine whether teens are lying about their ages on the platform. As part of a new set of measures aimed at creating a safer environment for its younger audience, Instagram’s parent company, Meta Platforms, is now proactively investigating teen accounts—even those that might have entered an inaccurate birthdate when they first signed up.
How Instagram’s AI Age Verification System Works
Meta has been using AI to estimate users’ ages for some time, but the social media giant is now taking a more direct approach. With this new system, Instagram is stepping up its efforts to identify users who may be misrepresenting their age. The AI scans for several signals to help determine whether the account truly belongs to a teenager, including the type of content the account interacts with, the profile information provided, and the date the account was created.
The technology is designed to flag accounts that may belong to teenagers, even when the birthdate entered at sign-up says otherwise. Once an account is flagged, it is automatically classified as a “teen account” and becomes subject to additional restrictions intended to make the platform safer for younger users.
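To make the general mechanism concrete, the sketch below shows, in simplified Python, how a signal-based classifier of this kind might weigh content interactions and profile cues against a declared birthdate. Meta has not published its model, so the signal names, weights, and threshold here are illustrative assumptions, not Instagram’s actual code.

```python
# Hypothetical sketch of signal-based teen-account detection, for illustration only.
# The signal names, weights, and threshold are invented; only the general idea
# (combining behavioral signals and overriding a declared birthdate) comes from
# Meta's public description.
from dataclasses import dataclass
from datetime import date


@dataclass
class AccountSignals:
    declared_birthdate: date          # birthdate the user entered at sign-up
    account_created: date             # when the account was opened
    teen_content_interaction: float   # 0..1 share of interactions with teen-oriented content
    profile_teen_indicators: float    # 0..1 score from profile details suggesting a teen user


def estimated_teen_likelihood(s: AccountSignals) -> float:
    """Combine behavioral signals into a rough likelihood that the user is a teen."""
    # Assumed weights; a production system would learn these from labeled data.
    return 0.6 * s.teen_content_interaction + 0.4 * s.profile_teen_indicators


def should_flag_as_teen(s: AccountSignals, threshold: float = 0.7) -> bool:
    """Flag the account for teen protections if the likelihood clears a threshold,
    regardless of the birthdate the user declared."""
    return estimated_teen_likelihood(s) >= threshold


if __name__ == "__main__":
    sample = AccountSignals(
        declared_birthdate=date(1990, 1, 1),   # claims to be an adult
        account_created=date(2024, 9, 1),
        teen_content_interaction=0.85,
        profile_teen_indicators=0.6,
    )
    print(should_flag_as_teen(sample))  # True: the signals outweigh the declared birthdate
```

In a real system the likelihood would come from a trained model rather than fixed weights, but the key design point is the same one Meta describes: the behavioral signals can override the self-reported age.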
New Restrictions on Teen Accounts: What’s Changing for Young Users?
The new AI-driven measures bring several significant changes for teen accounts on Instagram. For starters, teen accounts will now be private by default, meaning their posts, stories, and other content will be visible only to followers they have approved. Additionally, Instagram will limit access to “sensitive content,” including videos promoting dangerous activities or cosmetic procedures.
In a move designed to protect mental health, Instagram will also monitor the time teens spend on the platform. After 60 minutes of use, teens will receive a notification encouraging them to take a break. The platform will also activate a “sleep mode” between 10 p.m. and 7 a.m., which mutes notifications and sends automatic replies to direct messages.
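The time-based rules are simple enough to sketch. The snippet below is an assumed illustration of that logic, not Instagram’s implementation; only the thresholds (a 60-minute break nudge and a 10 p.m. to 7 a.m. sleep window) come from the announced changes.

```python
# Illustrative sketch of the time-based teen protections described above.
# Function names and structure are assumptions; only the 60-minute nudge and
# the 10 p.m.-7 a.m. sleep window are taken from the announced policy.
from datetime import time

DAILY_NUDGE_MINUTES = 60
SLEEP_START = time(22, 0)   # 10 p.m.
SLEEP_END = time(7, 0)      # 7 a.m.


def should_send_break_nudge(minutes_on_app_today: int) -> bool:
    """Nudge the teen to take a break after 60 minutes of use in a day."""
    return minutes_on_app_today >= DAILY_NUDGE_MINUTES


def is_sleep_mode(now: time) -> bool:
    """The sleep window spans midnight, so check both sides of it."""
    return now >= SLEEP_START or now < SLEEP_END


if __name__ == "__main__":
    print(should_send_break_nudge(75))     # True: past the 60-minute mark
    print(is_sleep_mode(time(23, 30)))     # True: notifications muted, DMs auto-replied
    print(is_sleep_mode(time(12, 0)))      # False: normal daytime behavior
```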
These new measures highlight Instagram’s growing concern over the well-being of its younger audience, as social media companies continue to face scrutiny regarding their impact on mental health, especially for teens.
Why These Changes Are Happening Now: Growing Scrutiny Over Social Media’s Impact on Teens
Meta’s decision to introduce these more proactive AI measures comes at a time when social media companies are under increasing pressure to take greater responsibility for the safety of younger users. Amid growing concerns about the impact of social media on mental health and self-esteem, especially among teenagers, Instagram has come under fire for not doing enough to protect this vulnerable demographic.
In response to these concerns, Meta and other social media platforms are now pushing for app stores to take on a larger role in age verification, arguing that app stores should be the ones ensuring that only users of the appropriate age can access certain apps and features. However, these proposals have faced legal challenges and have yet to be fully implemented.
How Instagram Is Engaging Parents in the Conversation About Online Safety
In addition to AI-driven restrictions on teen accounts, Instagram is also looking to engage parents in the conversation about online safety. According to Meta, the platform will send notifications to parents, offering them advice on how to talk to their teens about the importance of providing accurate age information online. This proactive approach aims to empower parents and help them guide their children as they navigate social media platforms.
“Parents will receive helpful information on how to have conversations with their teens about why giving the correct age online is important,” Meta said. The hope is that by involving parents, Instagram can foster more open communication about online safety and help ensure that younger users are better protected.
A New Era of Social Media Safety: What’s Next for Instagram’s AI Efforts?
As Instagram continues to refine its AI tools and age verification process, the platform is positioning itself to set a new standard for how social media companies protect younger users. These changes, which are still in the testing phase, represent a significant step toward addressing the risks that underage users face on the platform.
While it remains to be seen how effective these measures will be in curbing age misrepresentation and ensuring a safer environment for teens, Instagram’s AI-driven approach to age verification could be a key part of the solution. As the pressure on social media companies to take greater responsibility for user safety increases, Instagram’s efforts may set a precedent for other platforms to follow.