Instagram revamps restrictions on teen accounts

Instagram has updated its restrictions on teen accounts, aligning them with PG-13 movie ratings to prevent teenage users from accessing mature and inappropriate content.

In 2024, Instagram introduced Teen Accounts, which automatically place teens into built-in protections on the app. Last week, the social media platform announced additional updates so that teenagers are only shown content “similar to what they’d see in a PG-13 movie.”

Teens under 18 will be automatically placed into the updated setting and will not be allowed to opt out without a parent’s permission. The new restrictions ban users from searching inappropriate words and from following or messaging accounts with mature content.

Father Michael Baggot, LC, professor of bioethics at the Pontifical Athenaeum Regina Apostolorum in Rome, said “any change to help empower parents, protect their children, and restrict age-inappropriate content from them is a positive step forward.”

“However, I am concerned because there is quite a difference between static content like a movie that can be thoroughly reviewed by a committee and very dynamic conduct that is performed in social media,” Baggot said in an Oct. 20 interview on “EWTN News Nightly.” 

Social media platforms expose users to cyberbullying, online predators, and artificial intelligence (AI) companions. “Those kinds of dynamic relationships are not necessarily regulated fully with a mere label,” Baggot said.

The updates follow feedback from thousands of parents worldwide who shared their suggestions with Instagram. In response, Instagram also added a setting that applies even stricter guidelines for parents who want more extensive limitations.

“Parents have a unique responsibility in constantly monitoring and discussing with their children and with other vulnerable people the type of interactions they’re having,” Baggot said. “But I think we can’t put an undue burden on parents.”

Baggot also suggested additional laws to hold companies accountable for “exploitative behavior or design techniques” that can “become addictive and really mislead guidance and mislead people.”

AI in social media 

Since Instagram recently introduced AI chatbots to the app, it also added safeguards on messages sent by the AI. The social media platform stated that “AIs should not give age-inappropriate responses that would feel out of place in a PG-13 movie.”

AI on Instagram must be handled with “great vigilance and critical discernment,” Baggot said. AI platforms “can be tools of research and assistance, but they can also really promote toxic relationships when left unregulated.”

Measures to restrict AI and online content are opportunities for parents and users, Baggot said, “to step back and look critically at the digitally mediated relationships that we constantly have” and to “look at the potentially dangerous and harmful content or relationships that can take place there.”

“There should be healthy detachment from these platforms,” Baggot said. “We need healthy friendships. We need strong families. We need supportive communities. Anytime we see a form of social media-related interaction replacing, distracting, or discouraging in-person contact, that should be an … alarm that something needs to change and that we need to return to the richness of interpersonal exchange and not retreat to an alternative digital world.”
