Meta to restrict sensitive content for teen users on Facebook and Instagram
The National Alliance on Mental Illness is one of the “expert resources for help” a teen will be directed to if they search for this type of content on Facebook or Instagram. Teens also won’t be able to see content in these categories even when it is shared by accounts they follow. The changes apply to users under the age of 18.
Teen accounts will default to restrictive filtering settings that change the kind of content they see on social media. The change affects recommended posts in Search and Explore that could be “sensitive” or “low quality,” and Meta will automatically set all teen accounts to the most stringent settings, though users can change those settings.
In November, Bejar testified before a Senate Judiciary subcommittee, saying Meta failed to make its platforms safer for kids despite knowing about the harm they cause. His testimony came two years after Haugen detailed similar findings in the Facebook Papers.
Even beyond porn, lawmakers have signaled they are willing to age-gate large swaths of the internet in the name of protecting kids from certain legal content, such as material dealing with suicide or eating disorders. Reports have shown that teens’ feeds are often filled with harmful content. But blocking all material besides what platforms deem trustworthy and acceptable could prevent young people from accessing other educational or support resources.
The UK government says the Online Safety Act will make the country the safest place in the world to be online, but critics say the law could violate user privacy. The messaging app Signal has said it would leave the UK rather than collect more data that could jeopardize user privacy.
A group of more than 40 states also filed lawsuits against Meta in October, accusing it of designing its social media products to be addictive. Their lawsuits relied on evidence from Facebook employees.
The move came as a bipartisan group of federal lawmakers, led by Sen. Richard Blumenthal, D-Conn., stepped up their campaign to quickly pass the Kids Online Safety Act. If passed, the legislation would hold tech companies accountable for feeding teens toxic content.
In May of last year, the U.S. Surgeon General warned about the risks of social media for kids, saying the technology was contributing to the mental health problems of young people.
A Meta spokeswoman acknowledged people can misrepresent their ages on Facebook and Instagram. She told NPR that the company is investing in age verification tools and technology that can better detect when users lie about their age.
“You do not need parental permission to sign up for a social media account,” Twenge says. “You check a box saying that you’re 13, or you choose a different birth year and, boom, you’re on.”
Jean Twenge, a psychology professor at San Diego State University and author of the book Generations, says this is a step in the right direction but that it’s still hard to police who is actually a teen on Facebook and Instagram.
The new policies come as Meta is facing dozens of state lawsuits, possible federal legislation and mounting pressure from child safety advocacy groups to make its social networks safer for kids.
“Now when people search for terms relating to suicide, self-harm, and eating disorders, we’ll hide those results and suggest expert resources for help,” according to Meta.