California lawmakers are looking to introduce legislation that would prohibit social media platforms from serving addictive feeds to minors and from sending them notifications overnight and during the school day. The nationwide push for social media safety laws was sparked by a Senate hearing earlier this year, at which the CEOs of several platforms, including Meta, TikTok, and Discord, testified about the rise in cyberbullying, mental health issues, and exploitation tied to their services.
“There should definitely be age restrictions such as 13-plus, which can definitely help to stop child blogging, family vlog channels, and kid influencers, which is a form of child exploitation, because you’re forcing kids to work. … But I also think that there should be content regulation across all of the social media platforms. There are often a lot of people who are using things like TikTok and Instagram as money-making platforms by using sexual or explicit content. There are people that are selling drugs online, and those things can easily get into the hands of people under 18 because those people are allowed to be on social media.”
— Leilani Chen, 9th grade
“The age regulation by [lawmakers] is not effective because most teenagers usually lie about their age and make bad decisions on social media, so it’s hard to stop. Even nowadays, teenagers who are very young, such as 10 to 11 years old, have more access to social media. One way to stop that is by letting parents know the dangers of social media and their effects on younger children.”
— Asya Buyukcangaz, 11th grade
“I think the parents need to be properly educated on the risks of social media, and they should maybe know how to properly regulate their kids’ access because kids running wild on the internet is usually not the greatest thing. And if the parents are at least aware of it, they could do more to prevent it and at least regulate the kids’ access.”
— Caiden Soltesz, 12th grade
“They [social media platforms] have content filters for nudity, they have content filters for some political misinformation, but it’s difficult to have a filter for everything. And a lot of filters then become reactive. So you have to wait till there’s a problem in order for you to see it and then correct it. … But to actually make that happen and make those filters robust in a way that nobody can get through, you also need to have human intervention. And they [social media companies] don’t want to spend a lot of money not only to make the filters, but also to make sure that the filters are working in the way that they want, which takes extra people to review it.”
— Christopher Bell, computer science teacher