New York’s chronological social media feeds legislation could backfire. Here's why

Picture yourself scrolling through your favorite social media app, anticipating updates from friends, news and maybe some inspirational content.

But instead, you're bombarded with a mishmash of spam, hate speech and misinformation, simply because those are the latest posts.

This might sound like a digital nightmare, but it's a reality that could be around the corner for New York if state lawmakers pass legislation (S7694 and A8148) mandating purely chronological feeds on platforms like YouTube, Instagram and TikTok.

While the intention behind these bills is undoubtedly good — to protect teenagers from harmful content online — their unintended impact is cause for serious concern. Research has shown time and again that the purely chronological feeds these bills would mandate can have detrimental effects on our online experiences.

For instance, Meta’s internal tests, revealed by whistleblower Frances Haugen, found that under a chronological-only feed, meaningful social interactions dropped by 20%, and users hid 50% more posts. This means that not only would our online interactions become less meaningful, but we'd also be exposed to more unwanted content, including untrustworthy material.

The truth is that the algorithms social media companies use play a critical role in reducing harmful content and ensuring minors see age-appropriate posts.

A photograph taken during the World Economic Forum (WEF) annual meeting in Davos on January 18, 2024, shows the logo of Meta, the US company that owns and operates Facebook, Instagram, Threads, and WhatsApp

For example, last year Snap announced that it would increase efforts to proactively detect inappropriate content and remove it from user feeds, that it would make it harder for strangers to contact teens by hiding some minors’ accounts in search, and that the platform would highlight resources, including help hotlines, when young people search for harmful content. All of those positive changes rely on the power of algorithms.

In addition to restricting beneficial algorithms, New York’s bills could inadvertently cut off access to vital support for teens from unsupportive families by creating more barriers to accessing digital resources.

Consider teenagers searching for support networks or information on sensitive topics like LGBT issues. New York’s digital legislation includes provisions that make teen access to online resources contingent on parental consent. But for teens whose parents don’t recognize their identity, outing a digital profile to parents carries serious risks.

In fact, polling released earlier this year by our organization shows that millions of parents would, if given the legal tools, restrict teens from accessing LGBTQ and reproductive health resources. According to our survey, 41% of adults said parents should stop teens from accessing content promoting LGBTQ communities, while 54% of conservative adults said that parents should restrict teens from viewing reproductive health information online.

While New York’s digital legislation is also intended to protect privacy for young people, the bills would end up exposing personal data for all users, regardless of age.

In order to meet the legislation’s requirements for young users, platforms would be forced to conduct age verification by collecting personal information and even biometric data. Sharing that data anywhere — especially if it’s required across digital media platforms — creates security risks for users and contradicts principles of data minimization. For New Yorkers online, the policies would require a tough choice: share sensitive personal information or forfeit participation in digital platforms altogether.


Bills like these also raise constitutional questions about editorial freedoms. Courts have consistently struck down similar laws targeting social media design in the past, citing First Amendment violations. By dictating online platforms’ editorial choices regarding how to display posts and by forcing users to choose between privacy and speech, these regulations risk infringing on core constitutional rights.

As lawmakers navigate the complexities of digital safety and content moderation, it's critical they keep in mind not only their goal of protecting young users online, but also the unintended consequences those users could face under new requirements on when and how minors can access content.

Deciding what speech users see on the Internet and how they can see it is a far-reaching decision that deserves more thought and debate than it’s received so far.

While banning algorithms might be an easy talking point, it’s time for New York lawmakers to take on the hard work of exploring how to build up digital safety without reducing access to digital resources for teens and adults.

Adam Kovacevich is the founder and CEO of Chamber of Progress, a tech industry policy coalition promoting technology’s progressive future.

This article originally appeared on NorthJersey.com: NY legislation on chronological social media feeds could backfire