[Image: A public library after school, teens playing board games at a table, a phone basket on the counter, bookshelves behind, sunlight through the windows.] Social media regulation pairs with offline institutions that make real community easier.

Less Feed, More Life

What would help Americans scroll less? Friction, privacy limits, and offline defaults could shift behavior at scale.


Here is the uncomfortable fact: most Americans now get their news from social and video platforms. More than TV. More than news sites and apps. Our public square has been quietly subcontracted to feeds tuned for time‑on‑platform, not truth‑seeking or neighborliness. We feel the cost in our bones—sharper extremism, thinner civility, cultural tribes that shout past each other, rumors that outrun corrections, and a steady undertow of loneliness. Especially for the young, the scroll isn’t just a pastime; it’s the water they swim in.

And the research is stubborn. When people use less social media, they hurt less. In randomized trials, trimming use to about thirty minutes a day lowers loneliness and depression; a one‑week break nudges anxiety down and well‑being up. The gains are modest, yes—but they’re real. Which means the real question isn’t whether less is better. It’s how to make “less” the easy choice for millions of people at once.

When people use less social media, they hurt less.

When Utah Governor Spencer Cox recently encouraged listeners to “touch grass,” he was acknowledging that our social media echo chambers are not helping our society, and they are not helping us individually. But powerful forces pull people back into those ecosystems, and well-meaning encouragement alone won’t address the problem.

If the system is shaping us, then we have to reshape the system: its incentives, its defaults, its hours, its business model. What follows are a few practical legal and social ideas that may help address the raft of negative consequences of social media.

Refit Section 230: A safe harbor you keep only if you sail safely

Section 230 of the federal Communications Decency Act was built to keep platforms from being sued as publishers for what users post, and to let them moderate in good faith. Over time the shield has stretched to cover not just hosting speech, but how platforms distribute and rank it. That expansion wasn’t carved into the Constitution; Congress wrote Section 230, and much of the stretching has come at the hands of well-meaning court rulings. But courts decide one case at a time; they lack the broader view a legislature can take. Congress can and should update Section 230.

The fix isn’t necessarily to blow up 230. That could invite chaos. But we could make the Section 230 shield conditional on predictable, speech-neutral design choices:

  • No immunity for paid placement. Ads and paid “boosts” should live under ordinary tort and consumer protection law, not inside 230’s blanket.
  • Narrow protection for risky amplification. When a recommender system actively pushes content, immunity shouldn’t apply. That’s an editorial decision, whether an algorithm makes it or a person does.
  • Reasonable design and transparency to keep the shield. Think chronological feeds and overnight quiet hours for minors by default, documented age assurance, and researcher access to basic risk metrics.

Why this matters: today’s largest platforms depend on two things—paid targeting and opaque, engagement‑maximizing ranking. If paid boosts lose 230’s protection, and if default friction becomes the price of immunity, the business math changes. Lawsuits won’t swallow the internet; the First Amendment still limits claims. But the near‑automatic shield over product design would no longer be unconditional.

Section 230 was created specifically to give internet platforms legal protections that other publishers don’t enjoy. The social media regime that exists today could not survive without those extra protections, which means Congress can condition them without implicating the First Amendment even a little bit.

Starve the Surveillance Ad Engine

Engagement‑hungry design exists because surveillance targeting is so profitable. If we limit the precision and persistence of tracking, then time on social media becomes less lucrative, and the perverse incentives drop. 

Europe is already proving the point: the Digital Services Act bans targeted ads to minors and profiling‑based ads that use sensitive data. Enforcement has forced real product changes (LinkedIn has already disabled a targeting tool in Europe). A U.S. version can go further while staying speech‑neutral.

A clean U.S. starting point is already on the books in California. The 2023 Delete Act (SB 362) requires the state to launch a single portal—DROP—by Jan. 1, 2026. Beginning Aug. 1, 2026, data brokers must check the portal at least every 45 days and purge the personal data of anyone who files a deletion request. If every state adopted that same one-click ease of deletion, we could start to see a big change.

Pair data deletion with federal bans on targeted ads to minors and on the use of sensitive data for ad targeting, and you drain much of the oxygen from engagement‑hungry feeds without restricting anyone’s speech.

When the ROI on hyper‑personalized ads falls, investors and product teams shift: calmer, subscription‑leaning models look better; contextual ads regain ground; feeds lose pressure to maximize time‑on‑platform at all hours. 

Advertising itself isn’t what causes social media’s problems; it is the advertising revenue that incentivizes platforms to cause those problems, dragging consumers back over and over and profiting off our worst instincts.

Make Healthy Design the Default

Certain default settings make it extraordinarily easy to draw people back in. We can pass simple laws requiring high‑friction interface defaults, without limiting individuals’ ability to switch those settings back if they prefer. For example:

  • Forwarding limits. WhatsApp’s cap on forwarding already‑viral messages to a single chat produced a 70% drop in “highly forwarded” messages. 
  • Autoplay off. A randomized study of Netflix users found that disabling autoplay reduced session length and total watching. Autoplay is a sticky design pattern; switching it off by default trims use without banning anything.
  • Default chronological feeds and overnight quiet hours for minors. New York’s SAFE for Kids Act now bars algorithmic feeds for minors unless parents opt in, and blocks notifications between midnight and 6 a.m. The proposed rules detail how to verify age and consent. 

States could experiment with these rules, or Congress could nationalize these defaults by giving the FTC clear authority—building on its consumer protection powers—to set baseline attention‑safety standards for large platforms, especially for minors. This is still a far cry from having a large Surgeon General’s Warning each time you log into Instagram that says, “Social Media has been shown to lead to anxiety, depression, and loneliness.” But if we can’t make smaller changes to reverse this trend, that might be precisely what is needed.

These small design decisions bend millions of daily personal choices without taking those choices away from consumers.

Make “Offline” the Default

There’s a fourth way to curb our dependence on social media that doesn’t require a single new statute: change what our institutions expect of us. When schools, workplaces, congregations, and community spaces set better defaults, people spend less time in the feed—because the offline choice becomes the easy choice. It’s culture. And culture often moves faster than law.

Schools can reclaim the school day with phone‑free policies—pouches or lockers, with clear exceptions for emergencies. Pair that with analog alternatives (board‑game tables, open gyms, music rooms, maker spaces) so lunch offers engagement without screen time.

Culture often moves faster than law.

Workplaces can establish more durable boundaries. Adults didn’t invent being attached to their phones all night; they do it because they could so rarely disconnect from work, and that gap got filled with doomscrolling and memes. Most offices can set quiet hours as a matter of policy, hours during which no one will contact you. Delay‑send features can hold after‑hours emails until morning. Changes as simple as printing agendas again can create a culture that doesn’t keep us dependent on the phone.

Rituals are more durable than individual resolve, and congregations and faith groups can play a key role in building them. For example, in 2018, President Russell M. Nelson invited Latter‑day Saint youth to a seven‑day social‑media fast, and later invited women to try ten days, framing abstention as a joyful reset of attention and purpose. Any congregation, club, or neighborhood can copy the pattern: announce a time‑bound fast and fill the gap with service and fellowship. These groups can also meet the desire for connection that so often feeds the unhealthiest social media habits.

“Third places”—places where you are allowed to exist without paying money—have seen a precipitous decline, and often the easiest and most comfortable substitutes are online. Congregational connection can help fill that gap, and so can libraries, parks, and other civic institutions that find ways to engage people, especially the young. And might I suggest the ancient and still relevant practice of breaking bread with one another face-to-face.

Less social media won’t come from one heroic law. It will come from a hundred ordinary decisions—repeated until they feel like the way things have always been. That’s culture, and ultimately culture is what will turn this around.

Hard questions, honest answers

Skeptics will argue that these proposals flirt with censorship, invite doomed lawsuits, or amount to cosmetic fixes. It’s true that free speech doctrine sharply limits what states can do, and that even without Section 230, many claims will still fail on First Amendment or causation grounds. It’s also true that warning labels and nudges alone rarely change behavior. Those cautions matter.

But the core of these suggestions is different. They don’t tell platforms what they must carry or suppress. They focus on distribution mechanics, ads, data, and design—areas where Congress clearly has authority to condition immunity or regulate trade practices in content‑neutral ways. And the record shows that friction rules do more than signal: forwarding caps have slashed virality, turning autoplay off trims viewing time, and randomized trials confirm that short breaks improve well‑being. These changes may not solve everything, but they move the needle in measurable, constitutional ways.

If we want less misinformation, fewer extremism incentives, better privacy, and less loneliness, we should stop pretending a perfectly disciplined thumb is the answer. Make healthier design the default. Our social media death spiral was created by our culture. And if we want to address it, we need to find a way to change that culture. Perhaps that will happen through laws to change the incentives. Perhaps it will take going after the culture itself. Now is not the time to wait for perfect answers. It’s time to start trying things. 

About the author

C.D. Cunningham

C.D. Cunningham is a founder and editor-at-large of Public Square magazine.