
Free speech in the US: Congress struggles to draw a line in the sand

Many of his critics celebrated when Twitter recently banned a former New York Times journalist labelled “the pandemic’s wrongest man.” However, others, including some who disagree with him, expressed alarm about a world in which private firms might stifle dissenters in today’s digital public arena by taking their cues from mainstream media and government officials.

Alex Berenson grew his Twitter following to more than 344,000 over the past year and a half by criticizing public health officials’ response to the outbreak. Like many Twitter pundits, he was irreverent and controversial. But he frequently provided screenshots of statistics, charts, and scientific papers to support his views.

His fans complimented him for bringing attention to difficult truths that few others did. Many researchers, journalists, and public health professionals, on the other hand, criticized him for cherry-picking scientific evidence to create dubious or even harmful storylines, most notably his assertion that COVID-19 vaccinations were not nearly as safe or effective as previously claimed.

Twitter sided with Mr. Berenson’s critics on Aug. 28, permanently suspending his account after he tweeted that COVID-19 vaccines are at best “a therapeutic with a limited window of efficacy and terrible side effect profile.” The company cited repeated violations of its COVID-19 misinformation policies and removed all his tweets from public view. Mr. Berenson is now writing mainly on Substack, where tens of thousands of his Twitter followers have migrated – many offering to contribute to his legal fees if he sues Twitter.

“I am up against basically the entire media, legacy and social, and the federal government,” says Mr. Berenson in an emailed comment, “and the only answer they had to the questions I raised was to cut off my access to a platform designed for free speech?”

Nearly everyone agrees that misinformation on social media is a growing problem. But what, exactly, constitutes misinformation – and who should have the power to make that determination – is hotly debated.

Congress is increasingly wrestling with such questions as social media companies amass more wealth, power, and influence over public thought and discourse, and as citizens get more of their news from algorithm-tailored feeds than from traditional media outlets. And the pandemic has raised the stakes: Many now see the need to thwart misinformation as a life-or-death issue.

Facebook: safety concerns trump expression

Facebook’s head of misinformation policy, Justine Isola, said earlier this year that when there’s a risk of imminent harm, that trumps concerns about freedom of expression. Many Democratic members of Congress agree.

“I’m on the side of trying to save people’s lives and make sure that companies are not profiting off of spreading dangerous misinformation,” says Sen. Ben Ray Lujan of New Mexico, who has co-sponsored a bill with Minnesota Sen. Amy Klobuchar that would increase social media platforms’ liability for spreading health misinformation in a pandemic if it is promoted by their algorithms. Senator Klobuchar says that platforms should deploy their employees to determine what’s true and not true, just like other media organizations, even if it’s a complex, time-intensive task. “I just think that they should be able to use part of their humongous profits to make sure we’re not getting misinformation,” she says.

But others have deep concerns about Congress requiring a handful of powerful private corporations to effectively censor viewpoints that contradict public health officials. The platforms’ misinformation policies already rely on statements by those officials to determine what is credible. 

“The United States government should not be leveraging its power and authority to try to make these tech companies arms of the state,” says Sen. Josh Hawley, a Missouri Republican and author of “The Tyranny of Big Tech.”

Critics say there is a clear pattern of bias against conservative viewpoints on social media platforms. On July 7, former President Donald Trump, who was banned from social media for violating their policies, filed class-action lawsuits against Facebook, Twitter, and YouTube, arguing they violated the First Amendment.

The First Amendment provides that “Congress shall make no law … abridging the freedom of speech.” Many legal scholars argue that since social media platforms are privately owned, they are not bound to allow freedom of speech. But there is ongoing debate about that. 

Daphne Keller, former associate general counsel for Google who now directs Stanford University’s Program on Platform Regulation, argues that most of the misleading information on social media platforms that is causing serious harm is protected by the First Amendment, so the government couldn’t require platforms to take it down.

“What many people think is the moral, socially responsible, right thing for platforms to do is something Congress cannot mandate,” she says. “The only way to get it done is for platforms to do it voluntarily.”  

To be sure, contrarians are not the only ones who have been wrong about COVID-19. Scientists, politicians, and journalists have also made assertions that turned out to be incorrect – and while they cite evolving science, critics see politicization at work, too, and say that’s the danger of platforms relying on official consensus to determine truth.

They note that some things initially dismissed as “misinformation” were later deemed worthy of investigation, most notably the hypothesis that the pandemic may have started with a lab leak in Wuhan, China. In late May, when President Joe Biden ordered the intelligence community to conduct a 90-day review of all available evidence on the lab-leak theory, Facebook changed its misinformation policy the same day. By then, investigators had already lost more than a year in which to press China for answers.

Such premature labeling and dismissal of “misinformation” could interfere with the process of scientific inquiry – and that, too, could have deadly consequences, some argue.

“There’s a danger of groupthink, of mobbing people who dissent, and the last place you want that is in science,” says Philip Hamburger, a professor at Columbia Law School and president of the New Civil Liberties Alliance. 

What’s getting banned?

The scope of the challenge adds urgency. Facebook and YouTube have more than 2 billion users each, and far more content than any organization could review in real time; on YouTube alone, 500 hours of video are uploaded per minute, according to the most recent data available – roughly 82 years of footage every day. If misleading information didn’t spread so quickly, it wouldn’t be nearly as much of a concern. And if a few tech giants didn’t control today’s digital public square, bans wouldn’t be so consequential.

“They’ve now become gatekeepers to the public square,” says GOP Sen. Marco Rubio of Florida. “You literally cannot engage in political discourse in America if you don’t have access to those sites.” 

So what type of content do social media platforms ban? It ranges from “widely debunked” claims about the adverse effects of vaccines (Twitter), to content encouraging prayer as a substitute for medical treatment (YouTube), to claims that COVID-19 deaths are overstated (Facebook).

This summer, Twitter said it had suspended 1,496 accounts and removed more than 43,000 pieces of content since introducing its COVID-19 misinformation policies.

YouTube, which is owned by Google, has removed more than 1 million videos that go against its standards since February 2020.

And Facebook has taken down more than 3,000 accounts, pages, and groups, and more than 20 million pieces of content that violated the company’s COVID-19 and vaccine misinformation policies, according to an Aug. 18 statement by Monika Bickert, vice president of content policy. 

Some of Facebook’s takedowns involved 12 individuals dubbed the Disinformation Dozen by the Center for Countering Digital Hate, whose recent report estimated that these influencers accounted for up to 73% of Facebook’s anti-vaccine content. Ms. Bickert disputed that assessment, which was based on a limited data set. 

Facebook has sought to automate content moderation. But it also works with more than 80 fact-checking organizations certified by the International Fact-Checking Network. In addition, White House press secretary Jen Psaki told reporters in July that the Biden administration was “flagging problematic posts” for Facebook.

Ms. Psaki’s admission prompted Senator Rubio to propose a bill that would require platforms to disclose within seven days any request or recommendation by a government entity to moderate user content, or face a fine of $50,000 per day of noncompliance.

More than 20 bills this year alone

Senator Rubio’s bill is just one of more than 20 introduced in Congress this year that target a key legal underpinning of social media platforms’ success. Known as Section 230, the provision protects social media platforms – and other “interactive computer service” companies – from being held legally responsible for user content posted on their sites, with a few exceptions. It also protects their ability to moderate content, such as restricting access to material they deem “obscene … excessively violent … or otherwise objectionable, whether or not such material is constitutionally protected.”

Democratic Sen. Ron Wyden of Oregon, a co-author of Section 230, defends it as crucial to enabling social media companies to address misinformation about COVID-19 vaccines.

“Why would you take away the one tool in law that allows an important participant – the platform – to take that garbage down?” he asks. 

But many note that the digital landscape has changed dramatically since 1996, when Congress passed the provision, which cited the “true diversity of political discourse” offered by the internet and a desire to “preserve the vibrant and competitive free market” online. Both Mr. Biden and Mr. Trump called for revoking Section 230 in their presidential campaigns, and a growing number of lawmakers see the provision as needing to be amended, overhauled, or scrapped altogether – though for widely varying reasons.

Democrats want tech companies to crack down harder on misinformation, as well as on other content categories, such as hate speech. Republicans want to dial back what they see as censorship of conservative viewpoints in the name of thwarting misinformation.

Other solutions besides government regulation

While many in Congress are agitating for change, it’s unclear they can achieve the unity needed to pass new legislation. And some say government regulation isn’t the answer. 

“I think the problem with both Klobuchar and Hawley is they’re looking to government solutions for something that is a social problem,” says Neil Chilson, senior research fellow for technology and innovation at the Charles Koch Institute. “I don’t think we want government dictating to platforms or any other media channel what content they can carry, or how they should make the rules about what is truth on their platforms.”

Part of the challenge is that many social media users are not aware of how algorithms work behind the scenes to influence them. Platforms’ business models are based on maximizing user engagement with content – the more time users spend on the sites, the more platforms can profit by selling users’ attention to ad companies. And misinformation gets greater user engagement than accurate news. The German Marshall Fund found that user interactions with misinformation on social media spiked during the pandemic, and were far greater than average engagement with more than 500,000 news sites. Such misinformation often exploits emotions, leading some to see a systemic issue with social media platforms.

“Content that is engaging is very often content that is enraging,” says Laura Edelson, a software engineer and researcher at New York University’s Cybersecurity for Democracy. “What that means is you do not need to build a system to actively promote misinformation; you can build a system that optimizes for engagement alone, and that will end up promoting misinformation.”
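A minimal sketch can make that dynamic concrete. The toy Python ranker below is entirely hypothetical – the field names and scores are invented for illustration and correspond to no platform’s actual systems – but it shows how sorting purely by predicted engagement lets misleading, emotionally charged posts rise to the top without any code that deliberately favors them.

```python
# Hypothetical illustration: a feed ranker that optimizes for predicted
# engagement alone. Nothing here reflects any real platform's code.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_engagement: float  # model's estimate of clicks, shares, replies
    accuracy_score: float        # available in the data, but never consulted

def rank_feed(posts: list[Post]) -> list[Post]:
    # The sort key is engagement only, so an enraging-but-false post
    # outranks a sober-but-true one as a side effect of the objective.
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

feed = rank_feed([
    Post("Measured public health update", 0.2, 0.9),
    Post("Outrage-bait claim that draws angry shares", 0.8, 0.1),
])
print([p.text for p in feed])  # the outrage-bait post comes first
```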

Just how that works, and the role algorithms play, is something she had been trying to understand – until Facebook suspended her account last month for what it called unauthorized collection of user data. She disputes the charge.
