Against Abdicating Power to Big Tech

We can’t rely on Big Tech to reign over online speech

Claire Bond Potter
[Image: a version of the California flag with “Twitter” replacing “California” in the banner. Scott Beale / Creative Commons]

In the weeks after thousands of social media-savvy reactionaries stormed the United States Capitol on January 6, Big Tech has slashed away at the communications infrastructure that helped fuel the insurrection. Social media companies have purged accounts associated with the far-right conspiracy theory QAnon and others promoting anti-government hashtags. The internet forum Reddit, where political trolling and conspiracy-mongering have flourished, took down “r/donaldtrump” and other subreddits containing violent political speech. Citing death threats against a wide range of people, Google, Apple, and Amazon effectively disabled the conservative microblogging platform Parler, where some MAGA organizing had taken place. And, most notably, first Twitter, then Facebook, Instagram, and YouTube booted Donald Trump from their platforms—while he was still president.

Conservatives have howled: They were being censored by Big Tech. The day after the Capitol riot, Republican Representative Marjorie Taylor Greene of Georgia, a fervent Trump ally and QAnon conspiracy theorist, tweeted: “The Silicon Valley Cartel’s ban of @realDonaldTrump is an attack on the free speech of all Americans. Next it could be you.”

“I was hoping,” I tweeted in reply, “that next it would be you.”

But is Greene entirely wrong? If it is conservatives who are de-platformed today, will it be activists on the left—who have their own trolls, violent rhetoric, and animus toward politicians who thwart them—tomorrow?

The corporations that make up Big Tech have made themselves central to modern communications and broadcasting, and tout themselves as a digital public square. But they have effectively no legal obligation to protect free speech, because they are private companies. These multibillion-dollar corporations don’t have to censor: Section 230 of the Communications Decency Act of 1996 means that they are not legally considered to be publishers or speakers. This means that individuals who post, not tech companies themselves, are responsible for content appearing on their sites.

Because they are not legally liable for content, tech companies also have no obligation to moderate, and are not required to be politically neutral. This allows them to host all content—including words that incite hate, violence, or mockery—without fear of consequences. Here’s what that means: Having been told in June 2020 that he could not sue Twitter over a parody account purportedly written by his cow, Representative Devin Nunes, a California Republican, is still suing @DevinCow, who is running a GoFundMe for their legal defense.

But as the MAGA takedowns underline, Section 230 does not create a free speech zone because, even as users are liable for their own words, it does not protect them from the companies that own social media sites and the servers that support them. If they want to boot you off their platforms, they can.

Big Tech’s cautious and belated attempts to rein in Trump and his allies have motivated conservatives, who have perhaps benefited disproportionately from Section 230’s protections, to call for its elimination. In the misguided belief that Section 230’s elimination would automatically expand—or as they perceive it, restore—First Amendment protections, politicians like Texas Senator Ted Cruz cast the internet as a public, not private, space. Social media companies are, as Cruz charged, “the single biggest threat to free speech in America.”

People like Cruz want to repeal Section 230, it seems, to protect their right to disseminate falsehoods that misrepresent Democrats and bolster their support among conservative, conspiracy-oriented populists. Their reasons for repealing Section 230 are unsavory. But the bigger question they inadvertently raise concerns everyone: Do we want the corporate giants who have made billions from almost-unfiltered internet speech, rather than our elected representatives and our courts, to decide what our collective right to free speech will be?

In the United States, speech has never been fully free. Prior to the Civil War, slave states banned abolitionist speech, and during both World Wars, military censors—in the interest of national security—determined the limits of what could be said, broadcast, or printed. Nor was censorship confined to wartime: sexuality, race relations, crime, and violence in media were all regulated, formally and informally, by the courts, the Post Office, the Federal Communications Commission, and the Department of Justice.

Corporations have also played a role in regulating media speech. Beginning in 1934, the Federal Communications Commission made broadcasting licenses contingent on conduct, and the film industry—through the Hays Code and, subsequently, the Motion Picture Association of America—embraced self-regulation. Where Section 230 departs from this tradition is that the government plays no role, leaving technology companies as the sole arbiters of what speech will be allowed online. Meanwhile, the few start-ups that existed in 1996 have consolidated into a multi-trillion-dollar global industry with a few players that are capable, as we have seen over the last five years, of exerting a decisive influence on politics.

Internet corporations are, again, private companies. There is no inherent right to use them, and no obligation on the part of a platform to disseminate a message. This is what conservatives want to change—possibly by nationalizing the companies (socialism, anyone?)—and to make things more complicated, the messages many on the right wish to protect are divisive, and often false. Nevertheless, progressives have something at stake too. Since 2000, our entire political system has been reshaped around digital media. Apps, social media platforms, and credit cards have become central, not just to campaigning, but to the constant flow of information that tells us what government is and does.

This is true not only for outsider organizing but for mainstream democracy. We who canvassed, phone banked, fundraised, organized, wrote postcards, and promoted the Biden-Harris ticket on blogging platforms were all dependent on the same companies that have just taken down Parler. Canvassing software was built on Amazon Web Services; Visa and Mastercard processed donations. We broadcast our own messages on social media platforms like Facebook, Instagram, and Twitter.

However, concern for democracy is not why it has taken Big Tech years to meaningfully restrict violent and hateful speech, despite the fact that they can. Restricting speech cuts into these corporations’ value. Long before Donald Trump emerged as a political figure in 2011 by embracing the social media-driven conspiracy theory that falsely claimed Barack Obama was not born in the United States, internet companies profited from fake news and from the proselytizers who flooded their digital platforms with it.

Trump only took advantage of, and accelerated, social media companies’ incentive to make their platforms sticky—building a compulsion for users to return to and remain on a platform, using it longer, viewing more ads, and becoming ever more engaged—through algorithms engineered to promote hate and fear. As our democracy crumbled after 2016, Big Tech profited. They profited from Trump using his social media accounts as an unmediated, truth-free channel, and they profited from the followers who reposted his lies, their own memes, and spurious attacks on his enemies. They profited from the internet organizing that produced the Unite the Right rally in 2017. They profited from legions of white supremacist Trump supporters who used social media platforms not only as tools for recruiting and radicalization but also as a channel to advertise and sell faux military gear, flags, commemorative coins, gold bars, and—most recently—quack COVID-19 remedies.

The impact of these profitable lies is challenging to measure, but there is no question that they have distorted our democracy in the name of free speech. Trump’s own Facebook page—taken down on January 7 as it spread baseless claims of election fraud—had over 35 million followers, a figure nearly equivalent to the population of Canada. Intentional and malicious falsehoods do not constitute the free speech that our Constitution was written to protect, even if it often allows them; it is rarely illegal to deliberately lie to the public, as Donald Trump, his associates, and his followers did about the outcome of the 2020 election. But publishing such lies is a choice, and it is one that digital media corporations have perpetuated invisibly, both in the moment and for the twenty-five years that they have enjoyed the protections of Section 230. As private corporations, Big Tech firms have no legal obligation to publish or disseminate lies; in fact, they currently have more power to regulate speech than the government does. Yet they continue to publish falsehoods. So what do we do?

First, we need to remember that social media is not inherently good or bad: It is engineered. Unlike traditional media, social media chooses you as much as you choose it. Algorithms are already deciding not who speaks but who—and how many—hear what has been said. Special interests can pay for wider and targeted distribution, but tapping into a user’s emotions—as Facebook learned in a 2014 study—doesn’t just spread ideas; it creates emotional contagion.

And algorithms that fuel hate, violence, and abuse are more profitable because of the stickiness associated with emotion—negative emotions in particular. Tech firms harness emotion to alter human behavior, keeping users returning to a platform obsessively—a feedback loop that the writer Jaron Lanier characterizes as not just the opposite of freedom but “continuous behavior modification on a titanic scale.” People who stay on a platform longer can, in turn, be shown more advertising. More importantly, they can be radicalized in ways that wed them to internet platforms and leave more data behind. That data can both be sold and used to sell things to them: Danish furniture, yoga pants, zip ties, ammunition.

Policing speech is not the answer, because what is said is only a small part of the problem: The larger issue is how millions of speech acts and falsehoods are woven together to create an understanding of society, culture, and politics that is untethered from reality. The Federal Communications Commission—under the auspices of legislation passed by Congress—could, instead, oversee the ethical re-engineering of the choices Big Tech makes to attract and hold users. This would mean that, instead of having suggested content invisibly tailored to them through data already compiled on them and being nudged towards certain sites, users would (just as with privacy settings) have access to a map of their data profiles and would be reminded to check it periodically, with the aim of understanding and controlling their online choices.

Ethical re-engineering would also ask companies to profile content producers in the same way that they profile subscribers, making decisions about the likelihood that the content is ethical and reliable. Twitter has begun to flag falsehoods and ask users to read content before retweeting, a policy change that has been effective. It’s a good model for producer-oriented algorithms across platforms, which might someday present users with choices and flag the nature of those choices. I can imagine popup boxes that say: “This account has been associated with violent conspiracy theories and misinformation. Do you wish to proceed?”
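
To make that mechanism concrete, here is a minimal sketch, in Python, of how a producer-oriented flag might gate content behind such a popup. Everything in it is hypothetical and invented for illustration—the ProducerProfile fields, the “violent-conspiracy” label, the 0.25 threshold, the interstitial_for function—since no platform publishes the rules it actually applies:

    # Illustrative sketch only: the labels, scores, and threshold are
    # hypothetical, not any platform's actual moderation system.
    from dataclasses import dataclass, field
    from typing import Optional, Set

    @dataclass
    class ProducerProfile:
        handle: str
        # Share of the account's recent posts flagged by reviewers or
        # classifiers as misinformation or violent conspiracy content.
        misinfo_rate: float = 0.0
        labels: Set[str] = field(default_factory=set)

    WARN_THRESHOLD = 0.25  # assumed cutoff for showing a warning

    def interstitial_for(profile: ProducerProfile) -> Optional[str]:
        """Return warning text to show before content loads, or None."""
        if profile.misinfo_rate >= WARN_THRESHOLD or "violent-conspiracy" in profile.labels:
            return ("This account has been associated with violent conspiracy "
                    "theories and misinformation. Do you wish to proceed?")
        return None

    # A flagged account triggers the popup; an ordinary one does not.
    print(interstitial_for(ProducerProfile("@example_conspiracist", misinfo_rate=0.4)))
    print(interstitial_for(ProducerProfile("@example_neighbor", misinfo_rate=0.01)))

The point of such a design is that the warning attaches to a producer’s track record rather than to any single post, so it can be applied at a speed and scale that account-by-account human review cannot match.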

We have good evidence that this is not a job that should be done by people: Human beings who do content moderation for Facebook report high levels of stress, anxiety, and depression. Just as important, making decisions on an account-by-account basis, and relying on users to report offensive content and fake accounts, is cumbersome and ineffective. Even after the de-platforming of accounts associated with the insurrection on January 6, 2021, QAnon and election conspiracies continue to proliferate on both Facebook and Twitter. Algorithms would allow social media firms to keep pace with the speed at which misinformation reproduces, adding warnings—perhaps pending review—rather than removing content outright.

Second, platforms like Reddit that segregate potentially offensive or dangerous users into self-created fora cannot rely on members of those same communities to moderate them. Since the 1990s, gaming platforms and digital communities like 4chan and 8chan have become nurseries for hackers, trolls, and numerous bad actors who often state explicitly, and in advance, that they plan to act violently. In just one example, during the weeks before the attack on Congress, Michael Reyes—a member of the violent white supremacist group the Proud Boys—issued multiple threats against local officials on Parler.

They then use the same digital platforms to post manifestos and videos of their repellent acts, content that is redeployed to recruit and radicalize other extremists. In 2019, three separate mass shootings by white supremacists were linked to manifestos on 8chan, later removed by moderators but reposted there and elsewhere. And only hours after she was killed by a Capitol Police officer on January 6, videos of insurrectionist Ashli Babbitt—who was herself radicalized on the internet—were circulating on alt-right digital media and on YouTube, with messages that urged supporters to fight on in her name.

Third, social media companies need to develop a uniform publishing code, one that bars the sharing or posting of content created by organizations that are in the business of spreading falsehoods and disinformation. In 2016, Donald Trump was elected president in part because of Breitbart publisher Steve Bannon’s advice to “flood the zone with shit”—a great deal of which was published on Bannon’s own site. This kind of regulation is not new: Pornography sites are already regulated to ensure that they comply with the ban on sexually exploiting children; social media companies ban users below the age of 13; and internet service providers permit subscribers to block adult content. Under such a code, internet publications and platforms would be licensed by the FCC, on a sliding scale geared to their profitability, with their licenses subject to review if they persistently published false information or incited violence.

Marjorie Taylor Greene, the QAnon-supporting congresswoman, is part of the problem, and she is wrong that privately owned social media platforms have the same obligation to free speech that government-run public spaces do. But she is, perhaps perversely, right about one thing: Government needs to get involved. As a country, we seem to be allowing billionaire tech executives to decide when, and why, speech should be censored. But we are also permitting them to profit from the dissemination of hateful, false, and violent content that undermines democracy and a peaceful social order.

Do these companies want to keep that power and preserve the internet’s possibilities for creativity, community, and democratic governance? If so, they need to cooperate in reforming their practices, even if it cuts into their profits. And the government must assert its constitutional responsibility to regulate by creating cogent, comprehensive, and clear standards for the social media industry, and by making Big Tech use its power—this time, for good.

Claire Bond Potter is Professor of Historical Studies at The New School for Social Research, and co-Executive Editor of Public Seminar. Her most recent book is Political Junkies: From Talk Radio to Twitter, How Alternative Media Hooked Us on Politics and Broke Our Democracy (Basic Books, 2020).
Originally published: February 1, 2021
