On April 1, 2021, Senate Bill 12, authored by Republican state Senator Bryan Hughes, passed the Texas Senate. Though its passage out of the Senate ironically coincided with April Fools’ Day, Senate Bill 12 is no laughing matter. The bill seeks to prevent social media companies like Facebook, Twitter, and Instagram from removing posts or users from their platforms on the basis of their expressed opinions or views. While the bill died on the Texas House floor, its implications could be far-reaching.
Senate Bill 12 is not alone. Similar legislation is advancing through state legislatures around the country, or has already become state law. With that in mind, what lies in store for social media companies and their users should this legislation hold up against legal challenges?
“Withholding information is the essence of tyranny. Control of the flow of information is the tool of the dictatorship.” – Bruce Coville
For years, conservative legislators across the country have sought to regulate social media companies. The current outcry has been driven by Facebook, Instagram, and Twitter’s decisions to block former president Donald Trump from their platforms following the attack on the U.S. Capitol on Jan. 6. Facebook’s recent decision to extend its ban on Trump for another two years has only raised calls for government regulation to a fever pitch.
The tech giants and social media moguls have come under increasing scrutiny in recent years. The CEOs of Facebook, Apple, Google, and Twitter have appeared before congressional hearings to answer questions about the roles of their companies, the algorithms they implement and how those algorithms affect the user experience, and their potentially monopolistic practices. Much of this scrutiny stems from the almost incomprehensible reach and influence of social media platforms. In a sense, state Senator Bryan Hughes was correct when he referred to social media as “the modern public square” in his bill’s Statement of Intent. Social media has evolved to provide public officials with a forum for direct engagement with their constituents. No elected official has demonstrated the power and reach of social media better than Donald Trump, who effectively used his Twitter account to rile up supporters, opponents, and the media alike.
In spite of challenges to regulation, including lawmakers’ distinct lack of knowledge about how social media and tech companies actually operate, almost two dozen Republican-controlled state legislatures have introduced legislation allowing individuals to sue social media companies for blocking, deplatforming, demonetizing, or removing posts based on personal opinions.
Many conservative thinkers and politicians are quick to point out apparent “censorship,” not only by social media platforms themselves, but also by the Internet Service Providers (ISPs) and hosting companies that keep websites online. One commonly cited example is Amazon’s suspension of Parler from its servers in the wake of January 6, which shut down the conservative social media platform. Many conservatives seem to believe this is the beginning of a “civil war” among tech companies and social media platforms. Indeed, most Americans believe that social media platforms censor political views, despite research that discounts this belief. In fact, studies suggest that more extreme political posts, pages, and accounts on either side of the political spectrum receive greater engagement from users. According to CEO Mark Zuckerberg, the algorithms of several platforms, including Facebook, actually promote more extreme content to a degree, increasing engagement and views.
The widespread belief that social media companies censor political views has led to the current flurry of legislation attempting to regulate the content moderation policies of social media platforms. This raises some important questions about First Amendment protections, Section 230 of the Communications Decency Act and the liability protections it affords companies, and the potential for government overreach into freedom of expression.
Big Tech: Too Big for Regulation?
While many legal scholars support the notion that social media platforms’ moderation decisions are fully protected by the First Amendment, debate persists and legislation continues to advance. Convincing arguments are being made both in defense of social media companies’ right to moderate their content however they like, and for treating the social media giants as public squares whose users the First Amendment should protect against censorship. In a move likely to stir the debate even further, Facebook recently announced that it would no longer make exceptions for political officials when moderating their posts. This reversed its position from two years prior, when it defended politicians’ and elected officials’ speech as “newsworthy” and thus exempt from the moderation ordinary users experience.
While seemingly benign on its face, Facebook’s rule change could have significant implications. Because so many individuals use sites like Facebook to interact with their elected officials, blocking or removing posts by those officials could make it harder for constituents and political opponents to hold politicians accountable for controversial statements that violate content guidelines. Donald Trump demonstrated this very effectively. His Twitter page generated tens of thousands of interactions, including likes, comments, retweets, and quotes, every time he posted. It was also the platform through which many officials, cabinet appointees, and national security advisers discovered they would need to seek new employment. Many of his posts almost certainly violated the community guidelines, but had they been blocked or taken down shortly after posting, many people would never have seen his true thoughts on many subjects. A few examples come to mind: his comments blatantly labeling migrants “rapists and murderers,” his lies about election fraud and illegal ballots, and his defamatory attacks on other political figures, including some who served on his own staff.
The courts and Section 230 of the Communications Decency Act hold that social media companies may moderate content on their platforms and cannot be held liable for content their users post. Many of these companies have taken it upon themselves to moderate hateful and violent posts, as well as ones that spread misinformation or falsehoods, albeit with varying degrees of effectiveness. Section 230 has drawn ire from many GOP legislators around the country, including those in Texas, who are attempting to stop what they view as censorship of conservatives on social media. Many proposals, including SB 12, would allow residents of a state to sue social media companies and hold them liable for moderation decisions, bringing an ideological and logistical conflict over Section 230 to the forefront. On one hand, Section 230 and the First Amendment guarantee that social media companies, as private entities, have full discretion over their content moderation policies. On the other hand, these platforms serve as a public square where elected officials, public figures, and ordinary people can all interact and communicate with one another.
However, the fact that public officials choose to use a private service to communicate with constituents, or even as a platform to amplify their views or brand, does not automatically entitle them to keep that bullhorn forever. As Donald Trump proved, even the President of the United States, arguably the most powerful person on the planet, can have his bullhorn taken away for posting content that violates a private platform’s standards. And that is precisely what the First Amendment guarantees: private entities may block or deplatform the users of their private platforms.
In Memoriam: Public Discourse
Countless studies have demonstrated how social media algorithms reinforce extreme opinions and promote content that the user will like. The existence of echo chambers in social media feeds and timelines is nothing new, and companies are taking few steps to address it, because for them it is a matter of user engagement and advertising revenue, not of truth or healthy public discourse. Of course, one could easily argue that 280 characters is a poor forum for discourse to begin with. If anything, the desire for instant gratification and quick-access content on social media has led to a deterioration of discourse and conversation, and a jump straight to mud-slinging. It remains to be seen whether social media platforms will continue to take on more active roles in regulating what content government officials and politicians post.
The question of regulating social media will likely remain contentious for years to come, and current attempts to regulate content moderation are unlikely to survive legal challenges on First Amendment grounds. The future is unclear, and it certainly isn’t pretty. While elected officials at the state and national levels continue bickering over Twitter about policy priorities and monologuing in Congress about the dangers of Big Tech, one thing is clear: it is up to us to decide how social media will be used. Will it continue to be a place of division, contention, and trite political whacks at those who disagree, or will it be transformed into a platform where citizens can feel heard and understood by their elected officials and fellow citizens? The issue rests in our collective hands, and no amount of bickering on Capitol Hill will resolve it.