Author: Matt Liu, University of Wyoming Director of Debate
I’ve had the opportunity to do a little research on the new PF topic. These are my first thoughts about the January 2024 PF resolution:
Resolved: The United States federal government should repeal Section 230 of the Communications Decency Act.
In short, Section 230 provides immunity for online computer services with respect to third-party content generated by their users. Translation: Facebook et al. can’t be sued over almost any content shared by their users. Section 230 was part of the Communications Decency Act, the rest of which was struck down by the courts as unconstitutional (ironically, the rest of the law was very anti-speech and anti-internet; Section 230 was a last-ditch amendment by some tech-friendly Representatives to try to preserve the burgeoning internet).
What kinds of things might folks sue over if Section 230 were repealed? Well, Democrats argue that Section 230 allows hate speech and misinformation to go unchallenged. Conservatives argue that Section 230 gives platforms immunity for ideological bias (the GOP loves to argue that Big Tech’s platforms have a liberal bias).
It’s important to note that there are already some exceptions to Section 230. Its immunity does not extend to federal criminal law, intellectual property claims such as copyright infringement, or electronic communications privacy law. As a result of the FOSTA-SESTA law, material violating federal and state sex trafficking laws is also excepted from Section 230’s immunity.
Section 230 has its origin in a pair of lawsuits that motivated Congress to act. In the first, Cubby, Inc. v. CompuServe Inc., the courts held that CompuServe could not be held liable for the actions of its users because it exercised no editorial control (the users were the wrongdoers). In the second, Stratton Oakmont v. Prodigy Services, the courts came to the opposite conclusion, holding that because Prodigy moderated forum posts, it was responsible for user-generated content. Taken together, the cases suggested that any content moderation made a company liable for all user-generated content. In response, Congress enacted Section 230 in 1996. Now-Senator (then Representative) Ron Wyden argues that Section 230, his amendment, is a sword and shield for companies, giving them both protection from lawsuits and the freedom to moderate their websites as they see fit. This origin story is more important than you might think, because it may inform what the world looks like after a repeal of Section 230.
What Does a World Without 230 Look Like? Repeal vs. Reform vs. Replace
If a world without Section 230 looks like the world before Section 230, then companies that engaged in zero content moderation would not be liable for user-generated content (Cubby, Inc), but companies that engaged in any content moderation would be liable for anything their users posted (Prodigy Services). So, would we get a Wild Wild West of no content moderation or would all of your social media posts have to be approved by a cyber nanny? Or something in between? This question is incredibly important for determining whether the aff solves and what links to negative positions look like.
Many argue that companies would have an economic incentive to do content moderation, because if every website and app looked like 4chan, no one would want to use them. That seems true, but if companies moderated content without Section 230, subjecting themselves to liability for user-generated content, they might engage in overzealous enforcement. If you’re liable for user-generated content, you’re going to be very careful about what that content looks like. The danger is that in the drive to avoid liability, companies would err toward being too cautious and restrict too much speech. It’s also possible that companies would give up on content moderation entirely to avoid liability, especially smaller platforms and start-ups that couldn’t afford to moderate at the scale of larger companies. There’s also an interesting debate about a shift (even) further toward AI and algorithmic moderation, since content moderation at scale would be impossible for humans to do.
That said, this analysis seems divorced from real world discussions of Section 230 and the literature on the subject. Why? Because basically no one argues for just a repeal of Section 230. They either argue for reforming Section 230, or replacing it. There are dozens if not hundreds of reform/replace proposals, but one of the most common is to argue that companies should have a duty of care. This is a carrot-and-stick approach that would tie immunity to a reasonable standard of content moderation. If companies are making a good faith effort to moderate content, then they will still get immunity.
I think a core topic dynamic is going to be the fight between the aff and the neg over whether reform/replace is aff ground or neg ground. The aff can argue that repeal of Section 230 would result in some form of replacement with a distinct but manageable legal regime, as repeal alone is politically untenable. The neg will argue that anything short of repeal is neg ground, and that reform is less than repeal. I suggest being highly prepared for these debates.
I think the strongest aff arguments are about how social media corrupts democracy. There are a variety of link and internal link arguments about this, including that Section 230 promotes misinformation and contributes to echo chambers and filter bubbles. Misinformation could also be impacted in other ways, including public health misinformation that impacts disease and pandemic response (the neg should push that things like vaccine misinfo are “lawful but awful,” so legal immunity is irrelevant because that content isn’t illegal, but this argument is not insurmountable by the aff).
There’s also potentially a module about the importance of local news. Critics of Section 230 argue that its blanket immunity has made social media too powerful, displacing traditional media and especially local news. There are some very strong arguments about the benefits of strong local news and investigative journalism, and it’s a strong internal link to democracy arguments.
There are a variety of other areas to mine for aff ground, but I find them all to be substantially weaker. Contemporary Section 230 critics love to talk about crime, including drug crimes, and terrorism, but it’s very unlikely that Section 230 repeal would have a meaningful impact in these areas. Terrorism has received disproportionate media and scholarly attention due to several court cases about Section 230 and terror propaganda, though, so I expect it to be a common contention. As mentioned above, the left case against Section 230 involves opposing hate speech and discriminatory discourse. Other areas include right-wing extremism / alt-right echo chambers and antitrust/monopoly (critics contend that the legal protections of Section 230 contribute to the dominance of major tech companies). As mentioned in the introduction, there’s also a right-wing case about the political bias of Big Tech. I don’t suggest following that route, though.
A major aff burden is going to be figuring out what kind of solvency argument to advance. What does content moderation look like after repeal? To win almost any advantage, the aff will need to argue that companies still engage in content moderation. However, to avoid big neg offense, the aff will want to say that content moderation won’t become overzealous. This gets into the repeal vs. replace vs. reform discussion from above. I think the aff is well-positioned to claim not just solvency but advantages from arguing about what comes after Section 230. I know that I will be pushing teams to explore defending a duty of care emerging as a result of repeal.
One area I didn’t explore is the international and extraterritorial implications of Section 230 repeal. The college antitrust topic had great literature about the importance of international, and particularly EU, policy harmonization. That’s true of many tech topics, and I wonder what international ground there might be on this topic. I also would love to explore how Section 230 applies extraterritorially: what implications does it have for international companies? I didn’t look into this, but I’m intrigued by it.
NEG GROUND (TECH)
Section 230 is often hailed as the “26 words that created the internet.” As such, it has advocates ranging from Big Tech to leftist activists. But it’s certainly Big Tech that has the largest vested interest in defending Section 230, and Big Tech is very good at promoting its interests. Because of that, a tech / innovation disad is likely going to be a common negative contention. On its face, this is not the strongest argument. Facebook has to moderate posts, so they shut down? While it’s true that the scale of big social media is enormous, we all know moderation would be algorithmic, managed by artificial intelligence and machine learning, easing the burden of moderation.
But there are some more compelling arguments that Section 230 is essential to smaller online platforms and tech start-ups. Wyden argues Section 230 repeal “will kill the little guy, the startup, the inventor, the person who is essential for a competitive marketplace. It will kill them in the crib.” Centering a negative argument on start-ups that are more willing to experiment with new features and services but would be unduly burdened by a Section 230 repeal seems like a smart argument. You’d be making the argument that smaller platforms can’t survive or thrive in a world of increased litigation. I think this is a decent argument because you don’t have to win that start-ups can’t moderate content, just that they can’t afford endless lawsuits. I’d be using search terms like innovation, start-ups, economic growth, competitiveness, technological leadership, litigation, and experimentation. You want to find the claim that being able to try new things without fear of legal repercussions is key to innovation and allowing these companies to scale and expand globally.
What about the impact level? There’s copious literature on the benefits of start-ups and American tech innovation. The argument Facebook likes to make when Zuck is called to testify before Congress is that keeping American tech companies strong is the only way to stay ahead of China. There’s a ton of lit on this claim, and it’s a popular debate argument.
There are great articles supporting all these arguments (there are always great articles when Big Tech’s interests are at stake). But I am a little worried there’s a missing internal link. I think the aff should push back that innovation in user-generated content is not the kind of innovation that matters for the neg’s impact claims. Why are Facebook posts and tweets key to beating China? Color me skeptical. That’s why I think a smart neg contention, in addition to reading the very good tech innovation and China args, should branch out into some more quirky impact areas.
First, there’s some literature about Section 230 and the digital divide. The argument I have seen is that internet platforms will pass the cost of content moderation onto their users, which means a lot of free services could become pay-to-play. While the costs might not be very much, they could drive off users with limited disposable income, widening the digital divide.
Second, there are plenty of “open internet good” advocates who write prolific and powerful articles about the benefits of unrestricted speech on the internet (like this, this, and this). Open internet advocates make bold claims about the internet as a force-multiplier to address all kinds of challenges. This is a little goofier than the classic “tech good – beat China” argument, but, it has a better connection at the internal link level, since these arguments are often built on a “crowdsourcing ideas good” warrant that actually has a 1:1 link to the user-generated content at stake in Section 230. This ends up looking like a pretty good neg argument to me.
NEG GROUND (OTHER)
I think court clog will be a potential negative argument on this topic. Removing tech platforms’ immunity for user-generated content will generate a flood of litigation. Court clog is how debaters typically argue that increased litigation is problematic: overwhelming the courts with endless litigation stops them from hearing other important issues. This is essentially a courts tradeoff disad (I know, I know, I use policy lingo like disad and aff/neg and this is PF, so it should be contention and pro/con, but it’s all the same. Forgive me for a cheap court clog pun, but “so sue me” :).
Another argument that interests me is an agency tradeoff disad (contention). Would the FTC be responsible for handling internet platform enforcement after Section 230 repeal? Well, there is a quiet antitrust revolution happening under the Biden administration, led by Lina Khan at the Federal Trade Commission (FTC). Section 230 repeal would create a massive government enforcement burden. If internet platforms lost their blanket immunity, it could shift government resources toward monitoring content moderation. I know this argument works and is solid because it was a staple of the college antitrust topic. The literature that FTC resources are limited and that new priorities cause tradeoffs is copious and high quality. Arguing that these agencies are focused on antitrust now creates excellent ‘antitrust good’ impact ground. It’s alternatively possible that the Federal Communications Commission would be the agency responsible for enforcement after a Section 230 repeal. In that case, an FCC resource tradeoff disad would be the move.
There are also some very good negative arguments about differential enforcement, that Section 230 repeal will result in disproportionate content policing of marginalized people.
There are also surprisingly good reproductive freedom neg cards. The argument here is that as red states criminalize abortion and even information about how to get an abortion, social media platforms will err towards censoring all abortion content to avoid litigation in red states. So the politics of Texas might control access to information about reproductive freedom for the entire country. I think this is an especially strong argument because it gets into how and why enforcement would actually play out, and with warrants. You can find some examples of articles on that here and here.
I think the most common negative contention in Wyoming will be free speech. I’m not a fan. I don’t like how the impact debate plays out. Also, the First Amendment concept of free speech is that the government cannot restrict your speech. Tech companies deciding what users can post on their platforms is not the government infringing on speech. I’m okay with the negative arguing that it’s bad that there will be less speech with a Section 230 repeal. But I need that to be impacted with a consequentialist explanation of why that speech is important, not an abstract defense of unlimited speech.
We’ll definitely see some neg teams say we should reform, not repeal. As I said above, there will be heated debate about whether reform is aff ground or neg ground. I am a little amused that I know these debates will happen, because PF purports to reject counterplans, but reform is clearly a counterplan. It just won’t be labelled as such. Every “there’s a better way to fix this than repeal” contention is just a counterplan in disguise, the same way “targeted student loan forgiveness” won a wreck of debates on the last topic (and was also clearly a counterplan).
There are also some tricky case arguments available for the negative. First, many aff teams will just assume that Section 230 repeal results in content moderation. But as discussed right after the intro, that’s not necessarily the case. Rather than resulting in aggressive content moderation, companies might back off content moderation entirely. The neg can develop a solvency argument that companies will, like CompuServe before Section 230, choose not to moderate user-generated content at all. Of course, the neg should think carefully about how this interacts with their offense. Do you still have a link to your offense if companies choose not to moderate content? I also think it’s not entirely black and white: whether, how, and how much companies moderate content doesn’t seem to me to be a question with one answer. Different platforms in different circumstances will adopt different approaches. The best teams will be able to win a story of how and why enforcement happens in the context of their offense (“companies have an incentive to enforce this way […], which means we have a strong link, and also means my opponent’s case is wrong”).
The “sword and shield” argument also enables some strong case arguments. Remember that Section 230 doesn’t just give internet platforms immunity for user-generated content; it gives them immunity for the ways they choose to voluntarily engage in content moderation. Repealing Section 230 removes the “sword” that allows companies to engage in voluntary content moderation. This is another avenue for the neg to argue that repeal decreases content moderation. There are also some very tricky arguments about what kinds of content moderation happen now. Smart neg teams might be able to win that voluntary content moderation addresses important issues and that Section 230 repeal would shift resources away from it.
There are also exceptionally good area-specific case turns for the negative. For example, the negative lit on misinformation is so good the negative could consider building their own misinformation contention rather than just link-turning affs that make the misinfo argument (like here and here). This is especially true for Section 230 misinformation arguments about public health (which became very common after COVID). The neg should definitely link turn public health misinformation arguments. Section 230 advocates make the very good argument that public health misinformation is “lawful but awful,” which means that even without 230, there’s no legal liability associated with medical misinformation. Vaccination conspiracy theories are toxic, but not illegal, which means companies would have no incentive to police that content if Section 230 were repealed. Negative authors go a step further and say that Section 230 repeal would actually decrease status quo policing of medical misinfo, because it would both cause a resource tradeoff and remove legal immunity for content moderation.
There’s also some very good democracy link turns. Some authors go so far as to say that Section 230 is a bedrock of international free speech and vital to political dissidents across the globe.
Wyoming Debate Roundup is dedicated to providing quality debate content to Wyoming and Rocky Mountain area high school debaters. We’re a resource for Wyoming debaters by Wyoming debate coaches.