
Political neutrality in content moderation compels private speech

Much of online life today takes place on social media platforms, which have become venues for the communication of all types of ideas. Platforms establish community guidelines and moderate content for a variety of reasons. Early case law created a problem: platforms that moderated “bad” content risked being treated as publishers and held liable for everything their users posted. Congress responded by passing Section 230, which protects platforms from liability for content provided by users while preserving that protection when platforms moderate in good faith. This protection has allowed platforms to write their own terms of service and define what types of content users may post. If content violates the terms of use or is otherwise objectionable, platforms can remove it without fear of becoming liable as publishers of the content on their sites, rather than leaving all content untouched for fear of incurring liability.

Recently this section has come under fire, specifically because Section 230 protects moderation that is not politically neutral on some of the biggest internet platforms. Several bills have been introduced to address this by mandating neutrality in content moderation. The problem with this approach is that it would compel social media platforms to host content they do not want to host. Forcing a private company to do so violates its First Amendment rights.

The First Amendment protects freedom of speech in the United States, but Section 230 provides enhanced protections. Congress conferred a benefit on internet platforms in the form of liability protections. These protections allow platforms to operate without fear of being overwhelmed by lawsuits over user-posted content. They also give platforms the freedom to act, or not, when they are informed of questionable content, rather than imposing a scienter-based duty to act. Without these protections, companies would have to respond to every complaint and might simply remove questionable content to avoid lawsuits. The consistency of Section 230 jurisprudence has produced a clear rule about when platforms are liable for their handling of user content, which also benefits platforms as they craft policies and decide how to respond to complaints.

The proposals addressing Section 230 condition this benefit on political neutrality in content moderation. Platforms currently enjoy the freedom to decide what content is associated with their services: they can set policies and make their own decisions about what content violates those policies. This helps them protect their brands, because even if they are not legally liable for user content, the public may view them negatively for some of the content they host. Requiring platforms to show that their moderation is politically neutral shifts the burden onto them to preserve a benefit the government has already conferred. This is a point Justice Douglas highlighted in his concurrence in Speiser v. Randall, in which the Supreme Court held that conditioning a tax exemption a citizen otherwise qualified for on a loyalty pledge was unconstitutional. Justice Douglas in particular took issue with the state presuming all citizens disloyal and requiring speech to overcome that presumption. The Section 230 proposals similarly strip an already granted protection and then place the burden on platforms to demonstrate neutrality in order to regain it. In a more recent case, the Supreme Court ruled that requiring a private entity to expressly oppose prostitution in order to receive funding violated the First Amendment. By conditioning benefits on speech, the government can effectively compel private entities to speak as the government wants. Section 230 liability protection is what allowed internet platforms to become what they are today. A neutrality condition is therefore not so much a choice between keeping the protection and losing it as a choice between being neutral and not existing.

A larger question also arises when platforms attempt to be politically neutral: what does it mean to be politically neutral? Different observers will have different views on what content is objectionable and what content is merely political. This change removes the stability that Section 230 protections give internet platforms and replaces a known rule with an amorphous standard.

The rationale behind this neutrality requirement is that it protects the free speech rights of platform users. But if changes to Section 230 are supposed to protect users’ rights, those rights must be established first. The Supreme Court said in Packingham v. North Carolina that “A fundamental principle of the First Amendment is that all persons have access to places where they can speak and listen, and then, after reflection, speak and listen once more.” The Court went on to say that the modern internet is an important place for people to exercise their free speech. Although this may be true, the Supreme Court has not said that the First Amendment therefore applies to internet platforms. In Manhattan Community Access Corp. v. Halleck, the Court held that a private company administering public access channels is not subject to First Amendment requirements. The result in that case hinged on whether the private company was a state actor in its role of administering public access channels; the Court ruled it was not, so the company could exclude users in ways that might otherwise violate the First Amendment. If providing public access channels is not state action, then hosting user content on a social media website is even less so. Users of internet platforms are not entitled to First Amendment protections against private companies just because the internet is where a great deal of speech happens.

Proposed changes to Section 230 attempt to enforce political neutrality in content moderation. These changes would strip established protections from internet platforms unless they engage in speech they may disagree with. Users have no First Amendment rights against the private entities that run internet platforms, so the government should not limit how private companies choose to moderate. Instead, these companies should be allowed to exercise their own First Amendment rights in deciding what content to host.

* Alex Miller is an Associate Editor on the Michigan Technology Law Review.
