Courts Continue to Affirm Embattled Section 230
SAN FRANCISCO — A California appellate court recently ruled that Twitter — as a digital platform protected by Section 230’s third-party liability shield — has the right to ban users who violate its terms of service. While the ruling came from a state court, it affirms the legal sentiment that Section 230 encourages platforms to self-regulate.
Meghan Murphy, a Canadian feminist writer and cultural commentator, sued Twitter after the social network de-platformed her for posting transphobic tweets. As The Mercury News reported, Murphy lost access to her account for tweeting remarks that “men aren’t women” and other posts that Twitter and several users deemed harmful. Twitter argued that it is protected by Section 230 and acted to self-regulate its platform.
A 42-page opinion issued by California’s First District Court of Appeal found Murphy’s evidence either immaterial or irrelevant to the ruling. The court added that Section 230 offers legal protection for digital platforms and is directly responsible for the speed at which the internet economy matured and modernized.
In this case, the court sided with Twitter because higher courts have consistently upheld a social media platform’s legal ability to ban users for speech that violates the platform’s terms of service. Counsel for Murphy also argued that Twitter’s mission statement, to “give everyone the power to create and share ideas and information instantly without barriers,” was a legal promise. The court disagreed: the statement did not amount to an enforceable contract because Twitter reserves the right to enforce its hateful conduct policy as it sees fit.
Associate Justice Sandra L. Margulies wrote in the panel’s unanimous ruling:
“Because Murphy has not alleged Twitter ever made a specific representation directly to her or others that they would not remove content from their platform or deny access to their accounts, but rather expressly reserved the right to remove content, including content they determine is harassing or intolerable, and suspend or terminate accounts ‘for any or no reason’ in its terms of service, Murphy cannot plead reasonable reliance on the alleged promises as a matter of law.”
This ruling also echoes a variety of others affirming Section 230’s liability shield for platforms. A lawsuit filed in the San Jose Division of the U.S. District Court for the Northern District of California accused Google of permitting children to gamble through “loot boxes” available in several games sold and marketed on the Google Play Store.
The plaintiffs, a class comprising concerned parents and other stakeholders, claimed that Google permitted children to gamble via in-game purchases and loot boxes in the Play Store. The class action asked the court to determine whether these microtransaction mechanics constitute illegal gambling.
Because players spend real money on digital currency and often receive their in-game benefits at random, the argument goes, these mechanics mimic unlawful gambling targeting children. Although loot boxes are used regularly across the video game industry, the class action alleged they are a covert attempt to drain families of more dollars.
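To make the mechanic concrete, here is a minimal, purely illustrative Python sketch of a typical loot-box purchase flow. All prices, drop rates, and item names are hypothetical assumptions for illustration, not taken from any actual game or from the case filings.

```python
import random

# Hypothetical drop table: (rarity, probability, example items).
# These odds and items are illustrative only, not from any real game.
DROP_TABLE = [
    ("common", 0.70, ["wooden shield", "basic skin"]),
    ("rare", 0.25, ["flaming sword", "epic skin"]),
    ("legendary", 0.05, ["dragon mount", "exclusive emote"]),
]

GEMS_PER_DOLLAR = 100   # real money converts into in-game currency
LOOT_BOX_COST = 250     # gems required to open one box


def buy_gems(dollars: float) -> int:
    """Convert a real-money purchase into in-game currency."""
    return int(dollars * GEMS_PER_DOLLAR)


def open_loot_box(gems: int) -> tuple[int, str, str]:
    """Spend gems on a box and receive a randomized reward.

    The player pays a fixed, known price but receives an item of
    unknown value; this randomness is what the plaintiffs compared
    to a slot machine.
    """
    if gems < LOOT_BOX_COST:
        raise ValueError("not enough gems")
    gems -= LOOT_BOX_COST
    roll = random.random()
    cumulative = 0.0
    for rarity, prob, items in DROP_TABLE:
        cumulative += prob
        if roll < cumulative:
            return gems, rarity, random.choice(items)
    # Fallback for floating-point edge cases at the top of the table.
    rarity, _, items = DROP_TABLE[-1]
    return gems, rarity, random.choice(items)


if __name__ == "__main__":
    gems = buy_gems(4.99)  # a hypothetical $4.99 purchase
    while gems >= LOOT_BOX_COST:
        gems, rarity, item = open_loot_box(gems)
        print(f"Opened a box: {rarity} {item} ({gems} gems left)")
```

The sketch highlights the feature the suit targeted: the box’s price is fixed and paid with currency bought using real money, while the value of the payout is left entirely to chance.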
The court sided with Google, however, dismissing the federal class-action lawsuit under the safe harbor provided by Section 230. Counsel for Google argued that the company is protected from liability because the microtransaction mechanics are controlled by the publishers and developers of the apps that use the Google Play Store as a distribution channel. The plaintiffs countered that loot boxes are prohibited under California’s gambling laws, but the court found the claim lacked merit and dismissed it, noting no substantial case law supporting the class’s gambling theory.
The order to dismiss, issued by Judge Beth Labson Freeman of the Northern District of California, found that:
Google “operates the Google Play store from which software applications (‘apps’), including video games containing Loot Boxes, may be downloaded. Google does not create the video game apps or Loot Boxes. Plaintiffs nonetheless allege that Google violates state consumer protection laws by offering video games containing Loot Boxes in its Google Play store and profiting from in-app purchase of Loot Boxes.”
Thus, any action challenging the so-called gambling mechanics in these apps should be directed to the publishers or developers.
In granting the dismissal, the court found that:
“Google cannot be held liable for merely allowing video game developers to provide apps to users through the Google Play store, as ‘providing third parties with neutral tools to create web content is considered to be squarely within the protections of § 230.’”
In plain English, this means Google, serving as the platform on which third-party activity occurs, is protected under Section 230. A third ruling further reaffirms this sentiment.
A state judge in Delaware dismissed a defamation lawsuit filed by former Trump campaign operative Carter Page. Page sued Verizon Media and several of its current and former properties over media reports describing his connection to an FBI investigation into Russian interference in the 2016 election. The judge ruled that Page failed to demonstrate that the articles published by outlets like Huffington Post contained defamatory or false information.
As a result, the lawsuit was thrown out. Seven of the articles in question were contributed by outside parties, shielding Huffington Post and other media platforms from liability under Section 230. An Associated Press report noted that the legal holding, that Section 230’s liability shield preempts Delaware law and Page’s lawsuit, isn’t controversial by any means.
With regard to this ruling, the state court similarly deferred to existing federal case law and U.S. code. Taken together, these three cases suggest that Section 230 provides a viable safe harbor that applies to more than just NSFW content and social media.
As I’ve previously noted, the bipartisan opposition to Section 230 in its current form stems from what I view as a misunderstanding of the law’s intention and practice. Take the SAFE TECH Act, for instance. This proposal, introduced by Senate Democrats, assumes that the power of Section 230 undermines antitrust enforcement actions and aspects of criminal law. But that line of reasoning is unnecessarily biased and plants in the minds of average Americans the notion that all firms relying on the internet economy are sketchy, shadowy enterprises with no face or humanity.
The cases discussed here show that, in the view of the courts, Section 230 is applied to hold third-party speakers accountable for their actions, not the platforms where those speakers’ content is published and distributed.
If a user on a platform were caught distributing illicit content, such as images depicting unlawful sexual exploitation, that individual would be held liable for all associated civil and criminal penalties. The issue seems to be the assumption that all content creators on any given platform are bad actors. That’s not the case, and it’s marginalizing when anti-porn activists assume as much.
Gavel photo by Sora Shimazaki from Pexels