
The Supreme Court battle for Section 230 has begun

The future of recommendation algorithms may be endangered

2022-12-01

The opening shots in a Supreme Court battle over digital platforms, terrorism, and Section 230 of the Communications Decency Act have been fired. Petitioners filed briefs on Tuesday and Wednesday in Gonzalez v. Google and Twitter v. Taamneh, two cases accusing platforms of aiding Islamic State attacks.

The court's eventual ruling will determine whether web services can be held liable for hosting unlawful content, particularly when they promote it through algorithmic recommendations.

Both cases were taken up by the Supreme Court in October, one at the request of a family suing Google and the other as a preemptive defense filed by Twitter. They are the latest in a long line of lawsuits asserting that websites have a legal obligation to remove terrorist content.

The great majority of these lawsuits have been dismissed, typically under Section 230, which shields companies from liability for hosting unlawful content. The two petitions, however, respond to a more divided 2021 decision by the Ninth Circuit Court of Appeals, which dismissed two terrorism-related complaints while allowing a third to proceed.

Gonzalez v. Google alleges that Google knowingly displayed Islamic State propaganda that allegedly contributed to a 2015 Paris attack, thereby providing material support to an unlawful terrorist organization. While the lawsuit is nominally about terrorist content, the essential question is whether amplifying an unlawful post makes a company liable for it. The plaintiffs, the estate of a woman killed in the attack, claim not only that YouTube failed to remove Islamic State videos but that it automatically recommended them to other users, spreading them across the network.

Google claims that Section 230 protects it, but the plaintiffs argue that the law's boundaries are unclear. "[Section 230] does not contain explicit wording about suggestions, nor does it create a different legal norm controlling recommendations," they said in a filing Wednesday.

They're asking the Supreme Court to rule that some recommendation algorithms and other metadata, such as hyperlinks generated for an uploaded video and notifications alerting viewers to it, constitute a form of direct publishing, and that services can therefore be held liable for promoting that content.

This raises a number of difficult questions, notably about the limits of an automated recommendation. An extreme version of such liability, for example, would hold websites accountable for delivering objectionable search results (which, like practically all computer operations, are powered by algorithms). The complaint tries to allay this concern by asserting that search results are meaningfully different because they deliver information a user has specifically requested.

It is, however, still an attempt to regulate a nearly ubiquitous feature of modern social media, and not only on massive platforms like YouTube, nor only for terrorism-related content.

Meanwhile, Google has pushed back against charges that it is not doing enough to combat terrorism. "Over time, YouTube has invested in technology, staff, and policies to detect and delete extremist content. We collaborate with law enforcement, other platforms, and civil society on a regular basis to exchange intelligence and best practices. Undermining Section 230 would make combating harmful content more difficult, not simpler, making the internet less safe and less useful for all of us," spokesperson José Castañeda said in a statement.

Meanwhile, Twitter v. Taamneh will be a test of Twitter's legal footing under new owner Elon Musk. The suit concerns a separate Islamic State attack in Turkey, but it, too, turns on whether Twitter provided material support to terrorists. Twitter filed its petition before Musk purchased the platform, hoping to shore up its legal defenses in case the court took up Gonzalez and ruled against Google.

Twitter contends in its petition that, regardless of how Gonzalez resolves the Section 230 question, failing to ban terrorists from using a general-purpose service is not a violation of the anti-terrorism law. According to Twitter, "it is far from apparent what a supplier of regular services can do to prevent terrorist responsibility" under that theory, since a lawsuit could always contend the site should have done more to catch criminals.

There is no definitive schedule for the cases yet, but more filings will arrive in the coming months; Google, for its part, has until January 12th to file a response brief. And the Supreme Court is likely to confront other Section 230-related questions in the coming years, including a ruling on Texas and Florida statutes restricting social media moderation.
