Gonzalez v. Google: Section 230, Terrorist Content, and the Ninth Circuit’s Reluctant Ruling
The Gonzalez family’s argument was rejected last year by the US Court of Appeals for the Ninth Circuit. Yet the court was not enthusiastic in ruling against the family, with Judge Morgan Christen writing for the majority that despite its ruling, “we agree the Internet has grown into a sophisticated and powerful global engine the drafters of § 230 could not have foreseen.” Nor was the court unanimous: Judge Ronald Gould asserted that Section 230 does not immunize Google because its amplification of ISIS videos contributed to the group’s message (Section 230 does not apply if a platform even partly takes part in developing content). “In short, I do not believe that Section 230 wholly immunizes a social media company’s role as a channel of communication for terrorists in their recruiting campaigns and as an intensifier of the violent and hatred-filled messages they convey,” Gould wrote. This year, the Supreme Court agreed to review the case.
At issue in the case is whether the family can sue Google over YouTube’s promotion of terrorist videos on its platform.
“Videos that users viewed on YouTube were the central manner in which ISIS enlisted support and recruits from areas outside the portions of Syria and Iraq which it controlled,” lawyers for the family argued in their petition seeking Supreme Court review.
Section 230, a law that Senator Ron Wyden helped write, protects platforms against bad-faith lawsuits of this kind.
Section 230 was part of the 1996 overhaul of US telecommunications law. The Senate’s version of the bill imposed penalties for the transmission of indecent content online; the House of Representatives added Section 230 largely as a less censorious alternative to that approach. Both provisions ended up in the bill President Clinton signed into law, and the Supreme Court struck down the Senate’s indecency penalties the following year.
The First Amendment is one of America’s most distinctive pieces of law, and free speech has become one of the most contested parts of American life as the presidential elections approach. Both sides of the political aisle are attacking long-accepted principles of speech law, frequently in ways that are both logically incoherent and deeply concerning.
Tech freedom advocates have fought for years against laws that would stifle online communication, a project based on the assumption that this communication is a social good. The limits of that assumption have never been clearer, and the backlash threatens to make things even worse.
For years, much of the criticism of Section 230 has come from conservatives who say that the law lets social media platforms suppress right-leaning views for political reasons.
What Section 230 Actually Says, and What Losing It Would Mean
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
The law was passed in 1996, and courts have interpreted it broadly since then. It means that web services, as well as newspapers, gossip bloggers, listserv operators, and other parties, cannot be sued for hosting or reposting someone else’s illegal speech. The law was passed after a pair of seemingly contradictory defamation cases, but it’s been found to cover everything from harassment to gun sales. It also means courts can dismiss most lawsuits over web platform moderation, particularly since a second clause protects the removal of “objectionable” content.
I think it is weird to be in the position of arguing against limits on corporate power. Facebook, TikTok, Twitter, and other companies all play a huge role in public discourse and exercise a huge amount of influence over how Americans can connect with each other. It’s getting harder and harder to talk to other people in a way that’s not monitored and approved by an increasingly small number of companies.
But making false claims about pandemic science isn’t necessarily illegal, so repealing Section 230 wouldn’t suddenly make companies remove misinformation. And there’s a good reason the First Amendment protects shaky scientific claims: our understanding of covid shifted frequently, and imagine if news outlets could have been sued for publishing early assumptions, like covid not being airborne, that were later proved incorrect.
Critics sometimes cast Section 230 protections as a ruse to get around the First Amendment. But without 230, the cost of operating a social media site in the US would go up. Unable to invoke a straightforward 230 defense, sites could face protracted lawsuits over even unambiguously legal content. And even if they won in court, web platforms would still be incentivized to remove legally risky posts, like negative restaurant reviews or allegations of sexual harassment. All of this would burn time and money in perhaps existential ways. It’s no wonder platform operators want to keep 230 alive, even as they respond to politicians who complain about it.
Alex Jones, Depp v. Heard, and What Defamation Law Can’t Fix
It’s also not clear whether any of it matters. Sandy Hook families were left struggling to chase down much of Jones’ money after he declared corporate bankruptcy during the proceedings. He treated the court proceedings contemptuously and used them to hawk dubious health supplements to his followers. Despite the damage caused by legal fees and damages, the legal system has yet to meaningfully change his behavior. If anything, it provided yet another platform for him to declare himself a martyr.
Contrast this with the year’s other big defamation case: Johnny Depp’s lawsuit against Amber Heard, who had identified publicly as a victim of abuse (implicitly at the hands of Depp). Her case was more cut-and-dried than Jones’, but she lacked some of his skills. Thanks to the incentives of social media, and to courts’ failure to reckon with the way things like livestreams fed the media circus, the case turned into a ritual public humiliation of Heard. Defamation claims can meaningfully hurt people who have a reputation to maintain, while the worst offenders are already beyond shame.
“I would be prepared to make a bet that if we took a vote on a plain Section 230 repeal, it would clear this committee with virtually every vote,” said Rhode Island Democratic Sen. Sheldon Whitehouse at a hearing last week of the Senate Judiciary Committee. “The problem, where we bog down, is that we want 230-plus. We want to repeal 230 and then have ‘XYZ.’ And we don’t agree on what the ‘XYZ’ are.”
Republican-proposed speech reforms are ludicrously, bizarrely bad. We learned just how bad after Republican legislatures in Texas and Florida tried to ban social media moderation, angry that platforms had used it to kick conservative politicians off their sites.
As it stands, the First Amendment should almost certainly render these bans unconstitutional. They are government speech regulations! An appeals court blocked the Florida law, but the Fifth Circuit Court of Appeals upheld the Texas law without initially explaining its reasoning. Months later, that court finally published its opinion, which legal commentator Ken White called “the most angrily incoherent First Amendment decision I think I’ve ever read.”
Twitter had said last week that it had not changed its policies, but rather that it would rely more on de-amplification of violative tweets in the future. The new approach, the company’s post said, was “freedom of speech, not freedom of reach.”
Balk’s Law, Software as Speech, and the Real Tradeoffs of Humans at Scale
Justice Clarence Thomas and two other conservative justices voted against putting the Texas law on hold. (Liberal Justice Elena Kagan did, too, but some have interpreted her vote as a protest against the “shadow docket,” where the ruling happened.)
Only a useful idiot could support the laws in Texas and Florida. The rules are rigged to punish political targets. They attack Big Tech platforms because those platforms have power, while ignoring the near-monopolies of other companies, like the internet service providers that control the chokepoints letting anyone access those platforms. There is no saving a movement so intellectually bankrupt that it exempted media juggernaut Disney from its speech laws because of the company’s spending power in Florida, then subsequently proposed blowing up the entire copyright system to punish the company for stepping out of line.
Even as they rant about tech platform censorship, many politicians are trying to ban children from finding media about people who are trans, gay, or gender-nonconforming. A Republican state delegate in Virginia invoked an obscenity law to try to stop Barnes & Noble from selling a graphic memoir and a young adult novel; the effort failed, in a victory for freedom of expression. The panic over “grooming” impacts all Americans. Even as Texas tries to stop Facebook from kicking off violent insurrectionists, it’s suing Netflix for distributing the Cannes-screened film Cuties under a constitutionally dubious law against “child erotica.”
But once again, there’s a real and meaningful tradeoff here: if you take the First Amendment at its broadest possible reading, virtually all software code is speech, leaving software-based services impossible to regulate. Airbnb and Amazon have both used Section 230 to defend against claims of providing faulty physical goods and services, an approach that hasn’t always worked but that remains open for companies whose core services have little to do with speech, just software.
Balk’s Law is obviously an oversimplification, of course. Internet platforms change us: they incentivize specific kinds of posts, subjects, linguistic quirks, and interpersonal dynamics. But still, the internet is humanity at scale, crammed into spaces owned by a few powerful companies. Humans at scale can be ugly, and much of that ugliness, from vicious abuse to threats of terrorism, never rises to the level of a viable legal case.
When billionaire businessman Elon Musk tweeted last week that “the bird is freed,” Felix Ndahinda saw a threat on the horizon.
Even so, Ndahinda expects that Musk’s pledges to reduce Twitter’s oversight of social-media posts will add to the momentum and influence of hate speech in the Great Lakes region and beyond. A permissive culture in which anything goes will always add to those trends, he says. “It will embolden actors and increase the virulence in their hate speech.”
Twitter’s policies to restrict hate speech and misinformation about certain topics — such as COVID-19 — reduce the chances that such tweets will be amplified, so loosening those policies would allow them to find larger audiences.
The release of internal documents from Twitter’s prior leadership comes as Musk attempts to reshape the platform in his image. The billionaire has previously said he wants to do away with permanent user bans, and Twitter has recently begun restoring the accounts of thousands of users, including some incendiary figures. Musk has said he does not want the service to become a free-for-all, but that he will moderate some content in a way that is consistent with its previous policies.
Normally, false narratives start on fringe platforms, says Gianluca Stringhini, who studies disinformation at Boston University. When those narratives creep onto mainstream platforms such as Twitter or Facebook, they explode, spiraling out of control because they are seen by everyone and covered by the media.
James Piazza, who studies terrorism at Pennsylvania State University in University Park, worries about people with public stature using dehumanizing speech on social media. That, he says, is the situation in which you can have more violence.
And regulations on the way from the European Union could make Musk’s ‘free speech’ rhetoric impractical as well, says Rebekah Tromble, a political scientist at George Washington University in Washington DC. The Digital Services Act will require social-media companies to mitigate risks caused by illegal content and misinformation. It would be difficult for platforms to maintain Europe-only policies and practices, according to Tromble; when the required measures are fundamental to how a system works, introducing them will affect the platform as a whole.
Tromble expects a period of chaos as Musk and users try to figure out what is acceptable on the platform. Then, she says, it is likely to settle down into a system much like the Twitter of old.
Twitter’s new owner says he plans to make it possible for users to see whether the company has restricted how many other users can view their posts. In doing so, Musk is taking on an issue that has become a rallying cry for conservatives who claim that the social network has suppressed or “shadowbanned” their content.
Musk said the software update would show a user’s actual account status: whether the account has been shadowbanned, why, and how to appeal. He did not provide additional details or a timetable.
His announcement came amid a new release of internal Twitter documents on Thursday, sanctioned and cheered by Musk, that once again placed a spotlight on the practice of limiting the reach of certain, potentially harmful content — a common practice in the industry that Musk himself has seemingly both endorsed and criticized.
The second set of the so-called Twitter Files, shared by journalist Bari Weiss on Twitter, focused on how the company has restricted the reach of certain accounts, tweets or topics that it deems potentially harmful, including by limiting their ability to appear in the search or trending sections of the platform.
Weiss claimed that all of these actions were taken without users’ knowledge. But Twitter has long been transparent about the fact that it may limit certain content that violates its policies and, in some cases, may apply “strikes” that correspond with suspensions for accounts that break its rules. In the case of a strike, users get a notification that their accounts have been temporarily suspended.
“Fanned the Flames of War”: A Lawsuit Against Meta in Kenya Over Hate Speech in Ethiopia
In both cases, the internal documents appear to have been provided directly to the journalists by Musk’s team. Musk shared Weiss’ thread, captioning it “The Twitter Files, Part Duex!!” alongside popcorn emoji.
Weiss cited several examples of right-leaning figures whose accounts had moderation actions taken against them, but it is not clear whether such actions were equally taken against left-leaning or other accounts.
Meareg Amare, a chemistry professor at Bahir Dar University in Ethiopia, was shot and killed on November 3, 2021. Amare, who was ethnically Tigrayan, had been targeted the month before in a series of Facebook posts alleging that he had stolen equipment from the university, sold it, and used the proceeds to buy property. In the comments, people called for his death. His son, Abrham Amare, appealed to Facebook for weeks to have the posts removed, without success. Eventually, Abrham received a response from Facebook: one of the posts targeting his father, shared by a page with over 50,000 followers, had been removed.
Today, Abrham, as well as fellow researchers and Amnesty International legal adviser Fisseha Tekle, filed a lawsuit against Meta in Kenya, alleging that the company has allowed hate speech to run rampant on the platform, fueling widespread violence. The suit calls for the company to hire more moderation staff and to de-prioritize offensive content.
“Facebook can no longer be allowed to prioritize profit at the expense of our communities. Like the radio in Rwanda, Facebook has fanned the flames of war in Ethiopia,” says Rosa Curling, director of Foxglove, a UK-based nonprofit that tackles human rights abuses by global technology giants and is supporting the petition. The company, she says, has a number of clear ways to ensure its work is safe and fair, including hiring more local staff and adjusting its software to make viral hate less likely.
Ethiopia’s Civil War, the Supreme Court’s Docket, and Free Press’s Call for New Twitter Leadership
Ethiopia has been embroiled in civil war since 2020, when Prime Minister Abiy Ahmed responded to attacks on federal military bases by sending troops into Tigray, a region in the country’s north that borders neighboring Eritrea. A report released in April by Human Rights Watch and Amnesty International found substantial evidence of crimes against humanity and a campaign of ethnic cleansing against ethnic Tigrayans.
The Supreme Court is set to hear back-to-back oral arguments this week in two cases that could significantly reshape online speech and content moderation.
Editor’s Note: Nora Benavidez is the senior counsel and director of digital justice and civil rights at Free Press, a media and technology justice advocacy organization. Free Press is a founding member of the coalition. The opinions expressed in this commentary are her own. View more opinion on CNN.
It should have surprised few people that, when Musk polled Twitter users on whether he should step down as head of the company, more than 50% of respondents answered yes.
We at Free Press agree that Musk must step aside. His replacement needs to be someone who understands that the health and safety of the platform’s users must come first.
His amnesty for suspended accounts has given the go-ahead to neo-Nazis, right-wing activists, and other figures who have spread hatred to millions of followers.
With regard to reversals, Twitter’s potential new leadership needs to undo the decision to allow Covid-19 misinformation and disinformation to spread unchecked across the social network. It needs to retire Twitter’s pay-to-play blue checkmark feature, which allows verified users to post longer videos and have their content prioritized at the top of replies, mentions, and searches. And it must cancel Musk’s “general amnesty” for accounts that were suspended before he took over.
Why Do Websites Need Section 230? The Tech Critics and the Supreme Court
Tech companies involved in the litigation have cited the 27-year-old statute as part of an argument for why they shouldn’t have to face lawsuits alleging they gave knowing, substantial assistance to terrorist acts by hosting or algorithmically recommending terrorist content.
The central provision of the law is that websites and their users can’t be considered publishers or speakers of other people’s content. In plain English, that means that any legal responsibility attached to publishing a given piece of content ends with the person or entity that created it, not the platforms on which the content is shared or the users who re-share it.
Trump’s executive order targeting Section 230 faced a number of legal and procedural problems, not least of which was the fact that the FCC is not part of the judicial branch; that it does not regulate social media or content moderation decisions; and that it is an independent agency that, by law, does not take direction from the White House.
The two parties still hate Section 230, even though they cannot agree on exactly why it is flawed or what policies should take its place.
The deadlock has thrown much of the momentum for changing Section 230 to the courts — most notably, the US Supreme Court, which now has an opportunity this term to dictate how far the law extends.
Tech critics have called for added legal exposure and accountability. “The massive social media industry has grown up largely shielded from the courts and the normal development of a body of law. It is highly irregular for a global industry that wields staggering influence to be protected from judicial inquiry,” wrote the Anti-Defamation League in a Supreme Court brief.
Defenders of the law say narrowing Section 230 would undermine what has allowed the internet to flourish, and not just for the tech giants. It would potentially put many websites and users into unwitting and abrupt legal jeopardy, they say, and it would dramatically change how some websites operate in order to avoid liability.
Source: https://www.cnn.com/2023/02/18/tech/section-230-explainer/index.html
Reddit, the Tech Platforms, and the Biden Administration Weigh In at the Supreme Court
“‘Recommendations’ are the very thing that make Reddit a vibrant place,” wrote the company and several volunteer Reddit moderators. “It is users who upvote and downvote content, and thereby determine which posts gain prominence and which fade into obscurity.”
People would stop using Reddit, and moderators would stop volunteering, the brief argued, under a legal regime that “carries a serious risk of being sued for ‘recommending’ a defamatory or otherwise tortious post that was created by someone else.”
The outcome of the oral arguments, scheduled for Tuesday and Wednesday, could determine whether tech platforms and social media companies can be sued for recommending content to their users or for supporting acts of international terrorism by hosting terrorist content. The arguments mark the first time the Court has reviewed the hot-button federal law that protects websites from lawsuits over user-generated content.
The allegation seeks to carve out content recommendations so that they don’t receive Section 230 protections, potentially exposing tech platforms to more liability for how they run their services.
The Biden administration has also weighed in on the case. In a brief filed in December, it argued that Section 230 does protect Google and YouTube from lawsuits “for failing to remove third-party content, including the content it has recommended.” But, the government’s brief argued, those protections do not extend to Google’s algorithms because they represent the company’s own speech, not that of others.
The company argued that a terrorist group’s use of its platform to promote itself is not, by itself, grounds for holding it responsible under the antiterror law. The Biden administration, in its brief, agreed with that view.
The Biden Administration and the Texas and Florida Laws at the Supreme Court
A number of petitions are currently pending asking the Court to review the Texas law and a similar law passed by Florida. The Court last month delayed a decision on whether to hear those cases, asking instead for the Biden administration to submit its views.