The terrible options for social media companies.


A Section 230 test case: the family of Nohemi Gonzalez sues Google

When Musk first announced his plan to take over Twitter, he pledged in an SEC filing to transform the platform to better serve “a functioning democracy.” His actions since suggest he is killing off free speech and democracy in order to save them.

In the 26 years since its passage in 1996, courts have interpreted Section 230 of the Communications Decency Act to shield online services from liability for their users’ content, laying the legal groundwork for the business model of websites like Facebook, Glassdoor, and community bulletin boards.

Gonzalez v. Google is a Section 230 case like many others. The family and estate of Nohemi Gonzalez, a California State University student who was killed while studying abroad in Paris during the 2015 terrorist attacks, sued Google, alleging that its subsidiary YouTube violated the Anti-Terrorism Act by providing substantial assistance to terrorists. The core of the dispute is not merely that Islamic State videos were hosted on YouTube, but that YouTube recommended them. The lawsuit alleges that YouTube selected which users to show the videos to based on their previous viewing habits on the site and predictions of how they would rate the videos. In other words, YouTube allegedly showed ISIS videos to those more likely to be radicalized.
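The complaint’s core technical claim describes a familiar pattern: a content-based recommender that matches new videos to a user’s viewing history and predicted ratings. YouTube’s actual system is proprietary and far more complex, so what follows is only a toy sketch of that general pattern; the data model, the tag-overlap scoring, and every name in it are illustrative assumptions, not details from the lawsuit or from Google.

```python
# A minimal, hypothetical sketch of viewing-history-based recommendation.
# Everything here (dict-based videos, "tags", the scoring rule) is invented
# for illustration; YouTube's real recommender is proprietary.
from collections import Counter

def recommend(user_history, catalog, top_n=5):
    """Rank unwatched videos by overlap with the tags a user already watches."""
    watched_tags = Counter(tag for video in user_history for tag in video["tags"])

    def score(video):
        # Higher score = more similar to what the user has already viewed,
        # a crude stand-in for the "predicted rating" the lawsuit describes.
        return sum(watched_tags[tag] for tag in video["tags"])

    unseen = [v for v in catalog if v not in user_history]
    return sorted(unseen, key=score, reverse=True)[:top_n]

# The alleged dynamic: a user whose history leans one way gets more of it.
history = [{"id": 1, "tags": ["politics", "conflict"]}]
catalog = [
    {"id": 2, "tags": ["cooking"]},
    {"id": 3, "tags": ["politics", "conflict", "recruitment"]},
]
print(recommend(history, catalog, top_n=1))  # the id=3 video ranks first
```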

A lot of politicians love the First Amendment. Many of them profess to hate another law: Section 230 of the Communications Decency Act. But the more they say about 230, the clearer it becomes that they actually hate the First Amendment and think Section 230 is just fine.

The First Amendment occupies a singular place in American law, and as the midterm elections approach, free speech has become one of the most pressing issues in America. Both sides of the political aisle are attacking long-accepted principles of speech law in ways that are irrational and deeply concerning.

Rather than seriously grappling with technology’s effects on democracy, many lawmakers and courts have channeled a cultural backlash against “Big Tech” into glib sound bites and political warfare. The surface-level debate over internet regulation is full of mutually exclusive demands, and some of the people most vocal about defending the First Amendment are the most open to dismantling it.

The key provision of Section 230 reads: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

Pandemic science and free speech: why the First Amendment protects shaky claims

But making false claims about pandemic science isn’t necessarily illegal, so repealing Section 230 wouldn’t suddenly make companies remove misinformation. There’s a good reason why the First Amendment protects shaky scientific claims. Think of how constantly our early understanding of covid shifted — and now imagine researchers and news outlets getting sued for publishing good-faith assumptions that were later proven incorrect, like covid not being airborne.

New York Attorney General Letitia James wants to ban the distribution of live videos of mass shooters because they glamorize the crime. Legal experts like Danielle Citron have also proposed fixing specific problems created by Section 230, like its de facto protections for small sites that solicit nonconsensual pornography or other illegal content. There are serious criticisms of these approaches, but they are attempts to address real legal tradeoffs.

The thing is, these complaints get a big thing right: in an era of unprecedented mass communication, it’s easier than ever to hurt people with illegal and legal speech. But the issue is far bigger and more complicated than encouraging more people to sue Facebook — because, in fact, the legal system has become part of the problem.

Source: https://www.theverge.com/23435358/first-amendment-free-speech-midterm-elections-courts-hypocrisy

Defamation in the social media era: Alex Jones, Johnny Depp, and Amber Heard

It isn’t clear whether even the massive verdicts against Alex Jones will matter. The Sandy Hook families have struggled to collect after Jones declared corporate bankruptcy, a move that could keep much of his money out of reach indefinitely, and he used the court proceedings themselves to hawk health supplements to his followers. Legal fees and damages have almost certainly hurt his finances, but the legal system has conspicuously failed to meaningfully change his behavior. If anything, it provided another platform for him to declare himself a martyr.

Contrast this with the year’s other big defamation case: Johnny Depp’s lawsuit against Amber Heard, who had publicly identified as a victim of abuse (implicitly at the hands of Depp). Heard’s case was less cut-and-dried than the one against Jones, and she didn’t have the same social media savvy. The trial turned into a ritual public humiliation of Heard, fueled partly by the incentives of social media but also by courts’ utter failure to respond to the way that things like livestreams fed the media circus. Defamation claims can hurt people who have a reputation to maintain, while the worst offenders are beyond shame.

Up until this point, I’ve almost exclusively addressed Democratic and bipartisan proposals to reform Section 230 because those at least have some shred of substance to them.

Republican-proposed speech reforms are ludicrously, bizarrely bad. We’ve learned just how bad over the past year, after Republican legislatures in Texas and Florida passed bills effectively banning social media moderation because Facebook and Twitter were using it to ban some posts from conservative politicians, among countless other pieces of content.

As it stands, the First Amendment should almost certainly render these bans unconstitutional: they are straightforward government speech regulations. While the Eleventh Circuit blocked the Florida law, the Fifth Circuit Court of Appeals upheld the Texas one without explanation, publishing months later what is the most angrily incoherent First Amendment decision I think I have ever read.

In a blog post last week, Twitter said it had not changed its policies but that its approach to enforcement would rely heavily on de-amplification of violative tweets, something that Twitter already did, according to both the company’s previous statements and Weiss’ Friday tweets. “Freedom of speech,” the blog post stated, “not freedom of reach.”

Selective outrage, code as speech, and humanity at scale

Three conservative justices, including Clarence Thomas, voted to leave the law in effect. (Liberal Justice Elena Kagan did, too, though some have interpreted her vote as a protest against the “shadow docket” on which the ruling happened.)

But only a useful idiot would support the laws in Texas and Florida on those grounds. The rules are rigged to punish political targets at the expense of basic consistency: internet service providers control the chokepoints through which anyone accesses the Big Tech platforms, yet they escape attack for their power. There is no saving a movement so intellectually bankrupt that it exempted media juggernaut Disney from Florida’s speech law because of the company’s spending power in the state, then proposed blowing up the entire copyright system to punish it for stepping out of line.

Even as politicians rant about tech platform censorship, many of the same politicians are trying to block children from finding media that acknowledges the existence of trans, gay, and gender-nonconforming people. On top of getting books pulled from schools and libraries, a Republican state delegate in Virginia dug up a rarely used obscenity law to try to stop Barnes & Noble from selling the graphic memoir Gender Queer and the young adult novel A Court of Mist and Fury, a suit that, in a victory for a functional American court system, was thrown out earlier this year. And the disingenuous panic over “grooming” doesn’t only affect LGBTQ Americans: the same Texas that is trying to stop Facebook from kicking off violent insurrectionists has prosecuted Netflix for distributing the film Cuties under a constitutionally dubious law against “child erotica.”

But once again, there’s a real and meaningful tradeoff here: if you take the First Amendment at its broadest possible reading, virtually all software code is speech, leaving software-based services impossible to regulate. Section 230 has likewise been used to defend against claims over faulty physical goods and services, an approach that has not always worked but remains open to companies whose core offerings have little to do with speech beyond the fact that they run on software.

Balk’s Law (roughly, that everything you hate about the internet is actually everything you hate about people) is obviously an oversimplification. Internet platforms change us: they incentivize specific kinds of posts, subjects, linguistic quirks, and interpersonal dynamics. But the internet is full of people, crammed into spaces owned by powerful companies, and it turns out humanity at scale can be unbelievably ugly. Vicious abuse can come from a single person, or it can be spread across a campaign of lies, threats, and harassment in which no individual act rises to the level of a legal case.

Musk’s “freedom of speech, not freedom of reach” and the Weiss Twitter Files

According to Musk, a coming software update will show users their true account status, including whether they have been shadowbanned and why. He did not give any further information or a timetable.

The announcement came a day after Musk publicly endorsed the practice of limiting the reach of certain potentially harmful content on his service.

Last month, Musk said Twitter’s “new” policy is “freedom of speech, not freedom of reach,” echoing an approach that is something of an industry standard. Negative and hateful tweets, he said, will be deboosted and demonetized so that no advertising or other revenue can be made from them.

Some conservatives accused Musk, who has said he now votes Republican, of continuing a practice they had long opposed. The clash highlights an underlying tension between Musk’s promises of a more maximalist approach to “free speech” and his efforts to assure advertisers and users that content moderation guardrails will remain.

The second set of the so-called Twitter Files, shared by journalist Bari Weiss on Twitter, focused on how the company has restricted the reach of certain accounts, tweets or topics that it deems potentially harmful, including by limiting their ability to appear in the search or trending sections of the platform.

Weiss suggested that such actions were taken “all without users’ knowledge.” But Twitter has long been transparent about the fact that it may limit certain content that violates its policies and, in some cases, may apply “strikes” that correspond with suspensions for accounts that break its rules. In the case of strikes, users get a notification that their accounts have been temporarily suspended.

The Twitter Files, Part Deux (and an editor’s note from Free Press)

In both cases, the internal documents appear to have been provided directly to the journalists by Musk’s team. Musk shared Weiss’ thread in a tweet on Friday, adding “The Twitter Files, Part Deux!!” along with two popcorn emojis.

Weiss offered several examples of right-leaning figures who had moderation actions taken on their accounts, but it’s not clear if such actions were equally taken against left-leaning or other accounts.

Editor’s note: Nora Benavidez is senior counsel and director of digital justice and civil rights at Free Press, a founding member of the #StopToxicTwitter coalition. The opinions expressed in this commentary are her own. Read more opinion at CNN.

Last week, Twitter suspended the accounts of several journalists who had been closely covering its new owner, Elon Musk. Then, Musk offered to allow several of the blocked accounts to return if they agreed to delete tweets that he falsely claimed shared his location. And over the weekend, the company abruptly reversed its decision to ban links to rival platforms.

So when Musk asked his followers Sunday night in an unscientific poll whether he should step down as the head of Twitter, it likely wasn’t surprising to many that more than 57% of respondents answered “yes” – though that probably wasn’t the result Musk expected from a site that’s home to some of his most ardent fans.

We at Free Press agree that Musk must step aside. Whoever replaces him as CEO needs to understand, at the most basic level, that a social media platform succeeds only when it puts the health and safety of its users first.

Reinstated hate, Covid-19 misinformation, and the problem with “general amnesty”

By reinstating previously banned accounts, Musk has given neo-Nazis and right-wing activists who spread hatred to millions of followers an opportunity to go back to their old ways.

Twitter’s potential new leadership needs to reverse the decision to stop enforcing the platform’s Covid-19 misinformation policy. They need to retire Twitter’s pay-to-play blue checkmark feature, which allows paying “verified” users to post longer videos and have their content prioritized at the top of replies, mentions and searches. And they must cancel Musk’s “general amnesty” plan for accounts that were suspended before he took over.

Editor’s note: Katie Harbath, a former public policy director at Facebook, is a fellow at the Bipartisan Policy Center. BPC accepts funds from some tech companies to support its work on election information for users. The views expressed in this commentary are the author’s own. Read more opinion at CNN.

It is important to remember that the problem can’t be tackled simply by looking at individual pieces of content. A multi-pronged approach is needed, one that considers not just the content itself but also the behavior of people on the platform, how much reach content should get, and how much control users have over what they see in their newsfeeds.

A platform needs to balance everyone’s right to free speech with users’ ability to safely express what they think. First, every platform, even those that claim free expression as their number one value, must moderate content.

Remove, reduce, inform: what platforms can do about legal but awful content

Child pornography must be removed under the law. However, users — and advertisers — also don’t want some legal but horrible content in their feeds, such as spam or hate speech.

Moreover, no one likes being harassed by an online mob. All that does is drive people away or silence them, which isn’t a true free speech platform either. A recent example: Twitter’s former head of trust and safety was forced to leave his home because of the volume of threats he received. Other platforms have increased their efforts to shut down coordinated harassment.

Second, there are more options than simply leaving content up or taking it down. Meta characterizes this as “remove, reduce and inform”: instead of taking down potentially problematic but non-violating content, platforms can reduce its reach and/or add informative labels that give users more context.

This middle ground is necessary because many of the most engaging posts are borderline, meaning they go right up to the line of the rules without crossing it. A platform may not be comfortable removing clickbait, for instance, but it may still want to act because users and advertisers don’t want to see it.
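To make the three options concrete, here is a toy decision function under that framing; the classifier scores, thresholds, and label text are invented for illustration and are not Meta’s or Twitter’s actual rules or systems.

```python
# A toy sketch of the "remove, reduce, inform" framework described above.
# Scores, thresholds, and labels are illustrative assumptions only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    action: str                  # "remove", "reduce", "inform", or "allow"
    reach_multiplier: float      # applied to the post's distribution
    label: Optional[str] = None  # extra context shown alongside the post

def moderate(violation_score: float, borderline_score: float) -> Decision:
    """Map hypothetical classifier scores to one of the moderation actions."""
    if violation_score > 0.9:       # clearly breaks the rules: take it down
        return Decision("remove", 0.0)
    if borderline_score > 0.7:      # legal but right up against the line
        return Decision("reduce", 0.2)  # demote in feeds, search, trending
    if borderline_score > 0.4:      # context would help the viewer
        return Decision("inform", 1.0, label="Context from fact-checkers")
    return Decision("allow", 1.0)

print(moderate(violation_score=0.2, borderline_score=0.8))
# Decision(action='reduce', reach_multiplier=0.2, label=None)
```

The point of the middle branches is exactly the borderline-content problem described above: the post stays up, but it travels less far or carries more context.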

Source: https://www.cnn.com/2022/12/21/opinions/twitter-files-content-moderation-harbath/index.html

Transparency, user control, and where we go from here

Some argue, as they did about one installment of the Twitter Files, that any reduction in reach is a scandal. But others, such as Renee DiResta of the Stanford Internet Observatory, have famously argued that freedom of speech does not mean freedom of reach.

This brings us to a third point: transparency. How do platforms make these decisions, and who makes them? The controversy around shadow banning, the term many use for when content is shown to fewer people than it otherwise would be without the creator knowing, isn’t just about people being upset that their content is getting less reach.

They are angry that they don’t know what happened, and platforms need to do more on this front. For instance, Instagram recently announced that people can check their account status to see whether their content is eligible to be recommended. Under its rules, accounts that share sexually explicit material, clickbait, and certain other types of content are not recommended to users who don’t follow them.

Lastly, platforms can give users more control over the types of moderation they are comfortable with. The political scientist Francis Fukuyama has proposed “middleware”: third-party software that would let people choose how content is selected and ranked in their feeds, allowing them to determine what they need to feel safe online. Some platforms, such as Facebook, already allow people to switch from an algorithmically ranked feed to a chronological one.
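The middleware idea is easiest to picture as a pluggable ranking function: the platform supplies the posts, and the user picks the algorithm that orders them. The sketch below is a deliberately simplified illustration of that design with invented field names; it is not drawn from any actual platform API.

```python
# A toy sketch of "middleware": the platform exposes the raw feed, and a
# user-chosen ranking function decides what appears first. Field names and
# the two example rankers are illustrative assumptions.
from typing import Callable

Post = dict  # e.g. {"text": ..., "timestamp": ..., "engagement_score": ...}
Ranker = Callable[[list[Post]], list[Post]]

def chronological(posts: list[Post]) -> list[Post]:
    # Newest first, like Facebook's optional chronological feed.
    return sorted(posts, key=lambda p: p["timestamp"], reverse=True)

def engagement_ranked(posts: list[Post]) -> list[Post]:
    # Most engaging first, the typical algorithmic default.
    return sorted(posts, key=lambda p: p["engagement_score"], reverse=True)

def build_feed(posts: list[Post], ranker: Ranker) -> list[Post]:
    """The platform provides the posts; user-chosen middleware orders them."""
    return ranker(posts)

posts = [
    {"text": "old but viral", "timestamp": 1, "engagement_score": 99},
    {"text": "new and quiet", "timestamp": 2, "engagement_score": 1},
]
print(build_feed(posts, chronological)[0]["text"])      # "new and quiet"
print(build_feed(posts, engagement_ranked)[0]["text"])  # "old but viral"
```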

Tackling the problems of speech and safety is very difficult. Our society is in the middle of renegotiating what speech is acceptable online and how we hold people accountable for it.

This will require a whole-of-society approach. We need more transparency about how platforms make these decisions; regulators, civil society, and academics willing to say how they would make some of these difficult calls; governments finding workable ways to regulate platforms; and more options for users to control the types of content they see.