Discord moves away from lifetime bans: a new warning system, and AI tools to detect sexual exploitation
I sat in on a meeting with Redgrave, Badalich, and four other members of the trust and safety team. The question on the table: can the warning system built for individual users be adapted for entire servers?
When someone crosses the line repeatedly, Discord will now try not to ban them forever. Instead, it will ban them for a year, a huge reduction in sentencing for an industry where lifetime bans are the norm.
Discord argues that its use of artificial intelligence does not amount to an invasion of privacy. “It allows us to deploy more technology to identify problematic content, but it’s in a way that doesn’t feel like it’s a violation of privacy,” explains John Redgrave, Discord’s vice president of trust and safety, in an interview with The Verge. “This is our way of saying that we are not going to intrude on people’s privacy, and that we will give people tools that enrich their experience from a safety perspective.”
Redgrave joined the platform two years ago, after Discord acquired Sentropy, the company he co-founded to build online harassment and abuse detection tools. Discord plans to expand these AI models beyond just blurring. “It gives us a mechanism by which we can introduce additional models in the future that can protect against other forms of challenging content,” explains Redgrave, who says Discord is also working on a grooming model to detect sexual exploitation on its service.
A precedent from Apple: scanning and blurring for child safety
Apple had to drop some controversial child protection features last year after a privacy outcry, but it was eventually able to introduce an opt-in feature in its Family Sharing setup. On children’s accounts, Apple can scan incoming and outgoing pictures for sexually explicit material, blurring images when it detects something explicit.
Search on Discord mobile is also being improved soon, with tappable search filters and an improved notifications tab with an auto-clear feature. This week, a new Remix feature will let you modify images from other people’s messages and then share them on Discord.
If you’re interested in avatar decorations and profile effects, Discord’s in-app store is arriving for all users soon. It offers a range of profile decorations, so your avatar can have an animation over it, or people previewing your profile can see effects. Nitro members already have access to the Discord Shop and get a discount on avatar decorations and profile effects.
Last but not least, Discord is rolling out some improvements for apps and developers. The premium app subscriptions that were previously US-only will now arrive in the UK and Europe. Discord is also experimenting with making apps usable in more places throughout its main app, and looking at ways to let users run apps in a server without an admin having to add them first.
Today, let’s talk about how the traditional platform justice system is showing signs of a new reform movement. If it succeeds at Discord, its backers hope the initiative could lead to better behavior around the web.
Discord’s San Francisco campus has well-stocked micro-kitchens and employees bustling in and out of conference rooms.
This is a place that was built by gamers, and it’s obvious the moment you step through the glass doors. When I visited the office on Wednesday, three employees sat in a row playing a first-person shooter.
Video games are meant to be fun, but the community around them can be deeply toxic. Angry gamers hurl slurs, doxx rivals, and, in the most dangerous cases, summon SWAT teams to their targets’ homes.
Beyond gaming: a bigger platform, bigger controversies
By now, the eight-year-old Discord hosts much more than gaming discussions. The company reports more than 150 million monthly users, and its biggest servers are now dedicated to music, education, science, and artificial intelligence.
Along with the growing user base have come high-profile controversies over what users are doing on its servers. In April, the company made headlines when leaked classified documents from the Pentagon were found on the platform. Discord faced earlier scrutiny over its use in 2017 by white nationalists planning the “Unite the Right” rally in Charlottesville, VA, and later when the suspect in a racist mass shooting in Buffalo, NY was found to have uploaded racist screeds to the platform.
Most platforms have a three-strikes-and-you’re-out policy. Break the rules a couple times and you get a warning; break them a third time and your account is nuked. In many cases, strikes are forgiven after some period of time — 30 days, say, or 90. The nice thing about this policy from a tech company’s perspective is that it’s easy to communicate, and it “scales”: an automated system can issue strikes, review appeals, and ban accounts without any human oversight.
But the three-strikes approach has real problems. One, it isn’t proportionate: it metes out the same punishment for minor and major violations alike. Two, it doesn’t rehabilitate: most users who receive strikes probably don’t deserve to be permanently banned, but if you want them to stay, you have to figure out how to educate them.
Three, such systems lack nuance. If a teenage girl posts a picture of self-injury, the platform should remove the picture. But the girl doesn’t need to be banned from social media; she needs to be pointed toward resources that can help her.
It starts with a DM: users who break the rules will receive an in-app message directly from Discord letting them know they’ve received either a warning or a violation, depending on the severity of what happened and whether Discord has taken action.
However, some violations are more serious than others, and Discord says it will take appropriate action depending on the severity of the violation. The company will continue to have a zero-tolerance policy for violent extremism and content that sexualizes children.
It’s a welcome acknowledgement of the role social networks play in people’s lives, particularly young people’s — and a rare embrace of the idea that most wayward users can be rehabilitated, if only someone would take the time to try.
The new system has been tested with a small group of servers and is about to be rolled out more broadly. Along with the new warning system, the company is introducing a feature called Teen Safety Assist that is enabled by default for younger users. When switched on, it scans incoming messages from strangers for inappropriate content and blurs potentially sensitive images in direct messages.
Source: Inside Discord’s reform movement for banned users
I appreciated the chance to sit in on the meeting, which was on the record, since the company is still in the early stages of building a solution. As with most subjects related to content moderation, untangling the various equities involved can be very difficult.
Take one question from the meeting: should moderators be considered just as responsible for harms in a server as the owner? It turns out Discord doesn’t have a consistent definition of who counts as an active moderator — some servers grant moderation permissions automatically to users who join. If the server goes rogue and a “moderator” has never even posted in it, why should they be held accountable?
It feels like an impossible knot to untangle. In the end, the team settled on analyzing server information along with the behavior of the server’s admins and users in order to decide whom to try to rehabilitate.
What the meeting revealed about trust and safety — and how it can improve
It wasn’t perfect; nothing in trust and safety ever is. “What we’re proposing is a somewhat different case of over- and under-enforcement,” one product policy specialist said, half-joking.
But I left convinced that the company’s systems would improve over time. Trust and safety teams are often caricatured as scolds and censors. As my visit to Discord was a welcome reminder, they can be innovative, too.