Tech Companies Want to Tackle Harassment in Gaming

But Riot, Microsoft, Intel, and others have made clear that they're doing it on their terms.
A 2020 Anti-Defamation League survey revealed that 81 percent of American adults experienced harassment in online multiplayer games, and the numbers go up for women and people of color. Illustration: Derek Abella

Competitive Counter-Strike: Global Offensive player Adam Bahriz will probably kill you in-game. He’s so skilled that he landed a contract with Team Envy, an esports organization that’s home to some of North America’s highest-ranking competitive players. Bahriz also happens to be deaf and legally blind, with a condition known as HSAN 8.

“What do you guys want to do? Just bust out A? I can buy smoke,” Bahriz says. His teammates immediately jump in to mock him and shut him down. “You’re just gonna get blocked,” one of them says. “We know you’re trolling,” another says. “So annoying.” “You’re muted already.”

“OK, I won’t talk, sorry,” he says, resignedly.

Bahriz spends the rest of the game in silence and even starts crying, revealing the very real, potent effects that bullying has on the gamers who experience it. It’s everything that’s wrong with toxic gaming culture, where insults are thrown freely, bullying happens regularly, and everything from racism and misogyny to homophobia, transphobia, and ableism is fair game. “This incident made me feel super depressed,” Bahriz tells me. “I simply want to have a fun time playing a game—but a speech impediment that is beyond my control makes it difficult.” Bahriz says the toxic teammates eventually kicked him from the game, and although “most of the time people are toxic, it is rare to actually be kicked from the game. That’s why it was so upsetting. You can mute toxic people, but you cannot prevent your whole team ganging up to kick you for no reason other than a speech issue.”

In 2017, a Twitch streamer, Nicole Smith, recorded the verbal abuse she received while playing Overwatch.

“Go back to the kitchen,” one teammate said. 
“This is the reason why girls should not do anything,” another chimed in.
“Can you actually go and die?”

Much like Bahriz, Smith was met with a barrage of insults, harassment, and, in her case, misogynistic comments. The abuse Smith has to endure just to play video games is reminiscent of GamerGate, in which women in gaming and games journalism (as well as anyone who spoke up to defend them) endured weeks, months, and in some cases years of harassment, including death threats, doxing, and stalking. GamerGate changed how the industry responds to online harassment: some game developers and publishers rolled out their own initiatives to combat in-game toxicity, and many of those same companies drew widespread criticism for waiting until people’s lives were in danger to take harassment seriously.

A 2020 Anti-Defamation League survey revealed that 81 percent of American adults experienced harassment in online multiplayer games, compared to 74 percent in 2019, while 70 percent were called offensive names in online multiplayer games and 60 percent were targets of trolling, or “deliberate and malicious attempts to provoke [other gamers] to react negatively.” Overall, that is a 7-percentage-point increase from 2019 to 2020.

Bahriz no longer receives as much abuse as he used to, but when he does, he usually just mutes the offenders and tries his best “to not let the toxicity distract mentally from the game,” he says. For others, however, simply muting doesn’t work, if it’s even available in the game they’re playing. In 2019, another ADL survey found that 22 percent of American adults who were harassed in online multiplayer games stopped playing certain games altogether because of the harassment.

Game Developers Want to Fight Back but on Their Terms

In 2017, Activision Blizzard, Epic, Intel, Microsoft, Twitch, and over 200 other companies formed the Fair Play Alliance to, as its website says, “encourage fair play and healthy communities.” In 2018, Blizzard publicly named 180 Overwatch players banned for toxic behavior, including being abusive in audio chats and deliberately throwing games. Not bad for a game that didn’t even have the option to report abusive players upon its 2016 release. In 2019, Ubisoft issued instant half-hour bans for Rainbow Six Siege players if the company detected slurs in the text chat. Ubisoft's code of conduct says this includes “any language or content deemed illegal, dangerous, threatening, abusive, obscene, vulgar, defamatory, hateful, racist, sexist, ethically offensive or constituting harassment.” Also that year, Electronic Arts established a Players Council with an inaugural summit at Gamescom in Cologne, Germany.

Riot Games, a company that’s been in the news for toxicity internally as well as toxicity in its games, is also working to address the issue. In 2012, it introduced the Tribunal System in League of Legends, under which players received temporary bans for behavior that other players deemed unacceptable. (The Tribunal System no longer exists.) In 2016, Riot published a report in Scientific American concluding that, based on its study of toxicity, adding in-game tips (among other things) decreased in-game toxicity by 25 percent, both in abusive behavior in lobbies and in matches containing abuse. As recently as April 2021, Riot changed its privacy policy to allow it to capture and evaluate a player’s voice communications when a report has been submitted about that player’s behavior, with the goal of cutting down on toxicity in voice comms as well as in-game chat.

Weszt Hart, Riot’s head of player dynamics, tells me that the aim is to “reduce player pain.” He continues, “There should be minimal disruption. You should be able to just focus on the game and, hopefully, achieve the intent that we had, and that players have, you know, by coming together.” With regard to the tech behind this new voice-communication moderation strategy, Hart notes that there are “different approaches that currently exist for being able to take the audio, whether that’s text-to-speech, or potentially to actually understand the sentiment of what’s being said and making some actions on it. We’re letting the technologies prove themselves a bit. We’re looking at all the possibilities, and we’re narrowing down what we think is going to be the best approach, or approaches, because there’s no silver bullet.”

As for storing the data, Hart says that once Riot is satisfied with the best way to capture voice-communication data, it will be “the fastest way to take these reports.” The company will “process the audio, make good decisions, and then decide what we’re going to do from there.” So what happens to the data after a report is made? “We want to hold that for as little time as possible,” Hart says, “really just enough for us to be able to review it and then take appropriate action. Once that’s done, we want to delete it. It makes sense that the data should be stored regionally.”
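Riot hasn’t said how this pipeline will actually be built, but the workflow Hart describes—capture audio only when a report is filed, review it, act on it, then delete it, with storage kept regional—can be sketched roughly as follows. Every name, retention window, and penalty rule below is a hypothetical stand-in for illustration, not Riot’s actual system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

# Hypothetical sketch only: Riot has not published implementation details.
# The point is the shape of the flow Hart describes: audio captured only
# after a report, reviewed, acted on, and then deleted.

RETENTION = timedelta(days=14)  # placeholder; "as little time as possible" isn't quantified

@dataclass
class VoiceReport:
    reported_player: str
    audio_clip: bytes    # exists only because another player filed a report
    region: str          # Hart: the data "should be stored regionally"
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def transcribe(audio: bytes) -> str:
    # Stand-in for whatever speech-to-text or audio-sentiment pipeline Riot settles on.
    return "<transcript>"

def violates_policy(transcript: str) -> bool:
    # Stand-in for automated or human review of the transcript.
    return "slur" in transcript.lower()

def handle_report(report: VoiceReport, penalties: dict[str, int]) -> None:
    transcript = transcribe(report.audio_clip)
    if violates_policy(transcript):
        penalties[report.reported_player] = penalties.get(report.reported_player, 0) + 1
    report.audio_clip = b""  # delete the audio as soon as review is done

def purge_expired(reports: list[VoiceReport]) -> None:
    # Safety net: any clip not already deleted after review expires automatically.
    now = datetime.now(timezone.utc)
    for r in reports:
        if now - r.created_at > RETENTION:
            r.audio_clip = b""
```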

While much of the responsibility for dealing with toxicity rests with the game publishers and developers that host the platforms where the toxic behavior takes place, they’re not the only companies trying to do something about it.

Big Tech Wades In With Clumsy AI-Powered Solutions

At last month’s 2021 Game Developers Conference, Intel unveiled Bleep. The program, which uses AI-powered speech-recognition technology, is designed to fight back against in-game toxicity by detecting and redacting audio based on user preferences. It acts as another layer of audio moderation on top of what a platform or service offers, and it’s user-controlled: toggle and slider settings let users decide how much of each kind of hate speech they’re willing to hear. It covers a wide array of categories, such as “aggression, misogyny, LGBTQ+ hate, racism and xenophobia, white nationalism,” and more, according to the Verge’s preview of the technology. Notably, the N-word is a simple on/off toggle.
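Intel hasn’t made Bleep’s internals public, but the behavior it describes—per-category sliders running from none to all, plus a single on/off toggle for the N-word—amounts to a client-side filter along the lines of the sketch below. The category names, severity scores, and thresholds are illustrative assumptions based on the preview, not Intel’s actual design.

```python
from enum import Enum

# Illustrative sketch of the kind of user-side filter Bleep's sliders imply.
# Intel has not published Bleep's internals; the levels, categories, and
# severity scores here are assumptions, not its real design.

class Level(Enum):
    NONE = 0   # the user wants to hear none of this category: bleep everything
    SOME = 1
    MOST = 2
    ALL = 3    # let everything in this category through

# Each detected utterance is assumed to arrive from the speech-recognition
# layer tagged with a category and a severity score (0.0 mild .. 1.0 severe).
THRESHOLDS = {Level.NONE: 0.0, Level.SOME: 0.34, Level.MOST: 0.67, Level.ALL: 1.01}

def should_bleep(category: str, severity: float, prefs: dict[str, Level],
                 n_word_filter_on: bool, is_n_word: bool) -> bool:
    if is_n_word:
        return n_word_filter_on               # plain on/off toggle, per the preview
    level = prefs.get(category, Level.NONE)   # default to filtering everything
    return severity >= THRESHOLDS[level]      # redact anything above the user's chosen level

prefs = {"misogyny": Level.NONE, "aggression": Level.MOST}
print(should_bleep("misogyny", 0.2, prefs, n_word_filter_on=True, is_n_word=False))    # True: bleeped
print(should_bleep("aggression", 0.4, prefs, n_word_filter_on=True, is_n_word=False))  # False: allowed
```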

While the intention seems admirable, more than a few people have criticized the clunky way a user can opt in to “some” misogyny, as opposed to none. The idea that you can open an app and turn off all hate speech except, say, ableism is counterintuitive. What is the difference between some and most white nationalism? There are also questions about how exactly the tech works on a user’s computer. Does it start bleeping only after a certain number of slurs? Does it favor bleeping out slurs aimed at one group more than others? Is there a hierarchy of groups the tech is most sensitive to?

Teaming with Spirit AI, a company that specializes in AI speech recognition, Intel has been working on Bleep for at least the past two years. But online harassment can be hard to moderate, especially in audio and in real time. Even the most sophisticated speech-recognition technology can fail to pick up nuanced toxicity, and although it can identify straightforward slurs, audio abuse isn’t just slurs and bad words. Just look at the abuse Bahriz has to deal with.

“The issue with Bleep is that it is very much focused around specific platforms, and it’s not going to be available on the majority of platforms. So I think this is part of the problem,” says Nigel Cannings, CTO of Intelligent Voice, a company that specializes in speech recognition and biometric technology. He’s referring to the fact that Bleep, when it is available, will likely run only on Intel-powered systems, which don’t represent the majority of platforms on which people play video games. When it comes to using sliders and toggles to select “levels” of abuse or toxicity, Cannings agrees that it’s problematic, to say the least. “The idea that you can have levels of misogyny is just insane to me. You either censor the stuff out or not—you don’t give people levels.”

“What I assume is that, if you turn all of the dials up, it probably over-filters,” he says. “The most important thing to consider with speech-recognition technology is that it is not 100 percent accurate. With video games, as people get more and more excited, certain words become harder to detect.” There are several factors that can cause speech-recognition technology to fail. Accents, levels of excitement, microphone modulation, and more all complicate the issue. Hart adds another complication: “There's understanding all of the languages, but then there's understanding how people speak, how gamers speak, and what are the terms that they use? They're not universal. And when those terms get used, they can dramatically change. A lot of gamers may know that GG is easy, right? And if you say GG, easy at the beginning of the game, it's funny, right? Because no one's played yet. And it's kind of a joke, right? I would take it funny. Others might not. But the end of the game. If you lost, that might be funny. Right? If you won, that is pretty rude. And so for the technology to not just transcribe but to help us be able to take action is an additional challenge on top of everything else.”
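To make Hart’s “GG, easy” example concrete: the same transcript means different things depending on when it’s said and who says it, which is exactly the context a pure speech-to-text pipeline throws away. The toy function below is not Riot’s logic, just an illustration of why moderation needs game state as well as words.

```python
# Toy illustration of Hart's point: the phrase "GG, easy" can't be judged from
# the transcript alone. This is not Riot's logic, just a sketch of why
# transcription by itself isn't enough to decide whether to take action.

from typing import Optional

def classify_gg_easy(phase: str, speaker_won: Optional[bool]) -> str:
    """phase is 'pre-game' or 'post-game'; speaker_won is None before the match ends."""
    if phase == "pre-game":
        return "joke"               # nobody has played yet, so it reads as banter
    if speaker_won:
        return "taunt"              # the winner rubbing it in is "pretty rude"
    return "self-deprecating joke"  # the loser saying it is laughing at themselves

for phase, won in [("pre-game", None), ("post-game", True), ("post-game", False)]:
    print(f"{phase}, winner={won} -> {classify_gg_easy(phase, won)}")
```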

Another key drawback to a technology like Bleep, according to Cannings, is that it perpetuates the idea that it’s a gamer’s responsibility to deal with abuse, rather than that of the companies that own and manage the platforms where the abuse takes place. He argues that it pushes the problem of abuse and toxicity to the client side, and he notes that one major reason big tech companies don’t want to store the data isn’t privacy but cost. As he says, “it’s incredibly expensive. That’s why companies don’t want this responsibility. If you can imagine being on a multiplayer game, and every single audio stream has to go through speech-recognition technology, and then you probably have to start storing the profiles, in case you end up questioning people, asking is this a new person or are they adopting new identities in different games?”

In contrast to Cannings, Hart argues that “the idea that one can have a level of control over what you're hearing I find very compelling, because it puts some power into the hands of the individual. Ultimately, you need some agency over your experience as a player.” But he’s still cautious, saying that technologies like Bleep can have an impact, but they should only be part of a broader solution.

But that’s all academic. Would players even use a technology like Bleep? Bahriz says that despite the abuse he gets, he wouldn’t use it. He argues that people who want to be “a nuisance can probably find their way around” technology like Bleep, and even around moderation efforts like Riot’s audio capture. If normal players are confused by the technology at best, concerned by its limitations at worst, and many won’t use it even if it’s available, it’s clear that the gaming industry needs to adopt different—and perhaps additional—approaches to dealing with toxicity in gaming.

