
Let me ask you something specific.
Have you ever left an online match feeling genuinely worse than when you sat down? Not because you played badly. Not even because you lost. But because something happened in that game – a message, a voice line, a pattern of deliberate behaviour – that briefly made you feel like a lesser person?
If yes, you have company. A 2019 Anti-Defamation League study found that 74% of adult online gamers experienced some form of harassment during play. A 2022 analysis by the Kidas monitoring platform found toxic incidents in approximately 79% of monitored gaming sessions involving minors. These aren’t edge cases. They’re the expected background condition of online gaming.
Toxic behaviour in games is not primarily a cultural problem that the industry is trying to solve. It is, in significant part, a design outcome that the industry has business reasons to maintain. Understanding why requires looking past the public statements and into the systems.
Toxicity doesn’t emerge randomly. It concentrates in specific types of games under specific conditions. Understanding that pattern requires understanding what those games are designed to do.
Most competitive online games – League of Legends, Dota 2, Valorant, CS2, Overwatch – use matchmaking systems built around what’s called an Elo rating or MMR (Matchmaking Rating). The system’s goal is to match players of similar skill so that each match is approximately competitive. On paper: fair. In practice, this creates a specific psychological experience.
Dr. Jamie Madigan, author of Getting Gamers and researcher in games psychology, describes it this way: Elo-based systems are designed to produce roughly a 50% win rate regardless of how much a player improves. The better you get, the harder your opponents become. You are always at the edge of your competence, always facing maximum challenge, always one step from failure.
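The mechanic is easy to see in the standard Elo update rule. This is a simplified sketch, not any studio’s actual matchmaking code – real MMR systems are proprietary and more elaborate – but the core dynamic is the same:

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that player A beats player B under the Elo model."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def update(rating: float, expected: float, actual: float, k: float = 32) -> float:
    """Nudge a rating toward observed results; actual is 1 for a win, 0 for a loss."""
    return rating + k * (actual - expected)

# Matchmaking pairs players of (roughly) equal rating, so the expected
# score is always ~0.5. A player who improves simply climbs until they
# again face opponents who beat them half the time: the system converges
# everyone toward a 50% win rate by design.
print(expected_score(1500, 1500))  # 0.5 against an equal-rated opponent
```

The design choice isn’t the formula itself – Elo is a perfectly reasonable skill estimator – but the decision to always pair players at the point where the formula predicts a coin flip.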
This is not accidental. Game designers call this the “flow state” – the psychological sweet spot between too-easy (boring) and too-hard (frustrating) where engagement is highest. Keeping players in constant near-failure produces maximum engagement time.
But there’s a physiological cost: extended time in high-stress, high-stakes competitive environments elevates cortisol. Players in ongoing competitive losing streaks show measurable stress responses comparable to real-world social threat situations – this is documented in research from the University of Gothenburg examining League of Legends players.
Chronically elevated stress, combined with anonymity and the social pressure of team games where one player’s performance affects others, creates conditions that reliably produce aggressive behaviour. This is not a bug in competitive game design. It is an emergent feature of systems optimized for engagement.
Here’s where the analysis becomes uncomfortable, because it requires engaging with data that major studios don’t publicize.
Multiple analyses of player spending behaviour in free-to-play competitive games – including research cited in a 2021 Harvard Business Review piece on game monetization – indicate that players experiencing competitive stress, including those who engage in toxic behaviour, convert to paying customers at higher rates than calm, satisfied players.
The proposed mechanism: players experiencing anger and frustration seek ways to regain a sense of control and competence. Purchasing a better character skin, a cosmetic upgrade, a rank boost – these represent accessible expressions of agency in a situation where the player feels powerless. The purchase doesn’t improve their skill, but it provides a momentary sense of action.
This pattern – stress driving impulsive purchasing – is well-documented in consumer psychology broadly. It’s called “retail therapy” when it applies to offline shopping; in gaming it’s simply the normal state of a large portion of a studio’s paying audience.
The implication is significant: if reducing toxicity and competitive stress also reduces spending, studios face a genuine tension between player welfare and revenue. There’s no public acknowledgment of this tension from major publishers. But the pattern of half-measures, announced “improvements,” and continued baseline toxicity levels across major competitive titles over fifteen years suggests that this tension is resolved consistently in favour of revenue.
It’s fair to acknowledge that studios have made genuine attempts to address toxic behaviour. It’s equally fair to note that these attempts have consistently been implemented at scales insufficient to alter the structural dynamics.
Riot’s Tribunal (2011–2014) was the most ambitious early attempt: a player-driven system where the community reviewed reports and voted on punishments. The democratic appeal was real, but the implementation was flawed. The system became vulnerable to targeted harassment campaigns – coordinated groups reporting players they disliked. It showed demographic biases, with certain play styles (aggressive, unconventional) generating reports at higher rates than their actual toxicity warranted. Riot shut it down in 2014.
Automated chat filters arrived at most major studios around 2015–2018. These successfully reduced specific slurs and profanity in text chat. Players adapted within weeks, using misspellings, letter substitutions, and coded language. The most toxic communication migrated to voice chat, which proved substantially harder to moderate automatically. Automated filters addressed the symptom most visible to regulators while leaving the underlying behaviour largely intact.
Valorant’s voice chat recording (2021) represented a significant step: Riot began recording voice chat for use in harassment reports, with appropriate privacy disclosures. This had measurable effects – some forms of verbal abuse declined in Valorant compared to earlier Riot titles. But it raised legitimate privacy concerns, and similar systems remain absent from most major titles.
Overwatch’s Endorsement System (2018) was an attempt at positive reinforcement: after matches, players could commend others for good sportsmanship, shot-calling, or positive attitude. Research on positive reinforcement suggests this approach has genuine potential. In practice, participation rates remained low, and the system incentivized endorsement-farming behaviour rather than genuine culture shift.
The pattern: interventions are real, genuine, and consistently insufficient in scale and structural ambition. None address the design conditions that produce toxicity in the first place.
Researchers in game design and behavioural science have identified specific interventions with evidence behind them. Most remain unimplemented at scale.
Transparent reporting outcomes. Studies in procedural justice – the psychology of fairness – consistently show that people comply with and trust systems more when those systems communicate their outcomes clearly. In most games, when you report a player, you receive no feedback. You don’t know if the report was reviewed. You don’t know if any action was taken. This helplessness corrodes trust in the moderation system and reduces future reporting.
If systems reported back even a minimal outcome – “a player you reported received a 7-day suspension”, with no identifying information – reporting rates would likely rise and trust in the system would measurably improve. This is not technically complex. It’s a policy choice.
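A sketch of what such feedback could look like – a hypothetical schema, not any game’s real API – makes the point that the engineering is trivial:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Outcome(Enum):
    REVIEWED_NO_ACTION = "reviewed, no action taken"
    WARNING = "warning issued"
    SUSPENSION = "temporary suspension"

@dataclass
class ReportFeedback:
    report_id: str                       # ties back to the reporter's own report
    outcome: Outcome                     # what happened, without naming anyone
    duration_days: Optional[int] = None  # only set for suspensions

def notification(fb: ReportFeedback) -> str:
    """Render an anonymized closure message for the reporter."""
    msg = f"A player you reported received: {fb.outcome.value}"
    if fb.duration_days:
        msg += f" ({fb.duration_days} days)"
    return msg
```

Everything needed for closure – the report was seen, something happened – fits in a few fields, with no identifying information exposed. The barrier is policy, not implementation.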
Communication design. Apex Legends and League of Legends both implemented ping systems – spatial markers that convey tactical information without requiring text or voice chat. Research on these systems found they substantially reduced the need for direct communication while maintaining coordination quality. Less direct communication means fewer opportunities for toxic exchange.
The ping system doesn’t eliminate toxicity, but it demonstrates that tactical communication doesn’t require voice chat. Studios could offer competitive modes with limited or no text/voice chat without sacrificing game quality. Few do, because voice chat is often defended as a community feature even when its net effect is negative.
Non-zero-sum design. A proportion of toxicity is structurally generated by zero-sum competition: one team’s win requires another team’s loss. Cooperative modes, modes with individual rather than team performance metrics, modes where loss isn’t binary – these reduce the conditions that produce anger and blame.
Fortnite’s various cooperative limited-time modes consistently show lower toxicity rates than its competitive battle royale. This is not coincidental.
Shorter match times. Research by Kidas found that toxic incidents in gaming increase significantly in sessions exceeding 30 minutes. Long match formats (Dota 2 matches regularly run 40–60 minutes, with losses that are clear twenty minutes before they’re official) accumulate frustration. Match structure is a design choice.
The industry’s standard framing of the toxicity problem is: we’re fighting bad actors, we’re improving, progress takes time. This framing positions the studios as victims of bad player behaviour rather than as designers of environments that produce and sustain it.
But research on who is most affected by gaming toxicity is clear, and it complicates the “bad actor” framing considerably.
A 2022 report from the Anti-Defamation League found that women, LGBTQ+ players, and players of colour experience harassment at rates 20–30% higher than the overall population of online gamers. They also reported stopping play or avoiding certain games at higher rates due to harassment. These groups are not leaving because they’re unwilling to deal with some inevitable background friction. They’re leaving because the environment has been functionally optimized against their participation.
Young players are particularly affected. The Kidas monitoring data – drawn from actual gameplay sessions involving minors – found that in sessions with toxic interactions, players under 18 showed measurable shifts toward defensive or withdrawn communication in subsequent sessions. Repeated exposure to aggression has documented effects on adolescent social development.
These are not metrics that show up in quarterly earnings calls. They show up in research published by academics and pediatric psychologists. The industry’s executives are aware this research exists. The structural response – making competitive games less stressful, reducing anonymity, implementing robust voice chat moderation – remains absent at scale because it conflicts with engagement metrics.

The data point that undermines every “toxicity is inevitable in competitive games” argument: some games have significantly lower toxicity rates, and they achieved this through design choices.
Rocket League consistently ranks among the least toxic major competitive games despite being intensely competitive. Multiple design factors are cited by community researchers: the game is easy to understand and hard to blame teammates for (individual skill is visible and legible), matches are short, and the skill expression is clear enough that players can recognize their own errors rather than attributing losses externally.
Among Us achieved remarkable community warmth for a social deduction game premised on deception – a genre typically associated with aggressive behaviour. Three factors stand out: short sessions, voice chat as a game mechanic rather than an ambient feature, and a community that coalesced around humour rather than competition.
Deep Rock Galactic – a cooperative PvE shooter – has a community reputation for positivity so consistent that it became a meme. No PvP means no zero-sum competition. Cooperative success or failure distributes responsibility. The game’s tone – ironic military camaraderie – establishes a community culture from the start.
These games didn’t achieve their community cultures by accident. They achieved them by making design choices that shaped the conditions players would encounter.
Toxicity in online games is a design problem that has been framed as a culture problem because “culture” implies solutions are the community’s responsibility while “design” implies they’re the studio’s.
The evidence supports the design framing. The games with the lowest toxicity rates made structural choices that reduced conditions producing toxic behaviour. The games with the highest toxicity rates are also, with remarkable consistency, the games most dependent on maintaining maximum engagement and emotional investment.
This doesn’t mean every studio is cynically engineering player suffering for profit. It means that incentive structures create outcomes regardless of individual intent, and the incentive structures of major competitive gaming companies are not well-aligned with player welfare.
The question isn’t whether game companies are evil. It’s whether the business model that produces $50 billion annually in free-to-play revenue is compatible with the design changes necessary to meaningfully reduce toxicity.
The evidence of fifteen years suggests it isn’t. Not because it can’t be done, but because no studio has found it profitable enough to prioritize.
That’s the honest answer. Everything else is a press release.
What would you actually need to see from a major studio to believe they were seriously committed to reducing toxicity – rather than managing the PR around it?