Saintel Daily

If it Happened | We Covered it

Can Gaming Rules Be Created to Stop Online Harassment and Abuse?

Is it possible to come up with a set of rules that will help prevent online harassment or abuse? Where do you draw the line and what do you even consider harassment?

For many years now, gaming companies like Riot, Blizzard, and Twitch have been fighting online abuse and harassment.  But they haven’t been having as much luck as they would like.  Now, they’re trying something different – they are going to collaborate.  The Fair Play Alliance is a coalition of more than 30 companies – including Riot, Blizzard, Twitch, CCP, and Epic.  Their aim?  To share research and lessons learned about gaming communities in an attempt to cut down on this disruptive behavior.  According to Kimberly Voll, a senior technical designer at Riot, the goal is to create a more consistent set of behavior standards across companies.

This is a huge issue though.  There are a ton of people out there who are not playing by the rules.  But the rules themselves aren’t consistent from game to game.  The hope with this coalition is that developers won’t have to reinvent the wheel when it comes to creating online games.  Riot isn’t exactly the best example, as they have made their share of mistakes.  But that’s the point.  Games like League of Legends exemplify how difficult it is to bring communities back from the brink.  Ideally, the Fair Play Alliance will let companies learn directly from each other’s mistakes without stumbling into the same pitfalls.


How can they do this?  The first step in this process is a day-long summit at GDC hosted by the Fair Play Alliance.  And guess who is giving the keynote?  Voll herself.  Developers and creators from Activision, Epic, Supercell, and others will openly discuss research, issues they’ve faced, mistakes they’ve made and what they’ve learned in the process.

This is a solid first step, but beyond that, there isn’t a lot of teeth to this initiative.  And, according to Voll, there aren’t a lot of tangible deliverables yet either.  Which makes me wonder how effective this will be.  I’m not saying that it’s not a good first step, but when it comes to these things, having a solid plan is better than a solid first step.  Ideally, the alliance would like to share resources and develop a system that allows developers to reach out to knowledgeable individuals when they’re struggling to solve abuse or harassment related issues.


It kind of sounds like they’re just getting off the ground.  Their goals are lofty – like creating a consistent set of standards and rules across multiple multinational companies.  This will require a lot of effort to develop and implement.  Again, I’m not saying that these aren’t good ideas, it just seems like a really large-scale project that might need a narrower scope to be successful.

The big challenge this group faces is determining what good behavior looks like.  Or maybe, determining what bad behavior looks like.  More specifically, they have to figure this out on a global scale.  If you’re playing with your friends, and you want to call someone a horrible name because that’s what friends do, is that ok?  Taking that language to another group isn’t ok in most instances, so where do you draw the line?

This whole topic needs some extremely careful consideration.  I do look forward to seeing what they put together because I think the work is important, but I’m also skeptical that they can deliver on such a large-scale project.
