A couple of weeks ago, we brought you a story about how Twitter had verified Jason Kessler, the white supremacist who helped organize the “Unite the Right” rally in Charlottesville, Virginia this past August.  Understandably, this made a lot of people angry.  To the public, it looked as though Twitter was endorsing Kessler by verifying his account.  That wasn’t the case, but in response, Twitter promised to review its verification policies, and it has now rolled out new guidelines.  Will this make a difference?  As a result of the new policy, Kessler and a handful of other white nationalists have lost their blue checkmark verifications.  So did this make everyone happy?  Kind of.  But I’m not sure this is the solution.

The new guideline states that “Twitter reserves the right to remove verification at any time without notice.  Reasons for removal may reflect behaviors on and off Twitter.”  On and off Twitter?  That means you can do something awful in real life and be un-verified on Twitter.  While this might sound like an improvement, how does Twitter think it’s going to handle this?  After all, the company will now have to police its users even when they’re not on Twitter.  How exactly can it do that?  Honestly, that’s unclear.  And the wording isn’t accidental.  Twitter has gone pretty far with its approach.  On its website, it lists the following as reasons to un-verify someone:

Reasons for removal may reflect behaviors on and off Twitter that include:

  • Intentionally misleading people on Twitter by changing one’s display name or bio.
  • Promoting hate and/or violence against, or directly attacking or threatening other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or disease.
  • Supporting organizations or individuals that promote the above.
  • Inciting or engaging in harassment of others.
  • Violence and dangerous behavior
    • Directly or indirectly threatening or encouraging any form of physical violence against an individual or any group of people, including threatening or promoting terrorism
    • Violent, gruesome, shocking, or disturbing imagery
    • Self-harm, suicide
  • Engaging in activity on Twitter that violates the Twitter Rules.

In other words, if you’re a verified Twitter user and, in real life, you’re involved with a group that doesn’t follow these rules, you could now be punished by the company.  Is this fair?  On one hand, I want to say yes: if you’re involved in some kind of hate group, I don’t think you should have a Twitter verification.  On the other hand, I think this is a slippery slope.  Not because Twitter will be un-verifying white supremacists, but because Twitter will have to come down hard on people for participating in, or voicing their opinions on, a variety of social issues.  Does this make Twitter the moral police?  That’s an uncomfortable question, especially since Twitter has long positioned itself as a defender of free speech, often at the expense of users who feel they are being bullied or harassed.

Will this improve anything on the social media platform?  Maybe, maybe not.  It will be interesting to see where this goes and how far Twitter will be able to take it.  This also sounds like a big undertaking.  Will Twitter use some kind of algorithm to gather the information it needs, or will it rely on human reviewers?  If it’s the former, some accounts are bound to slip through the cracks, so perhaps a combination of the two is necessary.