
Facebook has some pretty weird policies, at least in the sense of how they're applied.  For example, they have a content policy that spells out when a post has crossed the line, but they don't seem to apply it consistently.  That means some posts that do cross the line make it through, while some that don't get caught.  I think that has a lot to do with the people responsible for applying the policy.  One reviewer might think something is racist and not allow it to be posted, while another reviewer looks at the same material and doesn't see it as racist.  Who is right?

I'm not sure there's a good answer for that, but if there is one positive thing coming out of this Facebook scandal, it's that we're getting to know more about how the company operates.  Facebook took the step of publishing its internal moderation policy.  The big question is: why didn't this happen sooner?  Why did it take this long to get released in the first place?

The Community Standards run about 27 pages, broken up into sections that deal with some of the biggest problems Facebook has to handle: violence and criminal behavior, user safety, objectionable content, integrity and authenticity, copyrighted material, and content-related requests.  Facebook says the standards are “designed to be comprehensive – content that might not be considered hate speech may still be removed for breaching our Bullying Policies,” and that they were developed with “input from our community and from experts in fields such as technology and public safety.”

Monika Bickert, VP of Global Product Management, had this to say:

“We have people in 11 offices around the world, including subject matter experts on issues such as hate speech, child safety, and terrorism. Many of us have worked on the issues of expression and safety long before coming to Facebook.  I worked on everything from child safety to counter terrorism during my years as a criminal prosecutor, and other team members include a former rape crisis counselor, an academic who has spent her career studying hate organizations, a human rights lawyer, and a teacher.”

That's all fine and well, but teams in 11 offices don't seem like nearly enough given the volume of content that could be posted.  What's interesting is that objectionable content covers a couple of different things.  It could be n**e or s**** content, but it could also be hate speech.  The content policies take a hard line on hate speech, defining it as “a direct attack on people based on what we call protected characteristics”.  Their list includes race, ethnicity, sexual orientation, gender, gender identity, and religious affiliation, among others.

Facebook then classifies anything that meets those criteria as Tier 1, Tier 2, or Tier 3.  Tier 1, for example, covers speech that supports death, disease, or harm, while Tier 3 covers calls to exclude or segregate a person or group of people based on the characteristics listed above.  Basically, Tier 1 is a full-out attack and Tier 3 is discrimination.

What does this mean?  Honestly, nothing yet.  Facebook is being more open and transparent, but that doesn't necessarily tell us whether the policy is actually being applied consistently.  Facebook has also unveiled an appeals process for anyone who has had content removed for a violation.  Given that Facebook's moderation decisions have been final in the past, this is a major improvement.  The interesting thing is that if your post was removed for hate speech, you might be able to appeal and get it back onto Facebook.  How does that work exactly?  Either way, all of this is interesting, and I want to see if anything comes of it in the long run.