
Facebook has over 2 billion users.  That’s almost 30% of the world’s population. How many of those users are fake profiles or duplicates is unknown.  Regardless, Facebook has a really difficult task: they have to police what users are doing to make sure it conforms to their terms and conditions.  Facebook does have a team of 7,500 content moderators, in addition to their algorithms, so they should be able to get this right, shouldn’t they? Both the content moderators and the algorithms scour through posts for content Facebook deems inappropriate, like terrorist material or images of child abuse.  But one thing remains unchanged – they don’t always get it right.  Part of this is due to their ambiguous guidelines.

The non-profit ProPublica has unearthed some pretty large cracks.  They sent Facebook a sample of 49 items containing hate speech.  Of those 49 items, Facebook’s reviewers made 22 mistakes.  In six other cases, Facebook actually blamed the users for not flagging the posts correctly. Who is going to openly admit that their posts are racist or sexist? Facebook then defended 19 of its decisions, some of which involved sexist, racist, and anti-Muslim content.


But since then, Facebook has apologized, stating: “We’re sorry for the mistakes we have made.  We must do better.”

But is that enough?  I mean, Facebook is notorious for making these kinds of mistakes, so they should already be doing better.  This isn’t the first time they’ve been criticized for inappropriate content being shown on their site.  It makes you wonder whether they truly mean what they’re saying or whether this is just lip service. That said, Facebook has indicated that it will increase its safety and security team from 7,500 to 20,000.  (Side note: if you’re looking for a job…) But will the increase in “man” power actually help them enforce their community standards?  That remains to be seen.  As of right now, though, Facebook deletes approximately 66,000 posts each week.

Like I said earlier, part of the reason posts don’t get deleted is Facebook’s ambiguous Community Standards.  While I won’t repeat exactly what was said, one person saw a graphic photo on Facebook that declared death to the members of a particular religious group.  That user flagged it as hate speech using Facebook’s reporting system.  The response? “We looked over the photo, and though it doesn’t go against one of our specific Community Standards, we understand that it may still be offensive to you and others”.  Excuse me? How is that not a violent threat against people based on their religious beliefs? It wasn’t until ProPublica brought this to Facebook’s attention that they actually took the post down.


It’s interesting to me that Facebook often ignores requests to delete hateful content that does violate their guidelines.  I honestly don’t understand how this can happen.  Facebook’s guidelines are very literal in defining a hateful attack, which means posts expressing bias against a specific group, but lacking explicitly hostile or demeaning language, often stay up.  Even if they’re using sarcasm, which tends to be some people’s defense of these posts.  Another post, which I won’t repeat because it’s offensive, painted African-American people in a very stereotypical light.  And yet Facebook kept that post up because it didn’t include a specific attack on a protected group.

Yet another post, which expressed exasperation with racial inequality, was removed.  This post essentially just stated that racial inequality exists.  Sure, the post used explicit language and pointed to a race that tends to get preferential treatment, but there was nothing explicitly offensive about it. Which makes you wonder: how biased are the content moderators, actually? In theory, every person can interpret rules and policies differently, which means two content moderators can see the same post and make different rulings on it.  To me, a better way to determine inappropriate content would be through a test. How exactly?  Well, the test would be based on specific criteria: if a post meets those criteria, Facebook removes it; if not, it stays up.  (A rough sketch of the idea is below.)  As long as individual moderators are left to make judgment calls, I think we’re going to stay in this position, without any improvement.
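To make that concrete, here’s a minimal sketch in Python of what a criteria-based test could look like. Everything in it is hypothetical: the `Post` structure, the phrase list, and the criteria themselves are illustrative stand-ins I’ve invented for this example, not Facebook’s actual rules or systems.

```python
from dataclasses import dataclass

# Hypothetical phrase list -- a real policy would be maintained by
# policy experts and be far more nuanced than hard-coded keywords.
VIOLENT_PHRASES = ["death to", "kill all"]

@dataclass
class Post:
    text: str
    targets_protected_group: bool  # assume an upstream classifier sets this

def violates_criteria(post: Post) -> bool:
    """Return True when the post meets every removal criterion.

    The point is that removal follows an explicit checklist rather
    than an individual judgment call, so two reviewers looking at
    the same post reach the same ruling.
    """
    contains_violent_threat = any(
        phrase in post.text.lower() for phrase in VIOLENT_PHRASES
    )
    return contains_violent_threat and post.targets_protected_group

# Usage: a photo declaring death to a religious group, like the one
# described above, fails the test and gets removed automatically.
flagged = Post(text="Death to <group>", targets_protected_group=True)
if violates_criteria(flagged):
    print("remove post")  # prints: remove post
```

The specific checks don’t matter much; the point is that the decision is deterministic, so the same post always gets the same ruling no matter who (or what) reviews it.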
