UK Puts Rules in Place to Start Regulating Social Networking Companies

You heard last week that Australia plans to regulate social networking platforms in an attempt to stop hate speech. You might also recall Mark Zuckerberg’s opinion piece in the Washington Post, which outlined the need for more regulation by governments throughout the world. So it should come as no surprise that the United Kingdom is looking to implement regulations that would make social networking platforms more accountable for the spread of “online harms”. In this case, “online harms” covers misinformation, terrorist propaganda, and content depicting child sexual abuse. While I certainly agree that social networking platforms have a role to play in stopping this kind of content, the question I’d like to ask is why those who create the misinformation aren’t being held to the same level of accountability.

The UK’s Home Office, along with the Department for Digital, Culture, Media and Sport, made this announcement to ensure measures are in place that would fine social networks, or block them entirely, if they violate these terms. Germany introduced similar legislation last year, giving social networking sites only 24 hours to take down “obviously illegal” content or face some serious fines. That kind of wording seems vague, so I’m definitely interested to see how they’re handling it. Does Facebook have its own list of what might be considered “obviously illegal”, or does the legislation spell that out? Without knowing specifically what that means, this could go either way.

I asked this question last week, and I will ask it again: is this kind of regulation even feasible? As I said in my previous post, it means that Facebook (for example) would have to have some kind of office in the UK specifically to oversee compliance with this legislation. While this might be technically feasible, it would also mean a huge influx of jobs within the UK (or Australia), which might not necessarily be a bad thing.

The Online Harms White Paper was published in advance of this announcement. It lists the various kinds of offensive content that UK regulators want to tackle, but it also suggests measures that companies should take to keep their platforms free of that content in the first place, including employing fact-checkers and promoting legitimate sources of news. We know that Facebook has already started to employ these kinds of tactics, given all the backlash it is facing from Congress in the United States. But that doesn’t necessarily eliminate the problem.

In the UK, there are plans to create a watchdog agency that would oversee social networking companies’ efforts to curb the spread of misinformation, though it’s possible that an existing government body will be given the authority to “police” tech firms instead.

The other big question I always ask in these instances is whether this particular legislation is contrary to an individual’s right to free speech. Further to that, will a regulatory authority actually be able to sanction companies that violate these rules effectively? If Facebook is doing its job, for example, it will take down offensive content before the authorities even know about it. How long do platforms have to remove the content? And if they don’t remove it, is there an appeals process to challenge the ruling on that particular content?

As I’ve said in my other posts, I think regulation is necessary, but as we’re seeing in this particular case, it might be difficult to monitor, police, and effectively fine. This approach relies entirely on punishment, and I wonder whether a positive, incentive-based approach might move the needle further in the direction we’re trying to go.
