Facebook announced that they are paying for the creation of their own deepfake videos, which will be used to build a data set. But does this make any sense? I mean, is this a fight-fire-with-fire type scenario? Facebook hopes people in the artificial intelligence community will use the data set to come up with new ways to spot these kinds of videos online, and ultimately to help stop their spread. Notably, the videos will feature paid actors rather than real people’s likenesses.

Deepfake videos use artificial intelligence to realistically show people doing and saying things they didn’t actually do or say. The stakes feel especially high as the 2020 US presidential election approaches, and it makes sense that politicians and government officials are worried about deepfake videos being created to mislead voters.

Facebook is commissioning its own deepfake videos as part of a competition that they are sponsoring. The competition, known as the Deepfake Detection Challenge, will offer grants and awards in an effort to draw more participation from artificial intelligence researchers. In fact, Facebook is putting up more than $10 million and is working with a number of organizations on the competition, including Microsoft, MIT, the University of California, Berkeley, and the Partnership on AI.

So how does this all work? The videos will be made with paid actors who understand that they are taking part in the creation of manipulated video. Facebook plans to release the data set in December.

And what is the purpose of this competition? Facebook is hoping that new technology will come out of all of this; more specifically, they’re looking for a system that can determine whether a video has been altered. There is hope on this front: researchers and a couple of startups are already working on the problem, and there are already a number of methods for spotting deepfakes, such as looking for weird shadows and strange visual artifacts in a video. But detection becomes increasingly difficult as the technology behind deepfakes quickly evolves.
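To make the idea concrete, here is a minimal sketch of how a frame-level detector might be structured: sample frames from a video, score each one with a binary real-vs-fake classifier, and average the scores. To be clear, this is not Facebook’s system or any startup’s actual product; the model below is an untrained placeholder that would need to be trained on labeled footage (like the data set Facebook plans to release), and the file name and sampling rate are made up for illustration.

```python
# A minimal sketch of frame-level deepfake detection, NOT a production
# system: sample frames from a video, score each with a binary
# real-vs-fake classifier, and average the per-frame scores.
# The ResNet-18 here has random weights -- it is a placeholder that
# would first need to be trained on labeled real/fake footage.
import cv2                      # pip install opencv-python
import torch
import torch.nn as nn
from torchvision import models, transforms

model = models.resnet18()                       # untrained placeholder backbone
model.fc = nn.Linear(model.fc.in_features, 1)   # single "fake" logit
model.eval()

preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def fake_probability(video_path: str, every_nth: int = 30) -> float:
    """Average the classifier's 'fake' probability over sampled frames."""
    capture = cv2.VideoCapture(video_path)
    scores, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % every_nth == 0:              # roughly one frame per second at 30 fps
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            batch = preprocess(rgb).unsqueeze(0)
            with torch.no_grad():
                scores.append(torch.sigmoid(model(batch)).item())
        index += 1
    capture.release()
    return sum(scores) / len(scores) if scores else 0.0

# Hypothetical usage: print(fake_probability("suspect_clip.mp4"))
```

Real detectors are far more sophisticated, looking at things like temporal consistency across frames and audio-visual mismatches, but the basic shape is the same: turn “does this look manipulated?” into a score you can act on.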

The larger question that we should be considering is: who is responsible for “solving” this problem? We’ve had this conversation before. Should social media platforms be responsible for stopping or preventing certain activity? In my opinion, the answer is twofold. To start, we need regulations, ones that outline who is responsible for what. The second part is a common understanding of what is morally acceptable, and that is going to be much harder to reach. For example, maybe I think deepfakes are morally wrong, but my neighbor doesn’t. Is it okay for him to create a deepfake video? Again, in my opinion, the answer is no, but how can we police that?

As of right now, the only thing we have is the tech giants deciding when and where to draw the lines. What will it take for someone to finally say that it’s time for real regulation? Perhaps we will see more of it after the 2020 election, but until then, we have to leave it to the tech giants.
