We have joked on here about Elon Musk and the robot uprising. But the truth is, there is real potential for this technology to be used in nefarious ways. What that is, or what it will actually look like, is anyone’s guess. I think the fear is that there are a lot of bad people in the world, and what they choose to do with their money (or their country’s money, if they’re a leader) can be a scary thought. Which is why I was happy to hear that 2,400 individuals and 160 companies and organizations signed a pledge declaring that they will:

“neither participate in nor support the development, manufacture, trade or use of lethal autonomous weapons.”

What does that really mean, though? To back up a bit: the Future of Life Institute made this announcement at the International Joint Conference on Artificial Intelligence (IJCAI). The signatories come from 90 countries, and they are also calling on governments to pass laws against these weapons. Before I go much further, I’d like to pose another question. Does a signature on this declaration mean anything in the end? On one hand, it signals to the world that a given company or country will not participate in the development of these weapons. But what if someone else comes into power down the road? Perhaps they have a different view on it. I hate talking in worst-case scenarios, but I think we have to in this particular case.

Google DeepMind and the XPRIZE Foundation are among the groups who have signed the declaration. DeepMind co-founders Demis Hassabis, Shane Legg and Mustafa Suleyman have signed as individuals. And let’s not forget about Elon Musk. Of course he was there signing, because, like I’ve said many times before, he is very worried about the robot uprising. But why are they signing this pledge now?

Many companies are facing backlash over their technologies and how they’re providing them to government and law enforcement agencies. Google has come under fire for its Project Maven Pentagon contract, which provides AI technology to the military to help flag drone images that require additional human review. Microsoft has been called out for providing services to Immigration and Customs Enforcement (ICE). So it makes sense that organizations are now pledging not to use their technology for bad things. Or at least for worse things.

The pledge says:

“Thousands of AI researchers agree that by removing the risk, attributability and difficulty of taking human lives, lethal autonomous weapons could become powerful instruments of violence and oppression, especially when linked to surveillance and data systems.”

Max Tegmark, president of the Future of Life Institute, had this to say:

“I’m excited to see AI leaders shifting from talk to action, implementing a policy that politicians have thus far failed to put into effect. AI has huge potential to help the world — if we stigmatize and prevent its abuse. AI weapons that autonomously decide to kill people are as disgusting and destabilizing as bioweapons, and should be dealt with in the same way.”

Tegmark isn’t wrong, so I’m happy to see that steps are being taken. But like I said earlier, simply signing this pledge doesn’t necessarily mean that a company won’t do something down the road. Which is why it’s great to see companies like Google already releasing their own set of principles to guide the company’s ethics around AI technology. Microsoft has stated that its work with ICE is limited to email, calendar, messaging and document management, and specifically doesn’t include any facial recognition technology. Microsoft is also working on a set of guiding principles for its facial recognition work. For better or for worse, these tech giants run the world right now. Hopefully, they use that power wisely.