When it comes to technology, nothing should surprise you.  Nothing surprises me these days, which is why, when I learned what a group of students was doing, it didn’t faze me.  Is it troubling?  It certainly is, but it isn’t surprising.  In a New York Times report, a group of students from the University of California, Berkeley and Georgetown University describes how they were able to outwit smart speakers.  The team embedded subliminal commands into white noise – commands that are inaudible to human ears.  Because a human can’t hear anything, we don’t know that it’s going on.  But Siri or Alexa can pick up on these messages, and well… the devices can be controlled.

The hidden messages could be used to switch a device into airplane mode, open up web pages or even add items to a shopping list.  Don’t get this wrong – these students aren’t doing this because they’re trying to do anything improper.  They want to demonstrate what is possible, in the hopes that researchers will fix the weakness.  The key here is that the technology isn’t being hacked, but rather, it’s being fooled.  Cybersecurity experts use the term “adversarial example” for an input that has been deliberately altered so that an AI system recognizes it as something other than what a human perceives.

How are they doing this? With audio attacks, the researchers are exploiting the gap between human and machine speech recognition. Speech recognition systems typically translate each sound to a letter, eventually compiling those into words and phrases. By making slight changes to audio files, researchers were able to cancel out the sound that the speech recognition system was supposed to hear and replace it with a sound that would be transcribed differently by machines while being nearly undetectable to the human ear.
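
To make that idea concrete, here is a minimal sketch of an adversarial example in Python. Everything in it is hypothetical: a random linear scorer stands in for the deep speech-recognition network, and a single “fast gradient sign” step stands in for the researchers’ more careful audio optimization. The point it illustrates is the one above: a per-sample change far too small for a person to notice can still flip what the machine “hears”.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a speech recognizer. Real assistants use deep
# networks; this linear scorer is entirely hypothetical, but the attack
# principle is the same: follow the model's gradient to a perturbation
# that changes its output while staying too small to hear.
n = 16_000                         # one second of 16 kHz audio
w = rng.standard_normal(n)         # hypothetical model weights

def hears_command(x):
    """The toy model 'hears a command' when its score is positive."""
    return float(x @ w) > 0.0

# Benign clip: low-level white noise the model should ignore.
audio = 0.1 * rng.standard_normal(n)
if hears_command(audio):           # make sure we start from a "silent" clip
    audio = -audio

# FGSM-style step: for a linear scorer, the gradient of the score with
# respect to the input is just w, so the strongest perturbation under a
# per-sample cap eps is eps * sign(w). At 5% of the signal's amplitude,
# the doctored clip would still sound like plain static to a person.
eps = 0.005
adversarial = audio + eps * np.sign(w)

print("benign clip triggers model:   ", hears_command(audio))        # False
print("doctored clip triggers model: ", hears_command(adversarial))  # True
print("largest per-sample change:    ", np.abs(adversarial - audio).max())
```

Against a real deep network the gradient step would be repeated many times under a perceptual constraint rather than a simple amplitude cap, but the mechanics are the same: the model isn’t hacked, it’s steered.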

According to the research firm Ovum, smart assistant devices, like Amazon’s Alexa, are set to outnumber people by 2021.  More than half of all American households will have at least one smart speaker by then, so it’s important to get a handle on these things before it’s too late.  And when I say “too late”, I simply mean before someone has figured out how to do it and used it in a bad way. Amazon said that it doesn’t disclose specific security measures, but that it has taken steps to ensure its Echo smart speaker is secure. Google said security is an ongoing focus and that its Assistant has features to mitigate undetectable audio commands. Both companies’ assistants employ voice recognition technology to prevent devices from acting on certain commands unless they recognize the user’s voice.
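
That voice-match gate is easy to picture in code. The sketch below is purely illustrative, and every name in it is an assumption: real assistants compute a speaker embedding with a trained neural network, whereas here a fixed random projection stands in for that model. What it shows is the shape of the defense – a sensitive command is acted on only when the incoming audio’s “voiceprint” is close enough to the owner’s enrolled one.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical voice-match gate. Real assistants derive a "voiceprint"
# with a trained neural speaker-embedding model; here a fixed random
# projection stands in for that embedding step.
P = rng.standard_normal((64, 16_000))

def embed(audio):
    """Map a one-second waveform to a unit-length voiceprint vector."""
    v = P @ audio
    return v / np.linalg.norm(v)

owner_audio = rng.standard_normal(16_000)   # clip recorded at enrollment
enrolled = embed(owner_audio)

def allow_command(audio, threshold=0.8):
    """Act on a sensitive command only if the voiceprint matches the owner's."""
    return float(embed(audio) @ enrolled) >= threshold

# The owner speaking again (same voice, slight variation) passes the
# gate; anyone else's audio lands near zero similarity and is refused.
same_speaker = owner_audio + 0.1 * rng.standard_normal(16_000)
stranger = rng.standard_normal(16_000)

print(allow_command(same_speaker))   # True
print(allow_command(stranger))       # False
```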

Currently, there is no law against broadcasting subliminal messages to humans.  Isn’t that kind of funny to you?  In a society as litigious as ours, you’d think there would be a law for everything.  The FCC discourages the practice as “counter to the public interest”, but it doesn’t say that you can’t do it.  And that’s just for humans.  We’re talking about machines here, so try getting a law around that one.  That said, courts have ruled that subliminal messaging may constitute an invasion of privacy, but again, the law hasn’t been extended to machines, such as these digital assistants.  This will certainly have to play out in the courts, but I am happy to see research being conducted into what this could do if left unchecked.