I wrote an article last week about AlphaGo and AlphaGo Zero, the Go-playing programs that used machine learning to beat humans at the game. Here I want to explore the technology that made all of that possible; if you want to read more about that particular battle, check out my previous post. Google is making progress in the field of machine learning at a startling rate. Its AutoML recently made jaws drop with its ability to generate its own neural networks. And DeepMind, the Google subsidiary behind both versions of AlphaGo, has built a system that can now teach itself better than the humans who created it ever could.
The version of AlphaGo that beat the world's best ran on 48 of Google's AI processors (TPUs), and it was trained on data from thousands of human Go matches. That's right, I said thousands. When it was developed, it already had a pretty decent understanding of the game. Over time, and with some help from humans, it learned the nuanced strategies it needed to succeed, which is exactly what happened when it defeated the world's top human player. This handed us (or them, as it may be) another instance of AI supremacy. And Go is an extremely difficult game; some say it makes chess look like checkers.
Google decided that AlphaGo wasn't good enough, which is where AlphaGo Zero came in. In fact, it was so good that it was able to defeat AlphaGo, literally at its own game, after only 40 days of training. Forty days, friends. I could make biblical jokes here, but I'll refrain. The shocking part, though (and this will blow your mind), is that AlphaGo Zero runs on only four AI processors. Four! Further, the only data it was given was the rules of the game. No one taught it how to play or gave it thousands of matches to study, which makes this even more incredible. That's how we get to the idea of the singularity.
The singularity, for those of you who don't know, is the hypothesis that the invention of artificial superintelligence will trigger runaway technological growth, resulting in unforeseeable changes to human civilization. According to this hypothesis, an upgradable intelligent agent (such as a computer running software-based artificial general intelligence) would enter a "runaway reaction" of self-improvement cycles, with each new and more intelligent generation appearing more and more rapidly, causing an intelligence explosion and resulting in a powerful superintelligence that would qualitatively far surpass all human intelligence.
DeepMind had this to say about AlphaGo Zero:
This technique is more powerful than previous versions of AlphaGo because it is no longer constrained by the limits of human knowledge. Instead, it is able to learn tabula rasa from the strongest player in the world: AlphaGo itself.
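To make that "learning from itself" idea concrete, here is a minimal sketch of self-play learning, using tic-tac-toe as a toy stand-in for Go. Everything in it (the value table, the learning rate, tic-tac-toe itself) is my own illustration, not DeepMind's actual system, which pairs deep neural networks with Monte Carlo tree search. The core loop is the same in spirit, though: one agent plays both sides and nudges its evaluation of each position toward the game's final outcome.

```python
import random

# All eight winning lines on a 3x3 board.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return "X" or "O" if that player has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

# Learned value of each board state, from "X"'s perspective (1.0 = X wins).
values = {}

def state_value(board):
    return values.get("".join(board), 0.5)   # unseen states start neutral

def choose_move(board, player, eps):
    """Epsilon-greedy move selection against the current value table."""
    moves = [i for i, s in enumerate(board) if s == " "]
    if random.random() < eps:
        return random.choice(moves)          # explore a random move
    def after(m):                            # value of the board after move m
        board[m] = player
        v = state_value(board)
        board[m] = " "
        return v
    # "X" seeks high-value states, "O" seeks low-value ones.
    return (max if player == "X" else min)(moves, key=after)

def self_play_game(eps=0.2, lr=0.3):
    """One game where the agent plays both sides, then updates its values."""
    board, player, visited = [" "] * 9, "X", []
    while True:
        board[choose_move(board, player, eps)] = player
        visited.append("".join(board))
        w = winner(board)
        if w or " " not in board:
            target = 1.0 if w == "X" else 0.0 if w == "O" else 0.5
            # Nudge every visited state toward the game's final outcome.
            for s in visited:
                old = values.get(s, 0.5)
                values[s] = old + lr * (target - old)
            return w
        player = "O" if player == "X" else "X"

random.seed(0)
for _ in range(20000):
    self_play_game()
print(f"learned values for {len(values)} positions")
```

The agent never sees a human game; its only inputs are the rules (legal moves and the win condition) and the outcomes of its own matches, which is the tabula rasa setup the quote above describes.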
The AI plays Go against itself, improving with every single match. Both versions of the machine play the game at a level that's considered superhuman. The speed with which Google's AutoML and DeepMind have taken machine learning to the next level is incredible. It's also wonderful and terrifying at the same time. Maybe we've been too hard on Elon Musk over the last few years. He believes that we should be taking AI more seriously, and perhaps this is why.