A group of AI experts from the University of Nottingham and Kingston University has created a way to build a 3D image from a two-dimensional photo of a face. This is actually really interesting. The researchers trained a convolutional neural network to perform the task by feeding it huge amounts of data on people's faces. What's fascinating is that the software was able to guess what a face would look like from a previously unseen picture, including parts of the face that aren't visible in the photo at all, since the photos are only two-dimensional.
If you're interested in trying this with your own face, you can head over to their 3D Face Reconstruction site and see what you can do. This is all pretty incredible, in my opinion. And while you might think it's just a photo conversion, it goes much deeper. Maybe I should start by explaining what some of these terms actually mean, to give you a better picture of where this could go in the future.
AI uses machine learning to train algorithms on data, and those algorithms then provide intelligent functions. In other words, the machine needs to learn from examples what it can and can't do before it can produce an intelligent outcome. Make sense?
When we want AI to get better at something, we create a neural network. These networks are loosely modeled on the human brain and nervous system. They use stages of learning to give AI the ability to solve complex problems by breaking them down into levels of data. The first level of the network may only worry about a few pixels in an image file and check for similarities in other files. Once that initial stage is done, the neural network passes its findings to the next level, which tries to understand a few more pixels and maybe some metadata. This process continues at every level of the network.
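To make that level-by-level idea concrete, here's a minimal sketch of a layered network in plain Python with NumPy. The layer sizes and weights are made-up placeholders (nothing here is trained); the point is just how each level takes the previous level's output and transforms it into something more abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Simple activation: keeps positive signals, zeroes out the rest.
    return np.maximum(0, x)

# Fake "image": a 28x28 grayscale picture flattened to 784 pixel values.
pixels = rng.random(784)

# Three made-up levels: 784 pixels -> 64 features -> 32 features -> 10 scores.
w1 = rng.standard_normal((784, 64)) * 0.01
w2 = rng.standard_normal((64, 32)) * 0.01
w3 = rng.standard_normal((32, 10)) * 0.01

h1 = relu(pixels @ w1)   # first level: looks at raw pixel patterns
h2 = relu(h1 @ w2)       # next level: combines those patterns into features
scores = h2 @ w3         # final level: one score per possible answer

print(scores.shape)
```

Each `@` is one level handing its findings to the next, exactly as described above; a real network would have learned weights instead of random ones.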
Deep learning is what happens when a neural network with many layers gets to work. As the layers process the data, the AI gains a basic understanding. You might be teaching your AI to recognize cats, but once it learns what paws are, it can apply that knowledge to a different task. Deep learning means that instead of just recognizing what something is, the AI begins to learn "why".
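That "learn it once, reuse it elsewhere" idea is often called transfer learning, and it can be sketched in a few lines. This is a toy illustration, not a real trained model: pretend the lower-layer weights were learned on a cat-recognition task, and note that only the final layer is new for the second task.

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    return np.maximum(0, x)

# Pretend these lower layers were already trained to recognize cats
# (in reality they'd hold learned values, not random ones).
w1 = rng.standard_normal((784, 64)) * 0.01
w2 = rng.standard_normal((64, 32)) * 0.01

def features(image):
    # Shared feature extractor: edges, textures, paw-like shapes...
    return relu(relu(image @ w1) @ w2)

# A new task (say, 5 new categories) reuses those same features;
# only this top layer would need to be trained from scratch.
w_new_task = rng.standard_normal((32, 5)) * 0.01

image = rng.random(784)
scores = features(image) @ w_new_task
print(scores.shape)
```

The design point is that `features` is written once and shared: the knowledge in the lower layers transfers, and only the small top layer changes per task.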
Now you can understand why Elon Musk thinks robots are going to take over society in the future. I've digressed a little from the original intent of this post. AI, in general, is changing our world rapidly. The researchers have taken something simple and given us something incredible. Or at least, I think they have. And while I'm not convinced this particular example will yield any major results, there's no denying what AI will do for us, especially when it comes to neural networks and deep learning. These researchers have given us an everyday, fun example that we can all relate to and understand. But the truth is, AI is going to keep providing us with easier ways to do things.
I also think a lot of people don't really know what AI is embedded in, or what it could be embedded in. We typically talk about AI being used in something like Google Photos, where it identifies parts of our images. For example, recognizing that 5 of my 10 photos have mountains in the background. That's AI at work, and it's making my life easier, but the technology is used in so many other ways. I like to showcase examples like this because most people can understand and relate to them. But people often don't see how AI is used in other cases, and therefore don't have a good understanding of what it can do now, or what it might be able to do in the future.