Artificial intelligence has been around for quite some time now. Many industries use it in one form or another, and most of us don't even realize we're relying on it. Google, however, is bringing it to the forefront. During the Google I/O conference this week, Google made several announcements outlining where AI is going, and the short answer is: everywhere. It dominated the keynote, along with Google's stated intention to become an "AI first" company. I wrote a couple of posts this week about some of their more interesting initiatives, and AI is involved in all of them.
So what does this mean? I think it means it's going to be easier for us to do things. Maybe that's an oversimplification, but when you get down to it, what else is AI for? For example, cities use AI to identify infrastructure issues: a vehicle can be equipped with a camera that automatically detects potholes. That makes people's lives a bit easier, and more efficient. Apply that line of thinking to what Google is doing, and the reach of AI could be endless. Google has its fingers in a lot of different pots.
But the pothole-identifying version of AI isn't what people want, or at least not most consumers, even though that kind of technology is helpful nonetheless. What do I mean? Google wants to apply machine learning to machine learning itself: it wants machines to build and train their own models, which would speed up the development of new systems. Can this be done? That remains to be seen, but the fact that Google has this on its horizon is huge, and frankly mind-blowing. Google already uses machine learning in many, if not all, of its products, so it doesn't seem like that big of a leap.
I'm not an expert on the development of these systems, but my guess is that this would put an end to a lot of tedious engineering work. For example, a lot of development time goes into teaching a system to recognize what's in an image. Instead of someone sitting down for hours and hours writing code around that task, the machine would somehow do it for them. I say somehow because I have no idea how this works, but it is extremely interesting and fascinating. And like I said, the reach could be endless for Google.
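One way to picture "machines building their own models" is a search loop that tries many candidate models and keeps whichever performs best, with no human hand-writing the rule. The sketch below is a deliberately tiny illustration in plain Python: the "model" is just a threshold on a number, and the data is invented for the example. Real systems (neural architecture search, AutoML, and whatever Google is actually building) are vastly more sophisticated; this only shows the shape of the idea.

```python
import random

random.seed(0)

# Toy data: the "true" rule (unknown to the search) labels x as 1 when x > 0.6.
xs = [random.random() for _ in range(200)]
labeled = [(x, 1 if x > 0.6 else 0) for x in xs]
train, valid = labeled[:150], labeled[150:]

def accuracy(threshold, examples):
    # Fraction of examples this candidate "model" (a simple cutoff) gets right.
    return sum((1 if x > threshold else 0) == y for x, y in examples) / len(examples)

# The "machine building the model": try 101 candidate thresholds and keep
# whichever scores best on the training data -- no human picks the rule.
best = max((t / 100 for t in range(101)), key=lambda t: accuracy(t, train))
print(f"learned threshold: {best:.2f}, validation accuracy: {accuracy(best, valid):.2f}")
```

The search recovers a cutoff near 0.6 on its own, which is the whole point: the human supplies examples and a way to score candidates, and the machine does the model-picking.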
As for where AI currently exists, we're already seeing it in Google Photos and the upcoming Google Lens, and that's just from a consumer perspective. Most people don't necessarily care about, or even understand, how these products are developed, or what kind of effort goes into creating them. These consumer-facing products are great for Google because they give the company an edge: people can see how Google Lens might make their lives easier, so they'll use it. It's tangible; people can see how it works and then apply it to their lives. The pothole example, by contrast, isn't very glamorous, or even something the average person would want. But in my opinion, those kinds of examples are more important than Google Lens.
I will be watching closely for future developments of AI on AI, as I am extremely curious to see how this kind of technology gets applied. I recently talked with colleagues about how AI could be used in my industry, where it doesn't get used that often, and it was an interesting conversation because people couldn't wrap their heads around the technology enough to understand its uses and, ultimately, its benefits. Which is why I'm eager to see what Google ends up doing that's a bit more tangible for consumers to understand.