MIT and Microsoft Team Up to Prevent Self-Driving Car Accidents

As you know, technology can change rapidly. In some cases, it’s a here-today, gone-tomorrow situation. Other technology needs time to evolve. Think about phone technology, for example: if we had never started with the landline, we might never have seen the first cell phone, let alone today’s high-tech iPhones. That’s why I think we need to take self-driving car technology with a grain of salt. It needs time to evolve. That’s not to say we can’t challenge the tech in a way that pushes engineers and designers to make it better, and it doesn’t mean we have to accept the first self-driving car as is. What it does mean is that we should treat successes with the same level of importance as failures.

Up until now, one of our biggest concerns with self-driving cars has been that they still make mistakes. This has a lot to do with the training of the car’s artificial intelligence, which can only account for so many situations. It’s like with humans: if you’ve never had a terminal illness, you won’t necessarily know how to react when you find out that your partner does. Does that make you a bad person? No, it simply means that you haven’t (thankfully) had those experiences before, and your brain doesn’t know what to do with them.

I’m not sure that I can help you in that situation, but MIT is working with Microsoft to help with the AI issue. In fact, they’ve developed a model that can catch virtual “blind spots,” as MIT describes them. The approach has the AI compare a human’s actions in a given situation to what it would have done itself, and it alters its behavior based on how closely the two match. If an autonomous car doesn’t know how to pull over when an ambulance is racing down the road, it could learn by watching a real driver move to the side of the road.
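To make the idea concrete, here is a minimal sketch of that comparison step. All names here (`find_blind_spots`, `naive_policy`, the state labels) are hypothetical illustrations, not the actual MIT/Microsoft implementation: the system flags any situation where the human demonstrator’s action differs from what the AI would have chosen.

```python
# Hypothetical sketch: flag "blind spot" states by comparing a learned
# policy's choices against recorded human demonstrations.

def find_blind_spots(demonstrations, policy):
    """Return the states where the human acted differently from the policy."""
    blind_spots = set()
    for state, human_action in demonstrations:
        if policy(state) != human_action:
            blind_spots.add(state)
    return blind_spots

def naive_policy(state):
    # A toy policy that was never trained to yield to emergency vehicles.
    return "keep_lane"

demos = [
    ("clear_road", "keep_lane"),        # policy and human agree
    ("ambulance_behind", "pull_over"),  # human pulls over; policy would not
]

print(find_blind_spots(demos, naive_policy))  # {'ambulance_behind'}
```

In a real system the “state” would be a rich sensor reading rather than a string, but the principle is the same: disagreement with the human marks a gap in the AI’s training.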

This model would also work with real-time corrections. If the AI stepped out of line, a human driver could take over and indicate that something was wrong. Again, in the case of moving out of the way for an ambulance, all the person has to do is pull the car over. Pretty simple, in a lot of ways.
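A rough sketch of that takeover loop might look like the following. Again, the function and state names are my own illustrative assumptions, not the published system: whenever the human overrides the AI, the mismatch is logged as a correction the model can learn from.

```python
# Illustrative real-time correction loop (hypothetical names throughout).

def supervised_drive(policy, states, human_overrides):
    """Run the policy over a sequence of states; log every human takeover."""
    corrections = []
    for state in states:
        planned = policy(state)
        # The human intervenes only in states they consider mishandled.
        actual = human_overrides.get(state, planned)
        if actual != planned:
            corrections.append((state, planned, actual))
    return corrections

def naive_policy(state):
    return "keep_lane"

# The driver takes over only when an ambulance approaches.
overrides = {"ambulance_behind": "pull_over"}
log = supervised_drive(naive_policy, ["clear_road", "ambulance_behind"], overrides)
print(log)  # [('ambulance_behind', 'keep_lane', 'pull_over')]
```

Each logged triple records where the AI went wrong, what it planned, and what the human actually did, which is exactly the training signal the correction approach needs.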

Further, the researchers even have a way to prevent the driverless vehicle from becoming over-confident and marking every instance of a given response as safe. A machine learning algorithm not only identifies acceptable and unacceptable responses, but uses probability calculations to spot patterns and determine whether a behavior is truly safe. Even if an action is correct 90% of the time, the remaining cases can still reveal weaknesses that researchers need to address.
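As a simple hedged sketch of that idea (the function name, threshold, and labels are my assumptions, not the actual algorithm): instead of declaring a behavior safe after a few good outcomes, the system estimates the fraction of observations in which the behavior was flagged as unacceptable, so even a mostly-correct action can still surface as a blind spot.

```python
# Hypothetical probability-based safety check: a behavior that is right
# most of the time can still be flagged if any errors were observed.
from collections import Counter

def blind_spot_probability(labels):
    """labels: list of (state, is_acceptable) observations -> error rate per state."""
    total = Counter(state for state, _ in labels)
    unsafe = Counter(state for state, ok in labels if not ok)
    return {state: unsafe[state] / total[state] for state in total}

# The same maneuver observed ten times: correct in 9, unacceptable in 1.
labels = [("merge_lane", True)] * 9 + [("merge_lane", False)]

probs = blind_spot_probability(labels)
print(probs["merge_lane"])  # 0.1 -- correct 90% of the time

# Flag anything whose error rate exceeds a (hypothetical) tolerance.
flagged = {state for state, p in probs.items() if p > 0.05}
print(flagged)  # {'merge_lane'}
```

The point of the aggregation is exactly what the article describes: a 90% success rate is not treated as proof of safety, because the probability calculation keeps the 10% failure rate visible.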

This technology isn’t quite ready to be tested in the field just yet, and that’s OK. As I said earlier, I think it’s important for technology to evolve as we look for improvements. So far, scientists have only tested the model with video games, where the parameters are limited and the conditions are relatively ideal. But if it works, it will take self-driving car technology a long way toward becoming practical. More importantly, this kind of technology could help prevent accidents and keep passengers out of harm’s way.