Over the last few years, we have been presented with a number of ethical dilemmas related to technology.  Every time a new technology is developed, it raises all kinds of questions we need to ask.  Technology has incredible capabilities and is shaping the world we live in, and the way we access it.  We're already seeing self-driving cars, and it makes you wonder what our future will look like.  It's easy for me to pose these questions whenever a new technology comes forward, but the bigger question is: who gets to decide what is ethical when it comes to technology?

It's not that easy to just appoint someone the ethical technology czar. (I hope Donald Trump doesn't get his hands on that phrase.) We have to consider a few things before we can decide who is qualified to make these decisions.  To start, we have an incredibly slow and arduous legislative process.  The process is intended to be that way, but when it comes to technological advancement, it doesn't move fast enough.  That means an ethical decision often has to be made before a bill even hits the Senate floor.


That's only one challenge; another is the balance of power.  We need to be careful not to tip the scales.  If one class of people, or one country, gets access to an extremely powerful or advanced technology, it could result in inhumane levels of inequality, or even war.  And if a single authority is allowed to make every ethical decision about tech, those decisions could unfairly work in its favor, at the expense of everyone else involved.

How a person makes a decision is extremely important.  Ethics can be subjective: what I think is morally right, another person may disagree with completely.  Decisions need to be fair.  So who is going to be able to make an educated decision?  And oh boy, how do we even measure education in this instance?  Lastly, we need to consider the consequences of these decisions across multiple areas.  A decision might safeguard human life, but we also need to think about human health, human psychology, and the general wellbeing of our planet.


This article isn't intended to offer a solution to this problem.  Rather, I'm hoping to highlight the fact that it is a problem, and that it will only become more serious over the years if it isn't addressed soon.  Below is a list of people who could make these decisions if they were given the authority.  Out of this list, who would you trust to make a judgment call on robots?

  • Scientists
  • Inventors and entrepreneurs
  • Regulators
  • The public at large
  • An external body


When it comes to making these decisions, I don't think it's as clear-cut as we might hope.  Sure, any one of these groups could make an informed decision, but I think it needs to be a combination of more than one of them.  I'm not here to say what we should do in this instance, as I don't think we have all the information to make that call.  But I do think we should consider what will happen if no one makes these decisions for us.