Google Takes Steps Toward Meeting Their Responsible AI Objectives

Artificial intelligence is an enormously powerful technology, and organizations need to use it carefully. Can they? And what does “carefully” even mean? As in relationships, reducing risk requires building trust, and with artificial intelligence that means leaders have to face some pretty tough questions. What does “responsible” mean when it comes to artificial intelligence? In short, responsible artificial intelligence means the technology is explainable. It also means being able to anticipate problems and correct them before they start. For example, autonomous cars have been around for a while now, but people really started to take notice when those cars began getting into accidents involving human beings. Incidents like that make people skeptical of the technology, when really the issues should have been caught before they ever got that far.
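
To make “explainable” a little more concrete, here is a minimal sketch (my own illustration, not anything Google has published) of one common technique, permutation feature importance: shuffle one input feature at a time and see how much the model’s accuracy drops. The dataset and model are just illustrative stand-ins.

```python
# Minimal illustration of permutation feature importance: shuffle one input
# feature at a time and measure how much the model's test accuracy drops.
# The dataset and model are illustrative stand-ins, not anything Google uses.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, random_state=0)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
baseline = model.score(X_test, y_test)

rng = np.random.default_rng(0)
for i, name in enumerate(data.feature_names[:5]):   # first five features, for brevity
    X_shuffled = X_test.copy()
    rng.shuffle(X_shuffled[:, i])                    # break this feature's link to the labels
    drop = baseline - model.score(X_shuffled, y_test)
    print(f"{name}: accuracy drop {drop:.3f}")
```

If shuffling a feature barely changes accuracy, the model isn’t really relying on it, which is one small, practical way to start explaining what a model is doing.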

That said, we are all human, so some mistakes are going to be made. How bad they are, and how far they go, is up to those who are responsible for the technology in the first place. No pun intended. That’s why Google is joining Facebook, Stanford and other organizations in setting up institutions to support ethical AI. Google has created an Advanced Technology External Advisory Council to help shape the responsible development and use of AI, covering facial recognition, fair machine learning algorithms and other ethical issues related to AI.
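
As a rough illustration of what a “fair machine learning” check can look like in practice (my own sketch, not anything the council has published), one of the simplest fairness metrics is demographic parity: comparing a model’s positive-prediction rate across groups. The predictions and group labels below are made-up data.

```python
# Minimal sketch of a demographic parity check: compare the rate of positive
# predictions (e.g. approvals) across two groups. All data here is made up.
import numpy as np

predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])                     # 1 = positive decision
group       = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])  # group label per person

rate_a = predictions[group == "a"].mean()
rate_b = predictions[group == "b"].mean()
print(f"positive rate, group a: {rate_a:.2f}")
print(f"positive rate, group b: {rate_b:.2f}")
print(f"demographic parity gap: {abs(rate_a - rate_b):.2f}")  # closer to 0 is better by this metric
```

A real fairness review would look at more than one metric, and at the data itself, but even this simple gap is the kind of measurable signal an advisory board could ask for.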

The current advisors include academics focused on technical aspects of AI (such as computational mathematics and drones) as well as experts in ethics, privacy, and public policy. There’s also an international dimension, with members from as far afield as Hong Kong and South Africa.

The group will hold its first meeting in April and plans three more before the end of the year. Its input will clearly feed into Google’s development process, but the company will also publish summaries of the discussions to encourage members to share that information within their own organizations. What’s interesting about this set-up is that the aim is to improve the tech industry overall, not just the work that Google itself is doing.

According to Google’s AI principles, applications of artificial intelligence should:

  1. Be socially beneficial.
  2. Avoid creating or reinforcing unfair bias.
  3. Be built and tested for safety.
  4. Be accountable to people.
  5. Incorporate privacy design principles.
  6. Uphold high standards of scientific excellence.
  7. Be made available for uses that accord with these principles.

Back in 2018, Google also spelled out the AI applications it will not pursue, including:

  1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
  2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
  3. Technologies that gather or use information for surveillance violating internationally accepted norms.
  4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.

This is all good news, given that Google is such a large company and artificial intelligence is still relatively unknown territory. By that I mean the world doesn’t really understand it, and in some cases, neither do we. But all of that could change over the next few years.
