Artificial Intelligence is an enormously powerful technology, and organizations need to use it carefully. Can they? And what does "carefully" mean? As in relationships, reducing risk requires building trust. With artificial intelligence, that means leaders have to face some pretty tough questions. When it comes to AI, what counts as "responsible"? In short, responsible artificial intelligence means building and deploying AI that is safe, fair, and accountable to the people it affects.
That said, we are all human, so some mistakes are going to be made. How bad those mistakes are, and how far they spread, is up to the people responsible for the technology in the first place. No pun intended. That's why Google is joining Facebook, Stanford, and other organizations in setting up institutions to support ethical AI. Google has created an Advanced Technology External Advisory Council that will help shape the responsible development and use of AI, weighing in on issues such as facial recognition, fairness in machine learning algorithms, and other ethical questions raised by the technology.
The current advisors include academics focused on technical aspects of AI (such as computational mathematics and drones) as well as experts in ethics, privacy, and public policy. There's also an international dimension, with members from as far afield as Hong Kong and South Africa.
The group will hold its first meeting in April and plans three more before the end of the year. Its input will clearly feed into Google's development process, but the company will also publish summaries of the discussions so that members can share what they learn within their own organizations. What's interesting about this set-up is that the aim is to improve the tech industry overall, not just the work Google itself is doing.
Google's AI principles state that AI applications should:
- Be socially beneficial.
- Avoid creating or reinforcing unfair bias.
- Be built and tested for safety.
- Be accountable to people.
- Incorporate privacy design principles.
- Uphold high standards of scientific excellence.
- Be made available for uses that accord with these principles.
Back in 2018, Google also spelled out the applications it won't pursue, which include:
- Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
- Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
- Technologies that gather or use information for surveillance violating internationally accepted norms.
- Technologies whose purpose contravenes widely accepted principles of international law and human rights.
This is all good news, given that Google is such a large company and artificial intelligence is still relatively poorly understood. And by that I mean the world doesn't really understand it, and in some cases, neither do we. But all that could change over the next few years.