Google has announced the formation of an AI ethics panel to tackle how the company should responsibly develop and use AI.
Made up of a diverse group from academic, corporate and government backgrounds, the council will meet regularly in 2019 to discuss the issues AI poses.
The panel is an extension of Google's AI Principles, a broad set of goals the company put in place last year after criticism of its involvement in military contracts.
What is the Panel and What are its Aims?
The Advanced Technology External Advisory Council (ATEAC) was announced by Google's Senior Vice President, Kent Walker, on the company's blog. According to the post, the panel is an extension of Google's goal of developing and using AI responsibly, a position laid out in its AI Principles in June 2018. Among the issues the panel has been challenged with tackling are facial recognition and fairness in machine learning.
The council will serve over the course of 2019 and hold four meetings in that period, the first taking place in April. In its blog post, Google states that it will encourage members of the council to share what they learn, and that it will publish a report summarizing the discussions.
What are Google's AI Principles?
Revealed last year, Google's AI Principles are said to be a direct reaction to its decision not to renew its Pentagon contract (Project Maven), under which it provided AI for analyzing drone footage.
Google's work in this area had proved controversial, prompting several resignations among Google staff. It appears to be a growing issue in the tech space, with other companies such as Microsoft also feeling the heat for their work with defense departments. Google's AI Principles set in stone its mandate going forward, and while the company was much mocked for removing its “Don't be evil” policy from its code of conduct (replacing it with “Do the right thing”), the AI Principles arguably go much further than a simple tagline.
So, what are the principles?
- Be Socially Beneficial – It's not enough for an AI project to be profitable; it must also have a positive impact on society.
- Avoid Creating or Reinforcing Unfair Bias – As other tech companies have found, AI relies heavily on its training data, which can be skewed by human involvement. Amazon discovered this when its experimental hiring AI favored male candidates, purely because of the skewed historical hiring data it had been fed (a point illustrated in the sketch after this list).
- Be Built and Tested for Safety – The rise of AI will have a big impact on our safety, especially in areas such as automated transportation, where life-and-death decisions are entrusted to a machine.
- Be Accountable to People – Google states that its AI projects will be open to feedback and subject to human direction and control.
- Incorporate Privacy Design Principles – Google's AI will have data protection built in; collection will be transparent, and users will have control over what is gathered.
- Uphold High Standards of Scientific Excellence – Google aims to share its knowledge around AI with key stakeholders, with a view to opening up new avenues for the technology through collaboration.
- Be Made Available for Uses that Accord with these Principles – The company vows to limit potentially harmful or abusive applications of its AI, whether the threat is to society or to individuals.
In addition, there are areas to which Google has promised it will not apply its AI. These include surveillance that “contradicts international norms”, technologies that violate human rights, and weapons that aim to harm people (although the company will continue to work with the military on recruitment, cybersecurity, and search and rescue).
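To make the bias principle above concrete: skew in training data is measurable before any model is built. The minimal Python sketch below uses an invented toy dataset (not Google's or Amazon's) and applies the common "four-fifths rule" screen for disparate impact; a model trained on data that fails this check is likely to reproduce the skew, much as Amazon's hiring tool did.

```python
from collections import defaultdict

# Hypothetical historical hiring records: (applicant_group, was_hired).
# The data is invented purely for illustration; any model trained on it
# would tend to learn and reproduce the skew it contains.
records = [
    ("men", True), ("men", True), ("men", True), ("men", False),
    ("women", True), ("women", False), ("women", False), ("women", False),
]

def selection_rates(records):
    """Return the fraction of applicants hired within each group."""
    hired, total = defaultdict(int), defaultdict(int)
    for group, was_hired in records:
        total[group] += 1
        hired[group] += int(was_hired)
    return {group: hired[group] / total[group] for group in total}

rates = selection_rates(records)
print(rates)  # {'men': 0.75, 'women': 0.25}

# Four-fifths rule: flag any group whose selection rate falls below
# 80% of the most-favored group's rate -- a common screening heuristic
# for disparate impact in hiring data.
best = max(rates.values())
flagged = [g for g, r in rates.items() if r < 0.8 * best]
print("Potentially biased against:", flagged)  # ['women']
```

This kind of check only surfaces one simple form of imbalance, but it shows why "avoid creating or reinforcing unfair bias" starts with auditing the data, not just the model.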
Who is on the Panel?
ATEAC is made up of eight key stakeholders with years of experience in the AI space from corporate, academic and government backgrounds. Google makes it clear that panel members represent their own perspectives and do not speak for the institutions they are associated with.
- Alessandro Acquisti – Professor of Information Technology and Public Policy at Heinz College, Carnegie Mellon University.
- Bubacarr Bah – Senior Researcher in Mathematics, with a specialization in Data Science, at the African Institute for Mathematical Sciences South Africa, and Assistant Professor in the Department of Mathematical Sciences at Stellenbosch University.
- De Kai – Professor of Computer Science and Engineering at the Hong Kong University of Science and Technology, and Distinguished Research Scholar at Berkeley's International Computer Science Institute.
- Dyan Gibbens – CEO of Trumbull, a startup focused on automation, data and environmental resilience in energy and defense.
- Joanna Bryson – Associate Professor in the Department of Computer Science at the University of Bath, who has also consulted for LEGO on its child-oriented Mindstorms programming line.
- Kay Coles James – President of The Heritage Foundation, focusing on free enterprise, limited government, individual freedom and national defense.
- Luciano Floridi – Professor of Philosophy and Ethics of Information at the University of Oxford, Professorial Fellow of Exeter College, and Turing Fellow and Chair of the Data Ethics Group at the Alan Turing Institute.
- William Joseph Burns – Former U.S. Deputy Secretary of State and President of the Carnegie Endowment for International Peace, the oldest international affairs think tank in the United States.
With the pace of development in the AI sector, these principles certainly represent an important stated code of conduct. What remains to be seen, of course, is how well Google holds true to these lofty aims.