Last week, we touched on the need for ethics in AI. Specifically, we looked at how Microsoft has been leading the charge with its AI for Good initiative and Aether Committee. We also mentioned the criticism that other major players in the AI space have garnered for failing to incorporate a transparent ethics component into their business models.
A few days after our blog was posted, Google announced the creation of “an external advisory council to help advance the responsible development of AI”. This advisory council, known as ATEAC (Advanced Technology External Advisory Council), will serve to promote fairness in machine learning, encourage the ethical use of facial recognition, and bring a diverse set of views to the development of these advanced technologies. The announcement has not come without its own criticism, however: many Google employees have spoken out against the appointment of Kay Coles James, citing her current position as president of the conservative think tank The Heritage Foundation, as well as her allegedly transphobic views. In addition to this controversial appointee, another member of the external review board, Carnegie Mellon professor Alessandro Acquisti, has already announced his resignation via Twitter.
This announcement follows Google’s unveiling of its core AI principles last June, when the company revealed seven key principles that will shape its approach to developing AI technologies: be socially beneficial; avoid creating or reinforcing unfair bias; be built and tested for safety; be accountable to people; incorporate privacy design principles; uphold high standards of scientific excellence; and be made available for uses that accord with these principles. These seven guidelines seem straightforward and will likely play a big role in Google becoming a more ethical company, especially with AI being such a large part of what it does.
There is, however, one thing missing from these principles that is crucial to creating a solid ethical foundation upon which these technologies can be built: transparency. The need for transparency is paramount, as it enables public trust; without it, the rest of the guiding principles become far less meaningful.
It is rather peculiar that Google chose not to include transparency, especially considering that Google already has an internal ethics review board, created as part of an agreement with DeepMind when Google acquired the company in 2014. That review board remains quite secretive to this day. Under the agreement, if DeepMind ever achieves its founding goal of creating AGI (artificial general intelligence), the internal ethics review board will have sole control over the use of the technology. Given how secretive the internal board is, it is hard to tell whether that is a good thing.
Regardless of the lack of transparency and the secretive nature of the internal review board, Google is certainly taking steps in the right direction. Its seven principles seem sound and, if followed, will bring about a more equitable future for all of its users and customers.