In the wake of employee resignations and protests against its tech being used in military drone strikes, Google has reaffirmed its aspiration to "Don't Be Evil."
Sundar Pichai, Google's CEO, has released a set of ethical guidelines to govern the company's use of artificial intelligence. The new rules ban the development and use of Google's AI for weapons, and for surveillance tools that would violate "internationally accepted norms."
For nearly 20 years, Google kept the mantra "Don't Be Evil" in its corporate code of conduct. But it dropped the motto in April or May of this year (2018), which made some commentators (well, me anyway) wonder whether Google had decided to actively Be Evil. That suspicion was reinforced when a number of employees resigned over the company's involvement in a controversial military drone pilot program, Project Maven.
But don't worry! Google may help US military AIs recognise and classify targets, but it won't have anything to do with killing those targets it has labelled "Bad Guy #1." And how does the company square any of this with bidding for the Joint Enterprise Defense Infrastructure contract? (I know, I know… JEDI isn't a "weapon." Screw your sophistry and semantics!) As for refusing to develop surveillance tools that would violate "internationally accepted norms" — what does that mean, anyway? "Internationally accepted" by whom? North Korea? Saudi Arabia? Syria? Myanmar? Can you see the problem here?
So okay, Google isn’t being run by Doctor Evil. But how many Mini-Me clones are working in Research & Development?