When Google Cloud chief Diane Greene announced that Andrew Moore would later this year replace Fei-Fei Li as head of artificial intelligence for Google Cloud, she mentioned he was dean of the school of computer science at Carnegie Mellon University and that he formerly worked at Google.
What Greene didn't mention was that Moore also is co-chairman of an AI task force created by the Center for a New American Security (CNAS), a think tank with strong ties to the US military. Moore's co-chair on the task force is Robert Work, a former deputy secretary of defense, whom the New York Times has called "the driving force behind the creation of Project Maven," the US military's effort to analyze data, such as drone footage, using AI.
Google's involvement in Project Maven caused a huge backlash inside the company earlier this year, forcing CEO Sundar Pichai to pledge that Google would never work on AI-enhanced weapons.
The hiring of Moore is sure to re-ignite debate about Google's involvement in certain markets for artificial intelligence - one of the hottest areas of tech, with massive business potential - and the relationship the company maintains with the military.
During his tenure at Carnegie Mellon, Moore often discussed the role of AI in defensive and military applications, as in his 2017 talk on Artificial Intelligence and Global Security:
"We could afford if we wanted to, and if we needed, to be surveilling pretty much the whole world with autonomous drones of various kinds," Moore said. "I'm not saying we'd want to do that, but there's not a technology gap there where I think it's actually too difficult to do. This is now practical."
CNAS, the organization that formed the task force Moore co-chairs on AI and security, focuses on national security issues, and its stated mission is to "develop strong, pragmatic and principled national security and defense policies that promote and protect American interests and values."
Google's decision to hire Moore was greeted with displeasure by at least one former Googler who objected to Project Maven.
A Google spokesman declined to comment.
A voice of caution on deploying AI in the real world
Moore, who was born in the United Kingdom but has since become a US citizen, has frequently spoken out about the need for caution in taking AI out of the lab and into the real world. When the CNAS task force was announced in March, Moore stressed the importance of "ensuring that such systems work with humans in a way which empowers the human, not replaces the human, and which keeps ultimate decision authority with the human."
And on a recent CNAS podcast, he described what he called his "conservative" view on AI in the real world: "Even if I knew that for instance launching a fleet of autonomous vehicles in a city would reduce deaths by 50%, I wouldn't want to launch it until I came across some formal proofs of correctness which showed me that it was absolutely not going to be involved in unnecessary deaths."
Still, he has not shied away from dealing with the military sector.
Moore's Carnegie Mellon bio mentions past work involving "detection and surveillance of terror threats," and he's listed as a fact-finding contributor on a September 2017 Naval Research Advisory Report on "Autonomous and Unmanned Systems in the Department of the Navy."
During the 2017 talk on global security, he mentioned the possibility of incorporating digital personal assistants, such as those used in consumer gadgets made by Google and Amazon, into military applications. "There is an open question as to whether and when and how we can develop personal assistants for warfighters and commanders to have that full set of information which helps remove the 'fog of war,' without getting in their way with too much information," he said.
Google hired Moore to oversee the AI efforts within Google Cloud, the unit that offers Google's popular cloud-computing services, such as data storage, computing and machine learning. He replaces Li, who has returned to her professorship at Stanford.
His hiring comes as Google tries to move past the controversy that erupted when the company's involvement in Project Maven became known.
Earlier this year, when word leaked that Google was assisting the military to analyze drone footage, thousands of Google employees signed a petition demanding that management end the company's involvement. Others refused to work on the project or leaked documents to reporters that proved embarrassing for management. About a dozen employees resigned in protest.
In June, Google CEO Pichai appeared to yield to their demands. He released a list of seven principles that would guide the company's development of AI. They included never building AI-enhanced weapons and ensuring AI is applied to applications that are socially beneficial, safe and won't create unfair bias. The company did not rule out working with the military on services that don't violate the principles, such as e-mail or data storage.
The feeling of many of those opposed to Maven inside Google was that the company should not be involved in any way with the military. And for at least some of Google's staff who participated in the Maven protest - as well as for former employees sympathetic to their cause - Moore's hiring raises questions about Google's commitment to those AI principles.
Moore himself has acknowledged the potential dangers of weaponized AI.
"Just as it's a good thing that we're able to do AI so quickly," he said during the 2017 talk, AI is also a "threat."
"Just as one of our genius grad students can come up with something quickly, so can someone less desirable. And we have to be ready for that in what we're doing," he said.