Google has published a set of principles to guide its work in artificial intelligence, following controversy over its involvement in a drone project with the US Defense Department, The Verge reports. The company pledges never to develop AI for use in weaponry and sets broad guidelines for its AI work, touching on issues such as bias, privacy and human oversight.

However, Google says it will continue to work with the military "in many other areas," and its involvement in the Pentagon's drone programme, Project Maven, which uses AI to analyse surveillance footage, will continue until its contract ends in 2019. Thousands of Google employees recently signed an open letter urging the company to cut ties with Project Maven, and about a dozen resigned over the issue.

The new principles also state that Google will not work on AI surveillance projects that violate "internationally accepted norms," or on projects that contravene "widely accepted principles of international law and human rights." The company says its AI research will focus on being "socially beneficial": avoiding unfair bias, remaining accountable to humans and subject to human control, upholding high standards of scientific excellence, and incorporating privacy safeguards.