Google Chief Executive Officer Sundar Pichai (pictured) announced on Thursday seven new principles intended to guide the company in its future use of artificial intelligence (AI). “We recognize that such powerful technology raises equally powerful questions about its use. How AI is developed and used will have a significant impact on society for many years to come,” Pichai stated. “As a leader in AI, we feel a deep responsibility to get this right.”
First among the principles revealed by Pichai is the objective of being “socially beneficial,” in other words that the “overall likely benefits substantially exceed the foreseeable risks and downsides” of using AI. Second is the principle of avoiding “creating or reinforcing unfair bias,” followed by the objective that AI systems “be built and tested for safety.” Fourth, Pichai mentioned the principle of accountability to people, or providing “appropriate opportunities for feedback.” Next comes the principle of incorporating “privacy design principles,” while Google’s sixth objective is upholding “high standards of scientific excellence.” Lastly, Pichai said that his company’s AI will “be made available for uses that accord with these principles.”
Additionally, Google pledged not to design or deploy AI in “technologies that cause or are likely to cause overall harm,” including weapons and technologies used for surveillance, according to Pichai’s statement.