Google has removed its AI policy commitment to avoiding harmful applications, including potential use in weaponry.
As reported by Bloomberg, the change was made in Google's AI Principles, which previously included a section titled "AI applications we will not pursue." This section explicitly stated a commitment not to develop technologies that "cause or are likely to cause overall harm," such as weapons or other dangerous uses.
In response, Google stated that democratic nations should lead AI development, guided by core values such as freedom, equality, and human rights. The company emphasized the need for collaboration between organizations, businesses, and governments to ensure AI enhances safety and supports national security.
Margaret Mitchell, former co-lead of Google’s Ethical AI team, criticized the removal, warning that it could shape Google's future direction:
"Having that removed is erasing the work that so many people in the ethical AI space and the activist space as well had done at Google, and, more problematically, it means Google will probably now work on deploying technology directly that can kill people," said Mitchell, who now serves as Chief Ethics Scientist at AI startup Hugging Face.