6 Jun, 2024

A group of current and former employees of the leading AI companies OpenAI and Google DeepMind has published a letter warning about the dangers of advanced AI, alleging that the companies prioritize financial gain over safety and transparency.

The letter, titled “A Right to Warn about Advanced Artificial Intelligence,” is signed by thirteen individuals, including eleven current and former OpenAI employees and two from Google DeepMind. Six signatories remain anonymous. The coalition argues that without proper regulation, AI systems pose significant risks, including misinformation, manipulation, entrenchment of inequalities, and even human extinction.

OpenAI spokesperson Lindsey Held responded, highlighting the company's commitment to safety and engagement with global stakeholders. Google DeepMind has not commented publicly.

The letter claims that AI companies hold substantial non-public information about the risks of their technologies but face few obligations to share it with governments, leaving the real capabilities of their systems undisclosed. The group emphasizes the need for stronger whistleblower protections, noting that existing protections cover illegal activity, not risks that are not yet regulated.

The letter's release follows high-profile departures from OpenAI, including that of Chief Scientist Ilya Sutskever, and the disclosure of non-disparagement agreements that prevented former employees from speaking negatively about the company.

The group demands that AI companies stop enforcing non-disparagement agreements, create anonymous channels for raising concerns, foster a culture of open criticism, and protect employees who disclose risk-related information.