OpenAI CEO Sam Altman announced that the company will provide the U.S. AI Safety Institute early access to its upcoming generative AI model for safety testing.
This collaboration aims to address risks in AI platforms and counter criticism that OpenAI prioritizes developing powerful AI technologies over safety.
In a post on X, Altman highlighted the partnership, which mirrors a similar agreement OpenAI struck with the U.K.’s AI safety body in June. The move comes after OpenAI disbanded its safety team, prompting the resignation of key members who now work on AI safety at other organizations.
In response to criticism, OpenAI pledged to dedicate 20% of its compute resources to safety research and eliminate non-disparagement clauses that discouraged whistleblowing. However, concerns persisted as OpenAI staffed its safety commission with insiders and reassigned a top safety executive.
Five U.S. senators recently questioned OpenAI’s policies, prompting a response from OpenAI’s chief strategy officer, Jason Kwon, affirming the company’s commitment to safety protocols.
The timing of the agreement coincides with OpenAI’s endorsement of the Future of Innovation Act, a Senate bill that would formally establish the AI Safety Institute as an executive body responsible for setting AI standards and guidelines. Critics suggest the endorsement could be an attempt to influence AI policymaking.
Notably, Altman is a member of the U.S. Department of Homeland Security’s AI Safety and Security Board, which advises on the safe development and deployment of AI. OpenAI has also significantly increased its federal lobbying efforts, spending $800,000 in the first half of 2024 compared to $260,000 in 2023.
The U.S. AI Safety Institute, part of the Commerce Department’s National Institute of Standards and Technology, collaborates with companies including Anthropic, Google, Microsoft, Meta, Apple, Amazon, and Nvidia.