Google, Microsoft, and xAI have agreed to provide the U.S. government with early access to their AI models ahead of public release, allowing officials to assess potential risks and capabilities, according to The Wall Street Journal.
The evaluations will be conducted by the Center for AI Standards and Innovation (CAISI) under the U.S. Department of Commerce. The agency has already carried out more than 40 tests, including some on unreleased models. OpenAI and Anthropic signed similar agreements with the department back in 2024.
Under the arrangement, the companies will provide access to versions of their models with some or all safety safeguards removed, enabling deeper analysis of potential risks. The center was originally established in 2023 under former President Joe Biden as the U.S. AI Safety Institute (USAISI) and was later renamed under Donald Trump, Bloomberg notes.
The timing of these agreements coincides with the release of Anthropic’s restricted Claude Mythos model, which has reportedly raised concerns among U.S. officials. According to the company, the model has already identified “thousands” of zero-day vulnerabilities.
Such cooperation between AI companies and U.S. authorities is not new. In early 2026, OpenAI signed a deal with the Department of Defense, and similar arrangements reportedly exist with Google, according to The Information. Anthropic declined such an agreement, seeking guarantees that its models would not be used for surveillance; OpenAI says its own deal includes those safeguards.