Federal Agencies Quietly Test Anthropic's Advanced AI Despite Trump Administration Ban
The Commerce Department's Center for AI Standards and Innovation and other government bodies are quietly evaluating Anthropic's new AI hacking capabilities.
Several federal agencies are discreetly evaluating Anthropic's latest artificial intelligence model, even as the Trump administration has moved to restrict government use of the company's technology, according to sources familiar with the matter.
The Commerce Department's Center for AI Standards and Innovation is among the government bodies quietly assessing Anthropic's new AI hacking capabilities, raising questions about the consistency of the administration's stance on approved AI vendors.
Officials involved in the evaluations are said to be operating under research and standards-testing exemptions that allow them to examine the capabilities of AI systems that have not received formal government approval for broader deployment.
Anthropic's latest model has drawn significant interest within national security and cybersecurity circles due to its advanced ability to identify software vulnerabilities and simulate cyberattacks — capabilities that government agencies are eager to understand and potentially counter.
The quiet assessments highlight a growing tension within the federal government between political directives favoring certain AI providers and the practical need for agencies to benchmark emerging technologies regardless of their vendor relationships.
Critics argue that conducting evaluations outside official procurement channels lacks transparency and could create inconsistencies in how the administration enforces its own technology policies.
The Commerce Department has not publicly commented on the nature or scope of its AI evaluations, and Anthropic has declined to provide details about its government-facing engagements. The White House has also not addressed the apparent contradiction between the ban and the ongoing testing activities.