Anthropic has publicly touted its focus on “safeguards,” seeking to limit the military use of its technology
The US military actively used Anthropic’s Claude AI model during the operation to capture Venezuelan President Nicolas Maduro last month, according to reports from Axios and the Wall Street Journal – revealing that the company’s technology played a direct role in the overseas raid.
Claude was used during the operation itself, not merely in preparatory phases, Axios and the WSJ both reported on Friday. Its exact role remains unclear, though the military has previously used AI models to analyze satellite imagery and intelligence in real time.
The San Francisco-based AI lab’s usage policies explicitly prohibit its technology from being used to “facilitate violence, develop weapons or conduct surveillance.” No Americans lost their lives in the raid, but dozens of Venezuelan and Cuban soldiers and security personnel were killed on January 3.
“We cannot comment on whether Claude, or any other AI model, was used for any specific operation, classified or otherwise,” an Anthropic spokesperson told Axios. “Any use of Claude – whether in the private sector or across government – is required to comply with our Usage Policies.”
Anthropic's rivals OpenAI, Google, and Elon Musk's xAI all have deals granting the Pentagon access to their models without many of the safeguards that apply to ordinary users. But only Claude is deployed, via a partnership with Palantir Technologies, on the classified platforms used for the US military's most sensitive work.
The revelation comes at an awkward moment for the company, which has spent recent weeks publicly stressing its commitment to AI safeguards and positioning itself as the safety-conscious alternative within the AI industry.
CEO Dario Amodei has warned of the existential dangers posed by the unconstrained use of AI. On Monday, the head of Anthropic's Safeguards Research Team, Mrinank Sharma, abruptly resigned with a cryptic warning that "the world is in peril." Days later, the company poured $20 million into a political advocacy group backing robust AI regulation.
Anthropic is reportedly negotiating with the Pentagon over whether to loosen restrictions on deploying AI for autonomous weapons targeting and domestic surveillance. The standoff has stalled a contract worth up to $200 million, with Secretary of War Pete Hegseth vowing not to use models that “won’t allow you to fight wars.”