
Pentagon and Anthropic Clash Over AI Guardrails in Military Systems

A high-stakes dispute is unfolding between the United States Department of Defense and AI company Anthropic over the use of artificial intelligence in classified military systems.

At the center of the debate is Claude, the flagship AI model of Anthropic, the company led by CEO Dario Amodei. According to reports, Claude is currently operating within Pentagon systems through a partnership with Palantir Technologies.

The controversy reportedly intensified after Claude was allegedly used during a U.S. operation targeting Venezuelan leader Nicolás Maduro. While details of its role remain unclear, AI systems like Claude can analyze intercepted communications, process drone imagery, and identify patterns in intelligence data.

Anthropic has built its reputation around AI safety and ethical guardrails, restricting certain uses of its models, such as mass surveillance of Americans, autonomous weapons, and specific targeting decisions. However, U.S. Defense Secretary Pete Hegseth reportedly issued a deadline demanding that the company remove those limitations for military use.

The Pentagon could invoke the Defense Production Act or label Anthropic a supply chain risk if the company refuses to comply — a move that would be unprecedented against a leading American AI firm.

The dispute raises broader questions about who ultimately controls advanced AI systems in national security contexts.
