Public Broadcasting for Northwest Indiana & Chicagoland since 1987
OpenAI says it shares Anthropic's 'red lines' over military AI use

The Pentagon is seen from an airplane, Monday, Feb. 2, 2026, in Washington.
Julia Demaree Nikhinson / Associated Press

OpenAI CEO Sam Altman says he shares the "red lines" set by rival Anthropic restricting how the military uses AI models, amid Anthropic's escalating feud with the Pentagon.

The Department of Defense has given Anthropic a deadline of 5:01 p.m. ET today to drop restrictions that prevent its AI model, Claude, from being used for domestic mass surveillance or entirely autonomous weapons. The Pentagon has said it doesn't intend to use AI in those ways, but requires AI companies to allow their models to be used "for all lawful purposes."

Defense officials say if Anthropic doesn't comply, it could lose its contract worth as much as $200 million with the U.S. military.

The government has also threatened to invoke the Korean War-era Defense Production Act (DPA) to compel Anthropic to allow use of its tools and has, at the same time, warned it would label Anthropic a "supply chain risk," potentially blacklisting it from lucrative government contracts.

By wading into the standoff between Anthropic and the Pentagon, Altman could complicate the Pentagon's efforts to replace Anthropic if it follows through on its threat to cancel the contract. OpenAI also has a Defense Department contract, along with Google, xAI, and Anthropic, but Anthropic was the first to be cleared for use on classified systems.

"I don't personally think the Pentagon should be threatening DPA against these companies," Altman told CNBC in an interview on Friday morning. He said he thinks it's important for companies to work with the military "as long as it is going to comply with legal protections" and "the few red lines" that "we share with Anthropic and that other companies also independently agree with."

"For all the differences I have with Anthropic, I mostly trust them as a company, and I think they really do care about safety, and I've been happy that they've been supporting our warfighters," Altman added. "I'm not sure where this is going to go."

In an internal note sent to staff on Thursday evening, Altman said OpenAI was seeking to negotiate a deal with the Pentagon to deploy its models in classified systems with exclusions preventing use for surveillance in the U.S. or to power autonomous weapons without human approval, according to a person familiar with the message who was not authorized to speak publicly. The Wall Street Journal first reported Altman's note to staff.

The Defense Department didn't respond to a request for comment on Altman's statements.

Whether AI companies can set restrictions on how the government uses their technology has emerged as a major sticking point in recent months between Anthropic and the Trump administration.

On Thursday, Anthropic CEO Dario Amodei said the Pentagon's threats over its contract would not make the company budge. "We cannot in good conscience accede to their request," he wrote in a lengthy statement.

"Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner," he said, using the Pentagon's rebranded "Department of War" moniker. But, he added, domestic mass surveillance and fully autonomous weapons are uses that are "simply outside the bounds of what today's technology can safely and reliably do."

Emil Michael, the Pentagon's undersecretary for research and engineering, shot back in a post on X, accusing Amodei of lying and having a "God-complex."

"He wants nothing more than to try to personally control the US Military and is ok putting our nation's safety at risk," Michael wrote. "The @DeptofWar will ALWAYS adhere to the law but not bend to whims of any one for-profit tech company," he wrote.

In an interview with CBS News, Michael said federal law and Pentagon policies already bar the use of AI for domestic mass surveillance and autonomous weapons.

"At some level, you have to trust your military to do the right thing," he said.

Independent experts say the standoff is highly unusual in the world of Pentagon contracting.

"This is different for sure," said Jerry McGinn, director of the Center for the Industrial Base at the Center for Strategic and International Studies, a Washington DC think tank. Pentagon contractors don't usually get to tell the Defense Department how their products and services can be used, he notes "because otherwise you'd be negotiating use cases for every contract, and that's not reasonable to expect."

At the same time, McGinn notes, artificial intelligence is a new and largely untested technology. "This is a very unusual, very public fight," he said. "I think it's reflective of the nature of AI."

Copyright 2026 NPR

Shannon Bond is a business correspondent at NPR, covering technology and how Silicon Valley's biggest companies are transforming how we live, work and communicate.
Geoff Brumfiel works as a senior editor and correspondent on NPR's science desk. His editing duties include science and space, while his reporting focuses on the intersection of science and national security.