OpenAI – which millions of users trust with everything from legal documents to tax returns – is revising its newly signed contract with the US Department of War, just days after announcing that it would replace Anthropic in government systems, with CEO Sam Altman conceding that the rushed rollout “looked opportunistic and sloppy.”
Hours after negotiations collapsed between the Pentagon and rival startup Anthropic on Friday, OpenAI agreed to supply its AI models for use in classified military operations. The breakdown followed talks with Defense Secretary Pete Hegseth over how the government could deploy advanced AI tools.
OpenAI initially described its agreement as containing “more guardrails than any previous agreement for classified AI deployments, including Anthropic’s.” But on Monday, CEO Sam Altman said the company was working with the department to add explicit contractual language barring the intentional use of its systems for domestic surveillance of U.S. persons or nationals.
“The AI system shall not be intentionally used for domestic surveillance of US persons and nationals,” Altman said the revised terms would state, adding that intelligence agencies such as the National Security Agency would be excluded from the deal for now.
So – while OpenAI has likely bought some legal cover with these changes, the revised terms bar only intentional use, leaving open the possibility of unintentional surveillance.
From a Monday update to OpenAI’s statement on the deal:
Throughout our discussions, the Department made clear it shares our commitment to ensuring our tools will not be used for domestic surveillance. To make our principles as clear as possible, we worked together to add additional language to our agreement.
This language makes explicit that our tools will not be used to conduct domestic surveillance of U.S. persons, including through the procurement or use of commercially acquired personal or identifiable information. The Department also affirmed that our services will not be used by Department of War intelligence agencies like the NSA. Any services to those agencies would require a new agreement.
The new language reads:
- Consistent with applicable laws, including the Fourth Amendment to the United States Constitution, National Security Act of 1947, FISA Act of 1978, the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.
- For the avoidance of doubt, the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information.
The Department of War plans to convene a working group made up of leaders from the frontier AI labs, cloud providers, and the Department’s policy and operational communities. OpenAI will participate and expects this will be an important forum for ongoing dialogue on emerging AI capabilities, privacy, and national security challenges going forward.
These updates build on the framework we announced last week and, we hope, will help create a pathway for other labs to work with the Department going forward.
* * *
Guardrails, Technical Controls and Legal Debate
OpenAI says it can uphold its own red lines through a mix of contractual provisions and technical controls. The company says it will deploy models via cloud access rather than installing them directly onto military hardware and will keep its personnel in the loop. It has reiterated that its technology cannot be used to direct autonomous weapons systems.
Altman suggested the company was comfortable relying in part on existing law. “Anthropic seemed more focused on specific prohibitions in the contract, rather than citing applicable laws, which we felt comfortable with,” he said Saturday.
But by Monday, he acknowledged concerns about how AI systems could enable large-scale data gathering.
“We shouldn’t have rushed to get this out on Friday. The issues are super complex, and demand clear communication,” Altman wrote in a message to employees reposted on X. “We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy.”
The updated language would “prohibit deliberate tracking, surveillance or monitoring of US persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information,” according to the company.
Fallout From Anthropic’s Collapse
The Pentagon’s pivot to OpenAI came after Anthropic’s negotiations unraveled over two core red lines articulated by its CEO, Dario Amodei: no domestic mass surveillance and no use of AI in lethal autonomous weapons systems. Anthropic’s terms would also have required the Pentagon to seek approval before using its models in the heat of battle.
According to the Financial Times, Hegseth sought language permitting the models for “all lawful use.” Anthropic executives argued existing U.S. law could allow mass AI-enabled data collection and pressed for tighter contractual safeguards until new legislation was enacted. Discussions reportedly stalled over terms governing the mass collection of publicly available data.
The Pentagon had signaled openness to revising phrasing that Anthropic viewed as overly broad, and senior figures at the company believed a deal was close. But negotiations ultimately fell apart.
Since then, the Trump administration has moved aggressively against Anthropic. President Donald Trump has directed agencies to phase out the company’s tools. The Treasury Department, the Federal Housing Finance Agency, and government-backed mortgage giants Fannie Mae and Freddie Mac all announced they would end Anthropic contracts, with full removal of its tools to occur within six months. The Pentagon also designated the company a supply chain risk.
Employee Dissent and Public Protest
The deal has triggered unrest inside OpenAI and across the broader tech sector. Employees have voiced concerns internally and on social media, according to people familiar with the matter. Nearly 900 workers at OpenAI and Google signed an open letter urging leadership to refuse government demands for domestic mass surveillance or autonomous killing capabilities.
Over the weekend, chalk graffiti appeared outside OpenAI’s San Francisco office reading “NO TO MASS SURVEILLANCE” and urging staff to “Do the right thing!”
The controversy has also spilled into the consumer market. Anthropic’s chatbot, Claude, briefly climbed above ChatGPT in Apple’s App Store rankings, according to Sensor Tower data, amid calls online for users to delete ChatGPT.
Miles Brundage, OpenAI’s former head of policy research, publicly criticized the company’s handling of the negotiations, writing that employees’ “default assumption” should be that OpenAI “caved + framed it as not caving,” though he acknowledged the organization is complex and that some staff worked toward what they considered a fair outcome.