
The Trump administration has reportedly mandated that AI firms allow the government to use their models for lawful purposes.
The United States has drafted new artificial intelligence procurement guidelines amid its standoff with Anthropic, requiring AI companies to allow the government to use their models for authorized purposes, according to a Financial Times report.
The draft guidelines, prepared by the U.S. General Services Administration (GSA) and reviewed by the Financial Times, state that AI companies seeking government contracts must grant the U.S. an “irrevocable license” to use their systems. The guidance would apply to civilian contracts and strengthen the government’s approach to accessing AI services.
The draft also stated that contractors must ensure their AI systems do not intentionally encode partisan or ideological judgments in their outputs. In addition, companies must disclose whether their models have been modified or configured to comply with any non-U.S. government or commercial regulatory frameworks, the report said.
The developments followed a clash between the Department of War and Anthropic over how the company’s models could be used in military applications. The Pentagon had sought operational access to Anthropic’s AI systems, but the company declined to grant unrestricted access.
On Stocktwits, retail sentiment around Anthropic remained in the 'bearish' zone, accompanied by 'normal' chatter levels over the past day.

The Pentagon and Anthropic are at odds over how the U.S. military can use the company’s AI system, Claude. Anthropic refused to remove safeguards that prevent its technology from being used for fully autonomous weapons or mass domestic surveillance, arguing that such uses are unsafe and unethical. The Pentagon subsequently canceled its contract with the company, and on Thursday the Department of Defense formally designated Anthropic a “supply chain risk,” barring it from further defense work. Anthropic has said it will take legal action over the decision.
Speaking on the All-In Podcast on Friday, Undersecretary of Defense for Research and Engineering Emil Michael said officials were “scared” that Anthropic could restrict access to its AI models during a national security crisis.
Michael said the dispute intensified when Anthropic CEO Dario Amodei suggested Pentagon officials could call the company for exceptions if certain uses of its AI systems were needed.
Michael said such an approach would be impractical during fast-moving military scenarios, including situations tied to President Donald Trump’s Golden Dome missile defense initiative. Amodei said in a statement that he may challenge the decision in court. Anthropic is the first American company to be named a “supply chain risk.”
For updates and corrections, email newsroom[at]stocktwits[dot]com.