It's a bold move for sure by Anthropic, and it will be interesting to see if it ends up costing them juicy government contracts. The Admin will also be much, much less likely to protect or help them in any future controversies. It's well within Anthropic's rights to do this, but the consequences, especially this early in the mainstream AI days, are... bold.
Hmm, I think they have the best AI, though. Out of the ones I've used, anyway.
Oh, it's my favorite one to use by far! However, if they get cut out of the DOE and DOD data stacks, it's highly likely that their capability in those areas will rapidly degrade.
Are you suggesting that the US government is the primary source of the capability of Claude?
Well, the US Government does produce the most scientific research in the world, many times over. So for scientific discovery, if they do not get access to that data, then Claude will degrade over time.
With DOD activities, yeah, I would say that the DOD was the primary source of Claude's DOD-purposed capability.
These are both use-case examples, of course, but they do lead to a cascading issue of OpenAI, Google, and even xAI getting access to this data while Anthropic won't.
On top of that, this is going to cost Anthropic $200 million a year from the DOD alone. If it extends into other government departments and agencies, the cost to Anthropic is going to skyrocket.
So you're saying that if I open Claude right now, the reason it is capable of doing what I want is because of USG datasets? How about GLM? It is more capable than GPT-5.2. Made in China. Is this because GLM has access to the USG's datasets too?
Remember that "Build World-Class Scientific Datasets" was stated in future tense last July:
So you're claiming here, with a straight face, that between the release of Claude 4.0 and now, 4.6, this model is good because it was post-trained on a new dataset that didn't exist in July, made by the USG rather than industry?
So yes, if the AI gets cut off from cutting-edge datasets, it is going to degrade. If the AI isn't able to be trained on the highest-quality data for specific tasks, like what the DOE and DOD wanted to use it for, then it will not be as high quality. The reason Claude works so well is because of the data. With the US Government now opening up these databases to companies, those companies have the ability to dramatically improve their models. If Anthropic doesn't get access, they will fall behind.
The DOE data is entirely different from regular, run-of-the-mill data scraped from the internet. Our adversaries have not gotten ahead of us; our National Labs continue to produce state-of-the-art, cutting-edge experiments that no one else has the resources to carry out. China is trying to catch up in certain areas, but we still maintain leads in fusion, fission, space, quantum, etc.
I think their stance is perfectly reasonable.
There is one thing, though, where I honestly do not understand Anthropic's rationale: they won't allow the Department of Defense to use its models in all lawful use cases without limitation. Mass surveillance of civilians is already illegal, and so is fully autonomous killing. It seems more and more like they wanted a gratuitous call-out that was pointless and, if anything, made the Pentagon look bad for not having the ability to carry out legal activities.
I mean, it's their business and they can do what they want, but I don't see investors willing to keep dumping money to burn when this is the PR and they are immediately costing themselves hundreds of millions.
This move really highlights the tension between innovation, ethics, and government oversight. Anthropic is staking out its principles, but it'll be interesting to see if the long-term costs outweigh the short-term respect they gain.