Anthropic’s AI model, Claude, was reportedly used by the US military in the barrage of strikes as the technology “shortens the kill chain” – meaning the process of target identification through to legal approval and strike launch.
Academics studying the field say AI is collapsing the planning time required for complex strikes – a phenomenon known as “decision compression”, which some fear could result in human military and legal experts merely rubber-stamping automated strike plans.
In 2024, San Francisco-based Anthropic deployed its model across the US Department of War and other national security agencies to speed up war planning. Claude became part of a system developed by the war-tech company Palantir with the Pentagon to "dramatically improve intelligence analysis and enable officials in their decision-making processes".
Wasn't there some charade recently by the CEO where he pretended to be against the use of Claude for military operations?
The objection is narrower than being "against the use in military operations" – see the rationale. I don't know how much sense the objection makes, but not being a bunch of yolobois is making Anthropic look better than OpenAI at the moment. I doubt #1446625 is truly fueled by that, though; even so, having a spine, even a superficial one, is a massive moat that Sam has completely lost.
Reputational loss is the only real damage you can do to an organization. But reputation is both fluid and subjective: what the "DoW" hates, the rest of the world may like, and vice versa. The weakest sauce right now is OpenAI, which has no spine at all.