
Dude, can you explain what it is you do? You put yourself out in other threads as someone who "is on the ground in Washington or Congress," but everything you post is hard slanted right. Here you say "all they wanted was to use Claude within law abiding guidelines," but it has been WIDELY reported that the issue was US citizen surveillance and autonomous weapons systems. Without being indelicate, you sound like a lying asshole. Can you clarify it for me?

He is a Republican staffer. Feel free to disagree with him, as I have, but there isn't anything he said that indicates he's a liar.

reply

"The Pentagon only asked for the ability to utilize Claude within law abiding guidelines and Anthropic said no."

By any estimation, that's a lie.

reply

I'm guessing you know that the word "lawful" has a wide range of meaning and interpretation, and I don't have to tell you that.

reply

According to this administration "lawful" is anything the fuck they want to do.

reply

Exactly

reply

They target political opponents, silence dissent, constantly create double standards, and do things to enrich themselves (memecoins) while providing next to zero transparency. They are 1984 incarnate.

reply

Right, which is why what any side claims in public as to their reasons is irrelevant. If the DoD told Anthropic "ok, we won't use it for these reasons," would anyone believe that anyway? The words of politicians and the government are so completely untrustworthy that it's meaningless to read anything but theatre into the whole situation, on both Anthropic's and the DoD's side.

reply

Well... Call me a heretic, but I would prefer that the government not use classified AI tools for mass surveillance, regardless of politics.

The fact that the current leaders are SO political and, in my opinion, so opaque... makes the possibilities really quite dire. The administration, or a future admin, could say that XYZ political idea is a threat or 'destabilizing' or 'rooted in extremism...' then use AI to target those things, and I think that's wrong. Just because you can doesn't mean that you should.

16 sats \ 1 reply \ @Cje95 OP 8h

Ummm, while I don’t agree with everything the Admin is doing, let’s not ignore the demonization that occurred under Biden. Remember "two weeks to flatten the curve," or how everyone would be fixed by the vaccine, or my personal favorite... "I won’t trust a Trump vaccine," said Harris, who then said everyone should get them.

reply

The Biden people (of whom I wasn't a fan, OK?) on a bad day were probably better than these people on their best, sorry.

Does it sound reasonable to you that Anthropic would risk the destruction of their company to stop the United States government from using their product in a "lawful" way? C'mon, man...

reply

If the government came to you with a written document promising it will only use your product in a "lawful manner," would you give your product to them and trust them not to use it for evil?

My point is, lawful is meaningless, especially when it comes from any gov't spokesperson.

reply

Cool, we can split hairs and wink wink and know that the DoD will immediately take Claude, surveil US citizens, and create autonomous weapon systems. Anthropic knows it, so they are losing a $200 million contract and risking the destruction of their company by being labeled a supply chain risk. Mr. Staffer is out here dropping horseshit lies and he knows it. Just like we all know it.

reply

For it to be a lie, you'd have to demonstrate that the DoD explicitly told Anthropic that its models would be used for unlawful activities. Do you think they did that?

One can disagree with @Cje95's position on this topic without calling him a liar.

reply

That's some tedious lawyerly bullshit there, man. Dropping a slanted/partisan article and acting like it's cut and dried, like Anthropic is crazy because Hegseth and co. are just doing lawful stuff, is garbage. I call it a lie. It goes right along with other shit he's posted in the same vein. You equivocate however you like; I'll call it what it is.

16 sats \ 0 replies \ @Cje95 OP 13h

Actually, if the government wanted to hurt Anthropic, they would just cut it off from all of its databases. These AI companies all want the closely guarded government data from DOD and especially DOE. They just lost both of them, just like that. The models will degrade while Gemini, OpenAI, and even crappy xAI will be able to advance using the data inputted.

reply
66 sats \ 5 replies \ @Cje95 OP 14h

Sure, I work for the Science, Space, and Technology Majority. I staffed the AI Task Force and was even noted in the report. My portfolio now includes the Genesis Mission, so yeah, I'm pretty dialed into all of this.

If you think what I post is a hard right slant, then idk what to tell you, because I use websites that lean left all the time.

The issue boils down to this: the Pentagon cannot be bullied by private companies. They cannot have private companies able to remote in and kill programs. Anthropic wanted that ability; that's how the guardrails would have worked. You can't maintain operational security that way.

Anthropic also wanted to prevent weapons testing via simulation and experimentation with Claude. They called it autonomous weapons, but it's autonomous testing and isn't used in real-life weapons. Anthropic is fine if pharmaceutical companies do this type of research with viruses in wet labs, so yeah, they aren't being honest.

reply
271 sats \ 0 replies \ @brent 7h

There’s a massive difference between working with pharmaceutical companies to help cure disease, and trusting the least trustworthy version of the U.S. government in history not to cross ethical boundaries — one who is openly murdering people and brazenly lying to the world on live television every fucking day. Hopefully banning the best model will make their fascist takeover attempt less effective.

reply

Oh, now it's the DoD won't be bullied? I thought it was just Anthropic reacting to the DoD using it for 'lawful purposes.' I think they call that moving the goal posts, no?

reply
16 sats \ 2 replies \ @Cje95 OP 13h

What on God's green earth are you even saying? It all means the same thing. Anthropic moved the goal posts when they changed their "AI Constitution."

reply
232 sats \ 1 reply \ @DarrelXero 13h

Your original post: " The Pentagon only asked for the ability to utilize Claude within law abiding guidelines and Anthropic said no."

Now, it's about the government not being bullied by Anthropic.

Move that goal post!

You're a fucking propagandist. I mean Republican staffer. We get it.

reply
66 sats \ 0 replies \ @Cje95 OP 12h

It’s so funny how poor your critical thinking skills are. It’s called a layered issue. If something is legal, Anthropic pushes back, the Pentagon refuses to be told by a private company what is legal, and they try to pressure the Pentagon with a PR campaign into agreeing on their terms, that’s textbook bullying.

It’s not moving the goal posts at all. Honestly, with your responses here, I’m not sure you know what moving the goal posts even means, or the layers to this issue.

reply
310 sats \ 5 replies \ @kepford 15h
hard slanted right

My understanding is he works for a House rep. My guess is a Republican. Hardly hard right, though. These days "hard right" is pretty meaningless. A better, fairer description would be partisan. Maybe that's what you mean.

WIDELY reported that the issue was US citizen surveillance and autonomous weapons systems.

Reporting is almost always slanted against this administration. I'm no fan of this administration but reporting on stuff like this is often very misleading. That said, this administration is anything but worthy of trust.

The truth is we can't trust our institutions period. Right or left. At least I can't.

I don't know enough about this story with Anthropic to have an opinion but on the surface it sounds like nonsense that NOW they are concerned about the use of their tech. But, I don't know.

reply
76 sats \ 1 reply \ @DarrelXero 15h

From what I understand, Anthropic literally exists because the CEO and founder split from OpenAI over the safety of their models. It is entirely on brand for Anthropic to push back on the misuse of their product. Fair point that reporting can be and is slanted, but this is across the entire ecosystem and, I believe, confirmed by the CEO himself. For this guy to post out here that it was just Hegseth putting the screws to them for lawful use is disingenuous at a minimum and most likely an outright lie. The DoD has literally threatened to commandeer their product by force, or label them a supply chain risk and actively try to destroy their company. Does that sound innocuous to you?

reply
66 sats \ 0 replies \ @kepford 9h
Anthropic literally exists because the CEO and founder split from OpenAI because of safety of their models.

That's my understanding as well. Not a Scam Altman fan.

reply
16 sats \ 1 reply \ @Cje95 OP 14h

It is wild, because I’ve come out multiple times slamming this admin over gutting the sciences and I still get called far right 😂

You are exactly right. Anthropic went way past the point of being able to care about how their models are used. I mean, for God’s sake, they have been utilized in 2 major headline hacks, and multiple more are bubbling underneath.

reply

You're a woke lefty from where I sit 🤝

reply

Before this story leaked I thought maybe the world had gone to ****.

After this story, though, it pretty much removed all doubt, sadly.

reply