
This is a fantastic and (as the title notes) cynical take.
And so you can't help but wonder if part of the equation in this settlement wasn't decidedly more cynical. Fresh off a new massive fundraise – one in which they raised far more than they were initially targeting, I might add – Anthropic has a lot of money. More than perhaps all but one of their competitors on the startup side. By settling for $1.5B, is Anthropic sort of pulling up a drawbridge, making it so that other startups can't possibly come into their castle? I mean, am I crazy?
I'm not so sure I am. At $1.5B, there are only a handful of companies that could afford to pay such a fine. Certainly OpenAI is one. Maybe xAI. And of course all the tech giants: Apple, Amazon, Microsoft, Google, and Meta. But could any other startup that has trained models on such data afford it? Probably not.
I'd never even considered that the settlement could be an offensive weapon on Anthropic's part.
121 sats \ 1 reply \ @Scoresby 8h
Interesting strategy. It'd be a heck of a moat...
102 sats \ 0 replies \ @optimism 5h
We'll just host the open models on the darkweb right next to libgen. moat gone.
I hadn't considered this specific instance as an example of it, but I have long suspected that larger companies are more willing to support heavy-handed regulation because they are better equipped to absorb its costs than a small startup would be.
0 sats \ 0 replies \ @gmd 1h
What if you just distill from other LLMs instead of training on those texts directly...