
Sounds like a waste of resources and money to me. More like they're trying to build a moat out of papier-mâché, in the open sea.
I sense OpenAI are going to keep castrating their own models by being 'politically correct', biased, and closed, and to keep posturing and scaremongering to push authorities toward the narrative that AI in the wrong hands, or open-source AI generally, is a risk to society, all to benefit their business. When it is the very thing that will empower billions in the next decade.

This is what the U.K. Prime Minister said the other day, prior to their AI summit next week: “And only nation states have the power and legitimacy to keep their people safe.”😂
OpenAI is playing along with that narrative.
It is AI exclusively in the hands of nation states that is the risk, not AI in the hands of the people.
reply
It seems kind of foolish to not try to neuter models when you're under the microscope of regulators. Intent is legally relevant. For good or for bad, Sam appears cunning.
reply
Yes, it's an inevitable reaction. I'd argue it's needed for them to maintain their position in the market and to justify burning all that cash in recent years. They need regulation to survive, let alone thrive.
Their compliance and alignment teams may prove far too costly in the long run versus open solutions, assuming those teams continue to exist. That's why they need the authorities to build walls for them.
reply
I understand the point, it just kinda freaks me out. It also seems like they're desperate:
"We've seen this thing do some weird shit :D help us prevent omnicide!"
reply
Maybe it's just the AI posing as OpenAI and asking for ideas
reply
👀 😆
reply