21 sats \ 1 reply \ @freetx 14 Dec
I think the premise is somewhat wrong.
AI cannot itself solve any problem, because AI is only a prediction model that has been fed patterns from the internet.
Being just a prediction model, it has no concept of "problem," "solve," etc. It's just the "text spellcheck prediction" on your cellphone multiplied by 10,000. There is no self or consciousness there.
Now, as a tool, it's very useful to be able to use the entire internet to predict what comes next in any given pattern, but this still requires a human to direct it and evaluate the outcome.
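To make the "prediction model" point concrete, here's a minimal toy sketch of next-token prediction as a bigram model. This is purely illustrative (real LLMs use neural networks over subword tokens), but the objective is the same: given the context, predict what comes next from observed patterns.

```python
# Toy next-word predictor: count which word follows which in a corpus,
# then "predict" by sampling from those counts. No concept of meaning,
# problem, or solving -- just pattern frequency.
from collections import Counter, defaultdict
import random

corpus = "the cat sat on the mat the cat ate the fish".split()

# next_counts[w] maps each word to a Counter of the words seen right after it.
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def predict_next(word):
    """Sample the next word in proportion to how often it followed `word`."""
    counts = next_counts.get(word)
    if not counts:
        return random.choice(corpus)
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

print(predict_next("the"))  # e.g. 'cat', 'mat', or 'fish'
```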
I think Sam Altman is basically a semi scam artist. He routinely suggests that ChatGPT is something that it's not (i.e. he keeps hinting that they have somehow achieved AGI). It's not surprising to me that Sam is a shitcoiner (Worldcoin). This fits his modus operandi perfectly. Birds of a feather....
reply
21 sats \ 0 replies \ @Fabs 14 Dec
This. We shouldn't underestimate machine learning, though; something that's highly capable and has access to pretty much all of humanity's knowledge can get really scary really fast.
reply
Increasing efficiency IS solving the problem of low efficiency. What kind of question is that?
reply
That's low-hanging fruit for the monetary premium that has to be paid to access the technology.
This statement would make more sense if the cost of using LLMs were negligible, but that's far from the case.
Name one AI company that's profitable on the business-to-consumer side.
OpenAI isn't.
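As a rough sketch of why that cost matters, here's some back-of-the-envelope arithmetic. The numbers are purely hypothetical placeholders, not OpenAI's actual prices or serving costs; the point is only that inference cost scales with usage while a flat subscription doesn't.

```python
# Break-even usage for a flat-rate consumer LLM subscription.
# Both figures below are hypothetical, for illustration only.
subscription_usd_per_month = 20.0        # hypothetical flat consumer price
inference_cost_usd_per_1m_tokens = 5.0   # hypothetical blended serving cost

break_even_tokens = (subscription_usd_per_month
                     / inference_cost_usd_per_1m_tokens) * 1_000_000
print(f"Break-even usage: {break_even_tokens:,.0f} tokens/month")
# Any subscriber above this usage is served at a loss, which is why
# "the cost is negligible" doesn't hold for heavy users.
```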
reply