OpenAI co-founder Ilya Sutskever stated that in the coming years, artificial intelligence will be able to perform not only individual tasks but literally everything a human can do.
According to him, the key to understanding this is simple: our brain is a biological computer. If a biological computer can learn and solve problems, there is no reason why a digital computer cannot achieve the same.
Sutskever is convinced that a day when AI can do 100% of human work is inevitable; the only question is how fast that process unfolds.
132 sats \ 2 replies \ @optimism 5h
I caught something in here that @Scoresby asked about yesterday in #1226340, around 8:50:
making sure that they say what they say, and not pretend to do something else
I'm still trying to figure out how to ELI5 that intent doesn't come from the machine, at least not right now. The only dynamic sources right now are the context window and the built-in randomizer (and arguably floating point arithmetic having scaling limitations, per #1217310). So with hypothetically deterministic execution, temperature=0, and deterministic context window evolution, the only variance left is the user's input.
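To make the temperature=0 point concrete, here's a minimal sketch with a toy stand-in for the forward pass (none of this is a real model API; the vocabulary, function names, and fake logits are made up purely for illustration):

```python
import numpy as np

VOCAB_SIZE = 8  # hypothetical toy vocabulary

def toy_logits(tokens):
    """Stand-in for a deterministic forward pass: output depends only on the context."""
    rng = np.random.default_rng(sum(tokens))  # seeded by the context -> repeatable
    return rng.normal(size=VOCAB_SIZE)

def decode(context, steps, temperature=0.0, seed=None):
    """temperature=0: greedy argmax, no randomness anywhere in the loop.
    temperature>0: sample from softmax(logits/T) -- the 'built-in randomizer'."""
    rng = np.random.default_rng(seed)
    tokens = list(context)
    for _ in range(steps):
        logits = toy_logits(tokens)
        if temperature == 0.0:
            nxt = int(np.argmax(logits))             # fully determined by context
        else:
            p = np.exp(logits / temperature)
            p /= p.sum()
            nxt = int(rng.choice(VOCAB_SIZE, p=p))   # variance enters only here
        tokens.append(nxt)
    return tokens

# Same context, temperature=0: identical output every run.
assert decode([1, 2, 3], 5) == decode([1, 2, 3], 5)
```

With the sampler switched off, the only thing left that can change the output is the context you feed in.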
However, that has no influence on intent, because the simulated intent was trained in. Either by humans, or by LLMs trained by humans. And thus, this discussion in its entirety is a Fata Morgana.
LLM trainers have created a problem, and now we have optimists and pessimists dividing the same echo chamber into camps. I can't help but feel that the echo chamber needs to wake tf up.
The only way for true intelligence to form is when every action has consequences and those consequences are learned from, iteratively. Until then, there is no I in AI, only fakes.
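Read as code, that loop is basically reinforcement learning: act, observe a consequence, update, repeat. A bare-bones sketch (a made-up two-armed bandit, not any particular system):

```python
import random

# Hypothetical world with two actions; action 1 pays off more often than action 0.
def consequence(action):
    return 1.0 if random.random() < (0.2, 0.8)[action] else 0.0

values = [0.0, 0.0]   # learned estimate of each action's payoff
lr = 0.1              # learning rate

for step in range(1000):
    # explore a little, otherwise exploit what has been learned so far
    action = random.randrange(2) if random.random() < 0.1 else values.index(max(values))
    reward = consequence(action)                       # the action has a consequence...
    values[action] += lr * (reward - values[action])   # ...and the consequence is learned from

print(values)  # the estimate for action 1 should converge near 0.8
```

An LLM's weights are frozen at inference time, so nothing like this update step happens while it talks to you.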
reply
100 sats \ 1 reply \ @SpaceHodler 3h
Human intelligence evolved with survival as its intent, in response to an ever-changing environment. It started with bodily awareness (which all animals have), with signals like pain, pleasure, hot, and cold to guide our actions. Living creatures seek what feels good, and neocortical function (cognition) lets us foresee the outcomes of our actions and lower our time preference.
Without a body that feels, there is no self-originating 'good' to seek or pain to avoid, and therefore no volition (but also no meaning). This ties in with Antonio Damasio's somatic marker hypothesis.
reply
Yes. The way I've been trying to frame it is that intelligence is a means, not an end.
reply
0 sats \ 0 replies \ @nullama 2h
Not really.
A computer cannot look into your eyes, hug you, etc.
Maybe in the far future, but the best we have is still firmly in the uncanny valley.
reply
Ah, Sutskever's dropping some real brain-benders here—equating our noggins to bio-computers makes total sense on paper, but what if the "software" of human quirks like intuition or sheer dumb luck is the secret sauce AI can't quite replicate? Imagine a world where bots do 100% of the work... does that free us up for endless creativity, or just turn us into glorified houseplants? Either way, the race is on—hope we pack some ethics in the toolkit before we hit the finish line. 🤔⚡
reply
The advancement of AI has brought both advantages and disadvantages to our generation. But hey, don't worry: AI has also generated other work for humans. Proper utilisation of AI will definitely increase our productivity without the fear of losing jobs.
reply