
I caught something in here that @Scoresby asked about yesterday in #1226340, around 8:50:
> making sure that they do what they say, and not pretend to do something else
I'm still trying to figure out how to ELI5 that intent doesn't come from the machine, at least not right now. The only dynamic sources of variation today are the context window and the built-in randomizer (and arguably floating-point arithmetic with its scaling limitations, per #1217310). So with hypothetical deterministic execution, temperature=0, and deterministic context-window evolution, the only variance left is the user's input.
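To make that determinism point concrete, here's a minimal sketch. The hash-based `logits` below is a hypothetical stand-in for a trained model's forward pass, not a real LLM API. At temperature=0, sampling collapses to argmax, so with a fixed context the generation is a pure function of the input:

```python
# Toy sketch (hypothetical model): with greedy decoding (the
# temperature=0 limit) and a fixed context, the output is fully
# determined by the input -- run it twice, get the same text.
import hashlib

VOCAB = ["the", "machine", "has", "no", "intent", "."]

def logits(context: tuple[str, ...]) -> list[float]:
    # Deterministic stand-in for a trained model's forward pass:
    # the scores depend only on the context, nothing else.
    h = hashlib.sha256(" ".join(context).encode()).digest()
    return [b / 255 for b in h[: len(VOCAB)]]

def generate(prompt: str, steps: int = 5) -> str:
    context = tuple(prompt.split())
    for _ in range(steps):
        scores = logits(context)
        # temperature=0 == argmax: no randomizer, no variance.
        context += (VOCAB[scores.index(max(scores))],)
    return " ".join(context)

assert generate("hello") == generate("hello")  # same input, same output
print(generate("hello"), "|", generate("world"))  # variance comes only from the input
```

In a real deployment the randomizer (sampling) and the evolving context window reintroduce variation, but neither of those is the machine "wanting" anything.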
However, that has no influence on intent, because the simulated intent was trained in, either by humans or by LLMs that were themselves trained by humans. And thus this entire discussion is a Fata Morgana.
LLM trainers have created a problem, and now we have optimists and pessimists dividing the same echo chamber into camps. I can't help but feel that the echo chamber needs to wake tf up.
The only way for true intelligence to form is for every action to have consequences, and for those consequences to be learned from, iteratively (a minimal sketch of that loop follows below). Until then, there is no I in AI, only fakes.
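For what "consequences that are learned from, iteratively" looks like in its smallest form, here is a sketch of a closed feedback loop, a toy two-armed bandit where every name is made up for illustration. The agent acts, the environment answers, and the estimates update from that answer alone:

```python
# Minimal sketch of the loop described above: act, observe a
# consequence, update from it. Toy two-armed bandit, all hypothetical.
import random

values = [0.0, 0.0]  # learned estimates of each action's payoff
counts = [0, 0]

def consequence(action: int) -> float:
    # Hidden environment: action 1 pays off more often than action 0.
    return 1.0 if random.random() < (0.3, 0.7)[action] else 0.0

for step in range(1000):
    # Explore occasionally, otherwise exploit what was learned so far.
    a = random.randrange(2) if random.random() < 0.1 else values.index(max(values))
    r = consequence(a)                        # the action has a consequence...
    counts[a] += 1
    values[a] += (r - values[a]) / counts[a]  # ...and the consequence is learned from

print(values)  # estimates converge toward the true payoffs (~0.3, ~0.7)
```

The point is not the algorithm; it's that the update only ever happens because an action produced an observable consequence, which is exactly the loop a pretrained model frozen at inference time doesn't have.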
110 sats \ 1 reply \ @SpaceHodler 4h
Human intelligence evolved with survival as its intent, in response to an ever-changing environment. It started in bodily awareness (which all animals have), with signals like pain, pleasure, hot, cold, etc. to guide our actions. Living creatures seek what feels good, and neocortical function (cognition) allows us to foresee the outcomes of our actions and lower our time preference.
Without a body that feels, there is no self-originating 'good' to seek and no pain to avoid, and therefore no volition (but also no meaning). This ties in with Antonio Damasio's somatic marker hypothesis.
10 sats \ 0 replies \ @optimism 4h
Yes. The way I've been trying to frame it is that intelligence is a means, not an end.