I feel like there is a lot of hand-waving going on with his formal symbols. A few criticisms I would make:
- Embodiment might be a requirement (it could be virtual), but AGI needs to experience time so it can plan through it.
- Just because something doesn't exist doesn't stop you from approximating it (Q-function learning).
- You aren't in a bubble: as time passes you get more inputs from the outside world, and not just visual or audio data but information from the other agents in the society. Maybe we learn how to be conscious from the other conscious people around us, and we all just get better at approximating until no one can tell the difference, but you're really just replaying remixes of all the previous people you have ever seen. This would actually be a solution to his problem, where he seems to say AI can't learn new types. I feel like he's alluding to the introduction rules of the typed lambda calculus, but if it has data from the real world (which has structure even if it's random), it can learn that new type from the data or from other people the same way a person would.
- I also get vibes of Zeno's paradox, where the algorithm is caught in an infinite regress: to consider one action it must consider all the possible outcomes, but then it must consider all the outcomes of the outcomes. Practically, a good AGI would have analogs to something like emotion: it would get impatient and just decide. For example, if an LLM isn't coming up with valid solutions it could turn its temperature up and start taking more erratic actions; this might not solve the problem, but it breaks the loop it was caught in (see the sketch after this list). It could be seen as akin to a person losing their temper and exploding: they had no solution to their problem as a calm individual, so they changed the conditions and solved the problem, no matter how much they regret it later when rationally reflecting on the situation.
- Also, more generally, LLMs seem to be universal function approximators, so I feel like whatever humans are doing can be created by stacking a bunch of layers of them together, just like a Fourier series can approximate almost anything with a bunch of sine waves.
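To make the "get impatient and just decide" point concrete, here is a minimal sketch of a temperature-escalation loop. The `generate` and `is_valid` functions are hypothetical stand-ins (not from the post or any particular library); the only point is that repeated failure at low temperature raises the temperature, and when patience runs out the controller commits to whatever it has.

```python
import random

def generate(prompt: str, temperature: float) -> str:
    """Hypothetical stand-in for an LLM call; higher temperature -> more erratic output."""
    candidates = ["invalid plan", "half-baked idea", "valid plan A", "valid plan B"]
    # crude simulation: at low temperature the model keeps returning the same useless answer
    if temperature < 0.8:
        return candidates[0]
    return random.choice(candidates)

def is_valid(answer: str) -> bool:
    """Hypothetical validity check (e.g. a schema check or a verifier model)."""
    return answer.startswith("valid")

def impatient_solve(prompt: str, max_attempts: int = 10) -> str:
    temperature = 0.2
    best_so_far = ""
    for _ in range(max_attempts):
        answer = generate(prompt, temperature)
        best_so_far = answer
        if is_valid(answer):
            return answer
        # "losing its temper": each failure raises the temperature,
        # trading coherence for a chance to escape the loop
        temperature = min(temperature + 0.3, 1.5)
    # out of patience: just decide, even without a validated answer
    return best_so_far

if __name__ == "__main__":
    print(impatient_solve("plan my day"))
```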
reply
You aren't in a bubble: as time passes you get more inputs from the outside world [..] I feel like he's alluding to the introduction rules of the typed lambda calculus, but if it has data from the real world (which has structure even if it's random), it can learn that new type from the data or from other people the same way a person would.
This is true if we break the current design of train-then-infer into continuous training during inference, downtime, all the things. But that needs a whole methodology change.
The closest approximation I guess I got to in a little brainfart experiment I ran was to use memory-mcp across multiple agents (of different models), where I system-prompted them to always finish a response by storing conclusions to memory, and to "remember" (= fetch) on-topic memories before answering a prompt. It doesn't work very well with small models (tooling hallucinations still suck in the open models) and the MCP graph DB isn't that great, but I want to at least see if it goes better with the mem0 MCP, and maybe I should try this again with my on-demand large Qwen3 I have scripting for on AWS, to see if it does better than Mistral 24B. Something like the sketch below is what I mean.
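A minimal sketch of that store-then-recall loop, with a plain in-memory list standing in for the MCP graph DB / mem0 backend and a stubbed `llm` call (both are assumptions for illustration, not the actual tooling):

```python
from dataclasses import dataclass, field

@dataclass
class MemoryStore:
    """Stand-in for the MCP-backed memory (graph DB, mem0, etc.)."""
    entries: list[str] = field(default_factory=list)

    def store(self, conclusion: str) -> None:
        self.entries.append(conclusion)

    def recall(self, topic: str, limit: int = 3) -> list[str]:
        # naive keyword match; a real backend would do embedding or graph lookups
        hits = [e for e in self.entries
                if any(w in e.lower() for w in topic.lower().split())]
        return hits[:limit]

def llm(system: str, prompt: str) -> str:
    """Hypothetical model call; returns 'answer ||| conclusion to remember'."""
    return f"stub answer to '{prompt}' ||| concluded something about {prompt}"

SYSTEM = ("Before answering, consider the MEMORIES provided. "
          "Always end your response with a one-line conclusion to store.")

def ask(memory: MemoryStore, prompt: str) -> str:
    memories = memory.recall(prompt)               # "remember" before answering
    context = "MEMORIES:\n" + "\n".join(memories)
    answer, _, conclusion = llm(SYSTEM, context + "\n\n" + prompt).partition(" ||| ")
    memory.store(conclusion)                       # store the conclusion after answering
    return answer

if __name__ == "__main__":
    mem = MemoryStore()
    print(ask(mem, "continuous training during inference"))
    print(ask(mem, "training during inference, again"))  # second call sees the stored conclusion
```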
Practically, a good AGI would have analogs to something like emotion: it would get impatient and just decide.
How would you approach that?
reply
I'm reading that next.
reply
Arrow's Impossibility Theorem
I like the way this is reasoned (though lotsa maffs...)