You aren't in a bubble; as time passes you get more inputs from the outside world
[..]
I feel like he's alluding to something like the introduction rules of the typed lambda calculus, but if the system has data from the real world (which has structure even if it's random), it can learn that new type from the data, or from other people, the same way a person would.
This is true if we break the current train-then-infer design into continuous training during inference, during downtime, all the things. But that needs a whole methodology change.
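To make the idea concrete, here's a minimal sketch of what "training folded into inference" could look like, using a toy PyTorch model. The buffer, the noisy stand-in for feedback, and the update cadence are all my placeholders, not a real recipe:

```python
# Toy sketch of "train while you infer": answer requests as they come,
# buffer the interactions, and run small gradient updates during idle time.
import torch
import torch.nn as nn

model = nn.Linear(16, 16)            # stand-in for the actual LLM
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
buffer = []                          # interactions accumulated at inference time

def infer(x: torch.Tensor) -> torch.Tensor:
    with torch.no_grad():
        y = model(x)
    # In practice the training target would come from feedback (user corrections,
    # tool results, self-supervision); a noisy copy of the output stands in here.
    buffer.append((x, y + 0.1 * torch.randn_like(y)))
    return y

def train_during_downtime(steps: int = 10) -> None:
    # Replay buffered interactions as a training signal whenever the system is idle.
    for x, target in buffer[-steps:]:
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(model(x), target)
        loss.backward()
        optimizer.step()

for _ in range(5):                   # serve a few requests...
    infer(torch.randn(1, 16))
train_during_downtime()              # ...then learn from them during downtime
```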
The closest approximation I got to, in a little brainfart experiment, was using memory-mcp across multiple agents (running different models), system-prompted to always finish a response by storing their conclusions to memory, and to "remember" (i.e. fetch) on-topic memories before answering a prompt. It doesn't work very well with small models (tool-call hallucinations still suck in the open models) and the MCP graph DB isn't that great, but I want to at least see if it goes better with the mem0 MCP. Maybe I should also try this again with the on-demand large qwen3 I have scripting for on AWS, to see if it does better than mistral 24B.
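Roughly the loop I system-prompted the agents into, sketched in Python with a plain JSON file standing in for the MCP memory backend and a stub in place of the model call. The function names are mine for illustration, not the actual memory-mcp or mem0 tool interface:

```python
# Store-then-recall loop: fetch on-topic memories before answering,
# write a conclusion back after answering. JSON file = fake memory server.
import json
from pathlib import Path

MEMORY_FILE = Path("memories.json")

def load_memories() -> list[dict]:
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []

def remember(topic: str, limit: int = 5) -> list[str]:
    # Crude keyword match; a real graph/vector store does better retrieval.
    hits = [m["conclusion"] for m in load_memories()
            if topic.lower() in m["topic"].lower()]
    return hits[:limit]

def store(topic: str, conclusion: str) -> None:
    memories = load_memories()
    memories.append({"topic": topic, "conclusion": conclusion})
    MEMORY_FILE.write_text(json.dumps(memories, indent=2))

def call_model(prompt: str) -> str:
    # Placeholder: swap in whatever model you're running (qwen3, mistral 24B, ...).
    return f"(model answer to: {prompt})"

def answer(topic: str, question: str) -> str:
    recalled = remember(topic)                 # fetch on-topic memories first
    context = "\n".join(f"- {m}" for m in recalled)
    reply = call_model(f"Known so far:\n{context}\n\nQuestion: {question}")
    store(topic, reply)                        # always end by storing a conclusion
    return reply

print(answer("agent memory", "Does shared memory help small models?"))
```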
Practically, a good AGI would have analogs to something like emotion: it would get impatient and just decide.