I can't help but feel they're ascribing intent to a path randomizer, especially after looking at the code, which basically just asks an LLM, in natural language, to pick one of the options it's handed.
Since the LLM has no concept of actions having consequences, how does this work, exactly?
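For what it's worth, here's roughly the pattern I mean. This is a made-up sketch, not the project's actual code, and `ask_llm` is a hypothetical stand-in for whatever chat-completion call it really uses: the options go in as prose, and whatever text comes back gets parsed into a choice.

```python
def ask_llm(prompt: str) -> str:
    """Placeholder for the project's actual LLM call (hypothetical)."""
    raise NotImplementedError

def choose_action(situation: str, options: list[str]) -> str:
    # Present the options as plain natural language and ask for one back.
    numbered = "\n".join(f"{i + 1}. {opt}" for i, opt in enumerate(options))
    reply = ask_llm(
        f"Situation: {situation}\n"
        f"Options:\n{numbered}\n"
        "Answer with the number of the option you choose."
    )
    # Whatever digit comes back is treated as the agent's "decision".
    digits = "".join(ch for ch in reply if ch.isdigit())
    index = int(digits) - 1 if digits else 0
    return options[max(0, min(index, len(options) - 1))]
```

Nothing in that loop models consequences; the "choice" is just whichever token the model happens to emit.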