
I wonder how repeatable these kinds of outcomes are. My experience with LLMs has been that they're non-deterministic by nature, so of course they'll spit out some weird stuff at random sometimes.
We really should stop treating these tools as being intelligent in any way. These are outputs based on probabilities, not anything that has been reasoned about.
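For what it's worth, the non-determinism mostly comes down to sampling: the model scores possible next tokens, and a token is drawn from that probability distribution. Here's a minimal sketch of temperature sampling using a toy vocabulary and made-up scores (not any real model's code) just to show where the randomness enters:

```python
# Toy illustration: why LLM outputs vary from run to run.
# The model produces a probability distribution over next tokens;
# the variation comes from sampling that distribution, not from reasoning.
import math
import random

def softmax(logits, temperature=1.0):
    """Convert raw scores into a probability distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token scores for a made-up vocabulary.
vocab = ["the", "cat", "sat", "banana"]
logits = [2.0, 1.5, 1.0, 0.2]

# Sampling at temperature 1.0: repeated runs give different tokens.
probs = softmax(logits, temperature=1.0)
samples = [random.choices(vocab, weights=probs)[0] for _ in range(5)]
print("sampled:", samples)  # e.g. ['the', 'cat', 'the', 'sat', 'banana']

# Greedy decoding (temperature -> 0) always picks the most likely token:
# deterministic, but still just probabilities, not reasoning.
print("greedy:", vocab[logits.index(max(logits))])  # 'the'
```

Even with greedy decoding you can get some residual variation in practice (batching, floating-point ordering, etc.), but the point stands: it's probability mass, all the way down.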