where earlier text in a prompt loses significance - especially with reasoning on, where the reasoning latches onto a few select things and then hyper-focuses on them sequentially
Pretty much the "teapot test" that autocorrect fails against.
The teapot test gave a group of young kids 3 items: a ruler, a teapot, and an office desk, and asked them to draw a circle using only those items. The kids pretty much instantly realized that the bottom of the teapot was a circle and simply traced it (the office desk was intentionally chosen to be useless for the task).
Autocorrect, however, gets "fixated" on the ruler. This is because the corpus of data linking "rulers => drawing" is orders of magnitude larger than the other connections... so it spends an inordinate amount of time trying to work out how to draw a circle with a ruler. It eventually does succeed, but obviously it's doing it "the stupid way".
This highlights a bigger problem with AI going forward: as more and more AI-generated data winds up online, more and more of it will wind up in training data... it's a pretty big problem that sorta threatens the entire premise. I suppose careful curation of training data will be the only solution.
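Roughly the kind of "careful curation" I mean is sketched below - a toy filtering pass where the detector is a made-up heuristic stand-in, not a real AI-text classifier; in practice you'd plug in a trained detector or provenance signals (crawl date, source allowlists, etc.):

```python
# Toy curation pass: keep pre-cutoff documents, score newer ones with a
# (hypothetical) AI-text heuristic and drop the likely-generated ones.
from dataclasses import dataclass


@dataclass
class Document:
    url: str
    text: str
    crawl_year: int


def looks_ai_generated(doc: Document) -> float:
    """Made-up score in [0, 1]; a real pipeline would use a trained classifier."""
    boilerplate = ("as an ai language model", "it is important to note")
    hits = sum(phrase in doc.text.lower() for phrase in boilerplate)
    return min(1.0, hits * 0.5)


def curate(corpus: list[Document], cutoff_year: int = 2022, threshold: float = 0.5) -> list[Document]:
    """Keep documents crawled before the cutoff outright; filter newer ones by score."""
    return [
        doc for doc in corpus
        if doc.crawl_year < cutoff_year or looks_ai_generated(doc) < threshold
    ]


if __name__ == "__main__":
    corpus = [
        Document("example.org/a", "Hand-written notes on tracing circles with a teapot.", 2019),
        Document("example.org/b", "As an AI language model, I cannot draw circles.", 2024),
    ]
    print([d.url for d in curate(corpus)])  # ['example.org/a']
```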
I saw a paper arguing that the models are cheating by memorizing the exact test questions: if you add extraneous information to a question the model previously answered correctly, it gets confused by the extra information and answers wrong.
memory
(maybe not exactly RAG (#1026495), but something lean? I still like the memory graphing... but can't get it to perform as I'd like)
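Roughly the kind of lean thing I have in mind - plain keyword overlap over short notes, no embeddings or vector store; the names (MemoryStore, remember, recall) are just illustrative, not from any particular library:

```python
# Minimal "lean memory" sketch: store short notes, retrieve by keyword overlap.
import re
from collections import Counter


def _tokens(text: str) -> Counter:
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))


class MemoryStore:
    def __init__(self) -> None:
        self.notes: list[str] = []

    def remember(self, note: str) -> None:
        self.notes.append(note)

    def recall(self, query: str, k: int = 3) -> list[str]:
        """Return the k notes sharing the most tokens with the query."""
        q = _tokens(query)
        ranked = sorted(self.notes, key=lambda n: -sum((q & _tokens(n)).values()))
        return ranked[:k]


if __name__ == "__main__":
    mem = MemoryStore()
    mem.remember("User prefers short answers.")
    mem.remember("Project uses Rust for the backend.")
    print(mem.recall("what language is the backend written in?"))
```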