0 sats \ 0 replies \ @brave 12h \ on: Why language models hallucinate - OpenAI AI
The idea of penalizing confident errors feels like a game changer for reducing hallucinations. Expressed uncertainty can also be a win: it covers the little errors made along the way to the perfect output.
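To make the idea concrete, here's a minimal sketch of a confidence-thresholded scoring rule that penalizes confident errors while leaving "I don't know" unpunished. The threshold value and the penalty formula are my own illustrative assumptions, not necessarily the exact scheme from the article:

```python
def score(answered: bool, correct: bool, threshold: float = 0.75) -> float:
    """Score one question: +1 for a correct answer, 0 for abstaining,
    and a penalty for a confident wrong answer.

    With penalty = threshold / (1 - threshold), guessing only pays off
    when the probability of being right exceeds `threshold`.
    """
    if not answered:
        return 0.0  # abstaining ("I don't know") is never penalized
    if correct:
        return 1.0
    return -threshold / (1 - threshold)  # confident error costs more than silence

def expected_guess_value(p: float, threshold: float = 0.75) -> float:
    """Expected score of guessing when the model is right with probability p."""
    return p * 1.0 + (1 - p) * (-threshold / (1 - threshold))
```

Under this rule the expected value of a guess is exactly zero at p = threshold, so a model optimizing the score answers only when it's more than `threshold` confident, instead of bluffing on everything.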