104 sats \ 4 replies \ @Scoresby OP 10 Jul \ parent \ on: Why do people find it so exciting when LLMs say outrageous things? AI
Outrage focused on <pick your model>'s naughty outputs doesn't seem as valuable to me as outrage that the model just made up a line and added it to my dataset. It's just that we don't find the latter kind of mistake outrageous.
Some people may want their LLM to talk about Hitler a certain way, and others may want it to always use inclusive language, but I assume that almost everybody wants the model not to invent things without telling us.
The morality hype may put pressure on the big players, but it's not necessarily pressure to make their models more reliable or more useful. It may just be pressure to make their models insipid when dealing with certain topics.
I was thinking about this post about trust in LLMs when it comes to the code in pacemakers. The author ends with this postscript:
as I was writing it I discovered that I am truly horrified that my car's brakes will be programmed by a contractor using some local 7b model that specializes in writing MISRA C:2023 ASIL-D compliant software.
Outrage based on fuck-ups that kill or harm people is already here -- but maybe we can expand that base to include less catastrophic outcomes: "No, I'm not using a model that gets basic details wrong."