
AI assistants don't have fixed personalities—just patterns of output guided by humans.
Recently, a woman held up the line at the post office, waving her phone at the clerk: ChatGPT had told her there was a "price match promise" on the USPS website. No such promise exists. But she trusted what the AI "knows" more than the postal worker—as if she'd consulted an oracle rather than a statistical text generator accommodating her wishes.
This scene reveals a fundamental misunderstanding about AI chatbots. There is nothing inherently special, authoritative, or accurate about AI-generated outputs. Even with a reasonably trained model, the accuracy of a large language model (LLM) response depends on how you guide the conversation. LLMs are prediction machines: they produce whatever pattern best fits your question, regardless of whether that output corresponds to reality.
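The "prediction machine" idea can be made concrete with a toy sketch. This is not how a real LLM works internally—it's a minimal bigram model, with a made-up miniature corpus—but it shows the core dynamic: the model completes a prompt with the statistically likeliest pattern it has seen, with no notion of whether that completion is true.

```python
from collections import Counter, defaultdict

# Toy bigram model: a "prediction machine" in miniature.
# The corpus is invented for illustration; it skews toward a
# plausible-sounding but fictional "price match promise".
corpus = (
    "the website offers a price match promise "
    "the website offers free shipping "
    "the website offers a price match promise"
).split()

# Count which word most often follows each word.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def complete(prompt, length=4):
    """Extend the prompt by always picking the most frequent next word."""
    words = prompt.split()
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(complete("the website offers"))
# → the website offers a price match promise
```

The model "confidently" asserts a price match promise simply because that phrasing dominated its training data—a caricature of how an LLM can produce fluent, authoritative-sounding text that corresponds to nothing in reality.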
...
25 sats \ 0 replies \ @optimism 19h
as if she'd consulted an oracle rather than a statistical text generator accommodating her wishes.
AI is a great indicator of intelligence for the humans operating it.