What I learned is that computers fundamentally alter the economics of information. We now have inexpensive access to more information, and to higher quality information, than ever before. In theory, that should help individuals reach better decisions, organizations devise improved strategies, and governments craft superior policies. But that’s just a theory. Does it? The answer is “sometimes.” Unfortunately, the “sometimes not” part of the equation is now poised to unleash devastating consequences.
Reflection: among the highest achievers I've had the pleasure of working with in the past decade, many relentlessly use just-in-time information facilities. These are often people who combine a drive to find out with the persistence not to give up. Not just googling stuff; these are the people who find out what a competitor is bidding on an important RFP. Information is everything.
Few of today’s college students or recent grads have ever operated without the ability to scout ahead or query a device for information on an as-needed basis. There’s thus no reason for them to have ever developed the discipline or the practices that form the basis for learning. The deeper problem, however, is that while instant lookup may work well for facts, it’s deadly for comprehension and worse for moral thinking.
Reflection: among said high achievers, we often ran into issues with morality. The question asked was usually "can we get away with it?", not "is this the right thing to do?". The fast lane leaves no time for thinking things through beyond reaching and exceeding the agreed-upon objectives. And modern culture idolizes these people: "move fast and break things", basically disruption, has become a virtue. Yet disruption is rarely the most ethical solution to a problem: cutthroat business often involves harm to others.
New AI systems—still less than three years old—are rushing to fill that gap [of not being exposed to the reasoning behind outcomes]. They already offer explanations and projections, at times including the motives underlying given decisions. They are beginning to push into moral judgments. Of course, like all search and pattern-matching tools, these systems can only extrapolate from what they find. They thus tend to magnify whatever is popular. They’re also easy prey for some of the most basic cognitive biases. They tend to overweight the recent, the easily available, the widely repeated, and anything that confirms preconceived models.
Observation: This is still true for LLMs today, at least on my shitty 2-month-old open-source models. I had trouble retrieving a solution to a tech issue that I had previously solved and posted to both GitHub and Reddit. To save time, I prompted an LLM with the problem description. It gave me a bunch of popular solutions that didn't work, but not my published solution. I then crawled my GitHub history manually until I found it. My great solution carries no weight with modern LLMs, but the BS does.
We are rapidly entering a world in which widespread access to voluminous information is producing worse—not better—decisions and actions at all levels.
Agreed. So, stackers, how do we make better decisions without becoming modern, obsolete Luddites?