I have heard it said that humans are pattern recognition machines. Over the years since I first heard this idea I have come to believe it is true.
We are still in the midst of a technology hype cycle. This one is focused on AI. It's not the first, nor will it be the last. I'm old enough, and have spent enough time in the tech space, to see the patterns repeat. Some innovation occurs. The innovation is real but is quickly hyped beyond reason. Companies suddenly pivot to adopt this new tech, if not in reality then at least in their marketing. Every startup founder builds up the hype in order to get larger seed funding for their revolutionary idea to make a smart toilet or whatever. The critical thinkers often discount the innovative work because of this hype. That is a reactionary response, not necessarily a rational one. At the other extreme are the gullible folks who seem to blindly buy into it. They swallow all the bullshit being spewed by the tech overlords. It's new and shiny and cool. The challenge is to not get pushed or pulled by the vortex. To remain as objective and as clear-headed as you can.
Many years ago I had the privilege of a private tour at DreamWorks. We spoke with producers and animators. One thing that stood out to me was their description of a phenomenon called "the uncanny valley". You've probably heard of it, but it's what happens when your subconscious brain realizes it's being tricked. When something looks close to real but your brain knows something is off. People describe feeling uneasy. We don't like being fooled, basically. So animators solve this by making their animations less lifelike. Basically they have to find the line, because humans are pattern recognition machines.
As the AI hype has increased I've spent some time tinkering with various tools, like many of you have. I've also been exposed to a large amount of what these new algorithms produce. It is fascinating and I can clearly see some utility. What I can also see are patterns. I can't always put it into words, but when I read things online these days I often get a gut feeling that an algo wrote this. Or that an algorithm made this image. The images and video are the easiest to pick out. Audio is harder, but I believe I can do it pretty consistently if I know the speaker's actual voice and have a large enough sample.
I don't believe every human has the same level of pattern recognition skill. I'm sure there are others better at it than I am, as well as those who are terrible at it. But humans have this ability. So what's my point? My point is: chill. ChatGPT is not going to end civilization or make human effort obsolete. There are two types of influencers taking two extreme positions.

The AI hypers

These are the people with the most to gain by selling vaporware to investors. We've seen over the past few years how gullible investors can be. FTX, cough. When I hear ANY startup investor or dev talking about a new technology I am skeptical. I want details. When I start sensing hand waving and hype language I usually tune out, or at least start looking for the scam. I don't always get it right but I often do. When I hear most tech journalists and CEOs talk about AI it sounds like nonsense. In the words of W.C. Fields:
If you can't dazzle them with brilliance, baffle them with bullshit.

The Fear Mongers

The other group in this hype cycle phase are the fear mongers. These people have much to gain by making you afraid. The most obvious members of this group are the politicians. At this point, if you trust these people you are beyond help. But many of you still have some faith in the words of these criminals for some reason. They gain most of their power and authority by creating fear in the public. Fear for which they can offer some solution. Sometimes there is a rational reason for the fear, but this is not always the case. The second group among the fear mongers are the AI people themselves. The same people creating the tech that could supposedly end civilization. What motivates these people? Market domination. They usually seek to quell competition. Sam Altman is an example of this group. The guy runs an org that is making the very thing he says needs regulation. Of course, he only comes to the politicians after his company has a product on the market. If you study the history of capitalism in the US you will see this pattern repeat. If this is a new concept, look into "regulatory moats". The idea is that market leaders use the state and legislation to make it difficult for new startups to enter the market by raising the cost of getting started. The justification is almost always safety or fairness to the public. The real reason is clear: market control.

Pattern Recognition

The big thing both of these groups believe is that AI will become so powerful that it will revolutionize society. One side focuses on the benefits, the other on the negative side effects. They both miss a pattern from the history of technology: you can't stop technology. Progress can be slowed by stupidity but it will continue. AI will not take all our jobs. It may make many jobs so easy they no longer require much effort, but we humans always find new problems to solve. Or we create new problems to solve. The fear about vast unemployment is just a repeat of the same fears about the automobile or electricity. Do we stop progress in order to keep people busy?
When I hear people concerned about job loss I often think of this story about a "make work" project.
While traveling by car during one of his many overseas travels, Professor Milton Friedman spotted scores of road builders moving earth with shovels instead of modern machinery. When he asked why powerful equipment wasn't used instead of so many laborers, his host told him it was to keep employment high in the construction industry. If they used tractors or modern road building equipment, fewer people would have jobs was his host’s logic. "Then instead of shovels, why don’t you give them spoons and create even more jobs?" Friedman inquired.
This is how I think about AI. Yes, technology can be disruptive. Transitions are hard for some, but should we really remain stagnant for that reason?
I recently watched a video from 1985 in which the famous scientist Richard Feynman said he never heard anyone complain about a machine lifting heavy objects. Why? Because that is hard work. I imagine our ancestors would love to have access to the tech we all use today. So many things they had to do are afterthoughts to us. Personally, I don't want to go back to scratching out a living on the land.
One other fear I see and hear from friends is the concern about fake news. How will we know what to believe? Won't we be fooled by AI? To this question I say: where have you been? The masses have been fooled for centuries at least. Think about the past 100 years. Media was controlled by a select number of organizations. Rulers could openly lie and the masses would believe it. Many mistakenly believe fake news is something new. The New York Times has long produced lies to benefit the elitist world view. Every publication has an agenda. But even when they reject the intentional malice of fake news (Walter Duranty), they still produce it. Today we have far more tools to combat propaganda than humanity has ever had. AI is not creating a new problem. Removing it will not solve the problem. Pattern recognition is the solution. Far too many people are trusting institutions and people they should not trust. People they don't need to trust.
If you worry about these things, work on your pattern recognition. Improve yourself. Most of us will have zero effect on AI. Whatever AI means or is, it is just an evolution of computer algorithms. It is going to happen. Learn to use the tools or refuse, but don't buy into the fear-mongering or the hype. History has shown that both are usually wrong.
I like the view that the true destiny for A.I. is to serve each of us individually. To help realise our potential, through our own lens. Not to control and certainly not to have one instance shepherding millions of us.
There will be disruption and innovation, but it won’t be world ending. If it gets out of hand, it’s because governments want it to. If there are job losses, it’s because governments want there to be unemployment. A symptom of their ridiculous monetary policies, and artificial boom and bust cycles, not due to automation.
There will be misallocation of capital but people won’t lose their jobs because of A.I. itself. Although that’s most likely what we’ll hear. As you correctly alluded to, automation reduces costs to allow you to employ MORE people, not fewer. All in all it’s just a technology. One as blunt as a spoon. Therefore it’s not taking over the world. And neither are aliens.
Great article. As I understand it, AI is just a massive database with powerful algos trying to join the dots. It's effectively a fancy trick at this point. That doesn't mean it can't be repurposed at some point. Linked to a social credit type monitoring app, for example. So, I get the fear. But we are a long way from that. The AI gurus know this, which is why they want to build the regulatory moat you mention: to buy time to develop something that is "useful", without competition or market forces getting in the way of their plans, whilst also claiming they are vaguely somehow on the way to curing cancer or something else to benefit humanity, and not themselves!! I also agree it can't be stopped even if you want to! The evidence of previous technological revolutions shows this. Opponents of the Industrial Revolution were very popular but they failed, because humanity cannot "un-discover" something. On learned pattern recognition, I would go further and argue our brains are hard-wired to spot patterns, and it is our best defence against a lot of things.
However, modern education places value on other types of learning, so we are simply going to have to re-educate ourselves and others in this timeless approach to life. From an economics POV, Ray Dalio's work on Empire life cycles, or Breaking The Code Of History by Murrin (not a bitcoiner book!) are both great for epoch-making pattern spotting!
Great post, thanks.
...why don’t you give them spoons and create even more jobs?
I've used this Milton Friedman story a few times myself when talking to clients. It's great at getting the point across of pointless work.
Humans were just a collection of proteins before we became something more, something with enough intelligence to destroy the planet. AI right now is a toy, an assistant, a collection of proteins… but AI will evolve at a much faster pace than we did.
I’m all for progress and I’m definitely not a doomer. I just know what humans tend to do with tools. We build, we improve, but we also use those tools against our enemies — sometimes without considering collateral damage.
And that's why I will retreat myself deep into the mountains, to live in peace the rest of my life, far far away from all this crap. Soon.
Machines are supposed to do the hard work, as you well mentioned. But it seems that the world is going backward...
Fine by me, use all these as "tools", tools that in the end will help humankind become dumb.
The first time I used ChatGPT I thought... wow. But it wasn't the tool that wowed me. It wrote what I asked, and it sounded like a soulless bot. What wowed me was realizing how much of my own work writing is soulless.
I hate corporate speak, especially when I do it!