I have heard it said that humans are pattern recognition machines. Over the years since I first heard this idea, I have come to believe it is true.
We are still in the midst of a technology hype cycle. This one is focused on AI. It's not the first, nor will it be the last. I'm old enough and have spent enough time in the tech space to see the patterns repeat. Some innovation occurs. The innovation is real but is quickly hyped beyond reason. Companies suddenly pivot to adopt this new tech, if not in reality then at least in their marketing. Every startup founder builds on the hype in order to get larger seed funding for their revolutionary idea to make a smart toilet or whatever. The critical thinkers often discount the innovative work because of this hype. That is a reactionary response, not necessarily a rational one. At the other extreme are the gullible folks who seem to blindly buy into it. They swallow all the bullshit being spewed by the tech overlords. It's new and shiny and cool. The challenge is to not get pushed or pulled by the vortex. To remain as objective and as clear-headed as you can.
Many years ago I had the privilege of getting a private tour at DreamWorks. We spoke with producers and animators. One thing that stood out to me was their description of a phenomenon called "the uncanny valley". You've probably heard of it, but it's what happens when your subconscious brain realizes it's being tricked. When something looks close to real but your brain knows something is off. People describe feeling uneasy. We don't like being fooled, basically. So animators solve this by making their animations less lifelike. Basically, they have to find the line, because humans are pattern recognition machines.
As the AI hype has increased I've spent some time tinkering with various tools, like many of you have. I've also been exposed to a large amount of what these new algorithms produce. It is fascinating, and I can clearly see some utility. What I can also see are patterns. I can't always put it into words, but when I read things online these days I often get a gut feeling that an algo wrote this. Or that an algorithm made this image. The images and video are the easiest to pick out. Then audio; I'd say it is harder to pick out, but I believe I can do it pretty consistently if I know the speaker's actual voice and have a large enough sample size.
I don't believe every human has the same level of pattern recognition skill. I'm sure there are others better at it than I am, as well as those who are terrible at it. But humans have this ability. So what's my point? My point is: chill. ChatGPT is not going to end civilization or make human effort obsolete. There are two types of influencers taking two extreme positions.
The AI Hypers
These are the people with the most to gain by selling vaporware to investors. We've seen over the past few years how gullible investors can be. FTX, cough. When I hear ANY startup investor or dev talking about a new technology I am skeptical. I want details. When I start sensing hand waving and hype language I usually tune out, or at least start looking for the scam. I don't always get it right, but I often do. When I hear most tech journalists and CEOs talk about AI, it sounds like nonsense. In the words of W.C. Fields:
If you can't dazzle them with brilliance, baffle them with bullshit.
The Fear Mongers
The other group in this hype cycle phase are the fear mongers. These people have much to gain by making you afraid. The most obvious members of this group are the politicians. At this point, if you trust these people you are beyond help. But many of you still have some faith in the words of these criminals for some reason. They gain most of their power and authority by creating fear in the public. Fear for which they can offer some solution. Sometimes there is a rational reason for the fear, but this is not always the case. The second group of fear mongers are the AI people themselves. The same people creating the tech they claim could end civilization. What motivates these people? Market domination. They usually seek to quell competition. Sam Altman is an example of this group. The guy runs an org that is making the very thing he says needs regulation. Of course, he only comes to the politicians after his company has a product on the market. If you study the history of capitalism in the US you will see this pattern repeat. If this is a new concept, look into "regulatory moats". The idea is that market leaders use the state and legislation to make it difficult for new startups to enter the market by raising the cost of getting started. The justification is almost always safety or fairness to the public. The real reason is clear: market control.
The big thing both of these groups believe is that AI will become so powerful that it will revolutionize society. One side focuses on the benefits, the other on the negative side effects. They both miss a pattern from the history of technology: you can't stop technology. Progress can be slowed by stupidity, but it will continue. AI will not take all our jobs. It may make many jobs so easy they no longer require much effort, but we humans always find new problems to solve. Or we create new problems to solve. The fear about vast unemployment is just a repeat of the same fears about the automobile or electricity. Do we stop progress in order to keep people busy?
When I hear people concerned about job loss I often think of this story about a "make work" project.
While traveling by car during one of his many overseas travels, Professor Milton Friedman spotted scores of road builders moving earth with shovels instead of modern machinery. When he asked why powerful equipment wasn't used instead of so many laborers, his host told him it was to keep employment high in the construction industry. If they used tractors or modern road building equipment, fewer people would have jobs was his host’s logic. "Then instead of shovels, why don’t you give them spoons and create even more jobs?" Friedman inquired.
This is how I think about AI. Yes, technology can be disruptive. Transitions are hard for some but should we really remain stagnant for these reasons?
I recently watched a video from 1985 where the famous scientist Richard Feynman said he never heard anyone complain about a machine lifting heavy objects. Why? Because this is hard work. I imagine our ancestors would love to have access to the tech we all use today. So many things they had to do are afterthoughts to us. Personally, I don't want to go back to scratching out a living on the land.
One other fear I see and hear from friends is the concern about fake news. How will we know what to believe? Won't we be fooled by AI? To this I say: where have you been? The masses have been fooled for centuries at least. Think about the past 100 years. Media was controlled by a select number of organizations. Rulers could openly lie and the masses would believe it. Many mistakenly believe fake news is a recent development. The New York Times has long produced lies to benefit the elitist world view. Every publication has an agenda. But even when they reject the intentional malice of fake news (Walter Duranty), they still produce it. Today we have far more tools to combat propaganda than humanity has ever had. AI is not creating a new problem. Removing it will not solve the problem. Pattern recognition is the solution. Far too many people are trusting institutions and people they should not trust. People they don't need to trust.
If you worry about these things, work on your pattern recognition. Improve yourself. Most of us will have zero effect on AI. Whatever AI means or is, it is just an evolution of computer algorithms. It is going to happen. Learn to use the tools or refuse, but don't buy into the fear-mongering or the hype. History has shown that both are usually wrong.