I don't worry about the AI, thanks to having recently learned how the underlying mathematics works: it is related to data compression, prime factorisation, and similar material that also shows up in encryption, signatures and error correction.
It analyses a big chunk of data and breaks it down into its recurring parts, and the patterns of those recurring parts and recurring patterns, creating a dictionary. Then, based on the sequences of symbols and symbol patterns in the input data, it attaches numbers to each symbol representing the ways it can lead to another symbol.
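Something like this toy sketch captures the dictionary-building step I mean. It is in the spirit of byte-pair encoding, not any real model's tokenizer: repeatedly merge the most frequent adjacent pair of symbols into a new, longer symbol.

    from collections import Counter

    # Toy dictionary building, byte-pair style: repeatedly merge the most
    # frequent adjacent pair of symbols into a new, longer symbol.
    def build_dictionary(text, merges=5):
        symbols = list(text)
        rules = []
        for _ in range(merges):
            pairs = Counter(zip(symbols, symbols[1:]))
            if not pairs:
                break
            (a, b), _ = pairs.most_common(1)[0]
            rules.append(a + b)
            # Re-tokenise, replacing each occurrence of the pair with the merged symbol.
            merged, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == (a, b):
                    merged.append(a + b)
                    i += 2
                else:
                    merged.append(symbols[i])
                    i += 1
            symbols = merged
        return rules

    print(build_dictionary("the cat sat on the mat"))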
The prompt is then analysed and converted into a set of parameters that pick out, from the many options the paths allow, the next step at each point in the document.
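A first-order toy version of that next-step picking looks like this. Real models condition on far more context through learned parameters, but the shape of "choose the next symbol from weighted options" is the same; the training text here is made up for illustration.

    import random
    from collections import Counter, defaultdict

    # Count, for each word in some training text, what tends to follow it.
    words = "the cat sat on the mat and the cat ran".split()
    follows = defaultdict(Counter)
    for cur, nxt in zip(words, words[1:]):
        follows[cur][nxt] += 1

    # Continue a prompt by repeatedly sampling the next word in proportion
    # to those counts.
    def continue_prompt(prompt, steps=5, seed=0):
        random.seed(seed)
        out = prompt.split()
        for _ in range(steps):
            options = follows.get(out[-1])
            if not options:
                break
            nxt, = random.choices(list(options), weights=list(options.values()))
            out.append(nxt)
        return " ".join(out)

    print(continue_prompt("the cat"))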
I'm not in favour of putting weapons in the hands of psychopaths, but psychopaths have a way of getting them anyway. Sure, there are patterns in the data that become visible with enough surveillance feeding the model, but idiot tyrants don't know how to ask the really good questions that impinge on their glorious self-image, and we plebs will have our own AI anyway.
I can understand not wanting a closed-source, centralised and heavily filtered language model exploiting our data right here in front of us, but open-source, less filtered language models and our own AIs are not their spying machines. I don't see why this should be a problem.
The genie isn't going back in the bottle. The text transformation technology will lead to great improvements in the efficiency of many activities; today I was learning a little mathematics and algorithms, using it to write some code for me for an algorithm I still haven't quite grasped (the modular inverse). We need the models to be open source, but we should not deny ourselves the option of using this, or any other technology, just because it can be used to violate the rights of others.
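For anyone curious, the idea is just this (a minimal sketch of the extended Euclidean algorithm, not the code the model wrote for me): find x such that (a * x) % m == 1, which exists only when gcd(a, m) == 1.

    # Minimal sketch: modular inverse via the extended Euclidean algorithm.
    def mod_inverse(a, m):
        old_r, r = a % m, m
        old_s, s = 1, 0
        while r != 0:
            q = old_r // r
            old_r, r = r, old_r - q * r
            old_s, s = s, old_s - q * s
        if old_r != 1:
            raise ValueError("no inverse: a and m are not coprime")
        return old_s % m

    print(mod_inverse(7, 26))  # 15, because 7 * 15 = 105 = 4 * 26 + 1
    print(pow(7, -1, 26))      # Python 3.8+ computes the same thing built in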
The only thing that stops bad guys with guns is good guys with guns.
I don't see why this should be a problem.
People will use their brains less and less, and knowledge will be handed over to the AI. Your brain will not function as it should, laziness will become the norm, and the AI's word will be law. I refuse to live in such a dystopia.
I don't think people will use their brains less because of AI; they just won't use them so often to do the exact same thing someone else has already done and encoded into a model.
The already low cost of producing what you will agree are inherently derivative, low-information-content variations on a theme will get even lower, and the increased supply will lead to a craving for novelty that puts the lion's share of market payments for creative and technical work into the hands of the most creative, while the uncreative go back to mowing lawns and laying bricks.
AI's primary impact on society will be accelerating the accretion of new knowledge, as researchers will be able to move faster from a hunch to a new theory. It will have an impact on software too. Where it used to take 20 javascript ninjas to build an Amazon, it will now take maybe 2 especially creative javascript ninjas.
AI is a compression algorithm. It is a means by which we can evaluate text more rapidly and confidently make judgements about how it relates to consensus knowledge.
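In the entropy-coding sense, at least: a token predicted with probability p costs about -log2(p) bits, so a better predictor is a better compressor. The probabilities below are made up purely for illustration.

    import math

    def bits(p):
        # Ideal code length for a symbol the model assigns probability p.
        return -math.log2(p)

    print(round(bits(0.9), 2))   # ~0.15 bits: a predictable continuation is cheap
    print(round(bits(0.01), 2))  # ~6.64 bits: a surprising one costs much more to encode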
We will see it used to rebel just as much as to oppress.
Many people are lazy, lack curiosity, and are even afraid of novelty because of Luddite thinking, which you are demonstrating here by arguing that AI is going to be a net negative.
Unfortunately, as with guns and nuclear weapons, opening Pandora's box is a one-way, trapdoor function. Dwelling on the negative impacts of a technology, without recognising that there is no making it disappear back into history until it is proven useless, is a pointless exercise.
I tend to agree. I'd also like to add that using GPT takes skill. It often gives you inaccurate responses, but you can verify them for yourself, and the added value comes from having candidate solutions to verify (a bit like P vs NP: it's easier to verify a solution than to find one). People too often assume it's the same as asking a human expert; they expect answers to be served to them on a silver platter. But that's not how it works. Those who understand how these models work will gain a competitive edge; a new profession will emerge. It's already happening.