pull down to refresh

That is a bit of a loaded question.
So many people don't understand how LLMs work - or the state of AI in general - and this confusion is deliberately promulgated by large AI companies because it helps their investment outlook.
The current batch of AI is simply using the same "typing prediction" tech that your cellphone uses... it's just that the models are 1,000,000x bigger.
My point is that there is no generalized "intelligence" in those systems, other than the intelligence of the humans who generated the corpus of data they were trained against.
It's just predicting likely tokens (words) based upon its model.
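To make the "typing prediction" point concrete, here's a toy sketch (nothing like a real LLM's architecture, and the corpus is invented for the example): a bigram model that predicts the next word purely from counts over human-written text. The "model" is just statistics of what humans wrote.

```python
from collections import Counter, defaultdict

# Invented toy corpus standing in for human-generated training data.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: this table IS the whole "model".
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(word):
    """Return the word most often seen after `word` in the corpus."""
    return counts[word].most_common(1)[0][0]

print(predict("the"))  # prints "cat" -- it followed "the" most often
```

The prediction contains no understanding of cats or mats; it only reflects the frequencies in the human-written corpus, which is the commenter's point scaled down a million-fold.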
The "problem" is that companies are now connecting these systems together (using real humans) in ways that act as a pretty good simulacrum of actual intelligence.
However, without the human-generated corpus of data, and without active, continual human effort to string this all together, it's just a computer with some fast chips inside it.
I understand this is not exactly the question you're asking, and I do find these "simulated intelligences" useful, but it was a mistake for us to allow the developers of these systems to promote them as "AI" (we should've insisted on some other moniker), since it fools people into thinking they're something they're not.
To answer your basic point, though: IT, programming, law, journalism, etc. are all going to be transformed by this tech, but it's going to come with lots of downsides. Primarily, general creativity will plateau: every new app will be like every other new app, and every legal argument will be like every other legal argument.
In fact, this plateauing of creativity will accelerate as future AI models start to train on data that includes regurgitated AI content.
Yeah, for true intelligence it won't be any time soon, or possibly ever. I do agree a different moniker would be helpful.
reply
As far as "general creativity will plateau"... we have already reached that point with journalism, law, and medicine.
Students who graduate from schools of journalism or law or medicine are regurgitating machines. They don't think or have independent thought.
reply
However, without the human-generated corpus of data, and without active, continual human effort to string this all together, it's just a computer with some fast chips inside it.
I'm not so sure about that. I think the main innovation they've shown is how to convert unstructured data (or data with difficult-to-define structure) into machine-usable forms using neural nets. Then those neural nets can be "trained" quite generically using a system of carrots and sticks. I believe the computers will be able to optimize against those carrots and sticks even without new human-generated corpora.
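The "carrots and sticks" idea can be sketched in a few lines. This is a deliberately tiny stand-in for reward-driven training (the reward function and target value are invented for the example): once humans have defined the reward, the optimizer improves against it with no new human text at all, just trial and error.

```python
import random

random.seed(0)

# The "carrot": humans design this reward once; maximizing it needs
# no further human-generated data. The peak at x = 3.0 is arbitrary.
def reward(x):
    return -(x - 3.0) ** 2

# Simple hill climbing: try a small random change, keep it only if
# the reward improves. Crude, but it is optimization against a signal.
x = 0.0
for _ in range(2000):
    candidate = x + random.gauss(0, 0.1)
    if reward(candidate) > reward(x):
        x = candidate

print(round(x, 2))  # converges near 3.0
```

Real training uses gradients rather than random search, but the shape is the same: the system grinds toward whatever the reward definition favors, which is also why the choice of carrot matters so much.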
While I think a lot of it sounds like magic to folks, and a lot of the hype is overblown, I truly think there is something revolutionary about where we are with AI technology.
reply
I truly think there is something revolutionary about where we are with AI technology.
Agree
Then those neural nets can be "trained" quite generically using a system of carrots and sticks. I believe the computers will be able to optimize against those carrots and sticks even without new human-generated corpora.
My point on "needing continual human effort" is that there is still a human (a team, in fact) deciding all of that. The resulting AIs are not deciding, "Hmmm... today I'm going to go learn how nuclear engineering works." They have no consciousness. No aesthetics. No desires.
I know you weren't suggesting they did have any of those things; I'm simply making clear that without constant human direction, they are incapable of doing anything. They are a screwdriver or a buzzsaw.
The danger is that a naive public will believe that it has agency and ascribe to it all manner of deference.
reply