
There are several predictions that it will happen in the 2020s. While I don't think it will achieve sentience any time soon, I do believe AI will become smarter than humans. My guess is by 2029, mostly because I like Ray Kurzweil and think he's made some pretty decent predictions in the past.
I think the job market is going to change drastically, and I hope AI becomes a tool that helps drive innovation. I don't think (hopefully) that tons of white-collar jobs will get automated away; they will just be different.
170 sats \ 5 replies \ @freetx 25 Jan
That is a bit of a loaded question.
So many people don't understand how LLMs work - or the state of AI in general - and this confusion is deliberately promulgated by large AI companies because it helps their investment outlook.
The current batch of AI is simply using the same "typing prediction" tech that your cellphone uses....it's just that the models are 1,000,000x bigger.
My point on that is there is no generalized "intelligence" in those systems....other than the intelligence of the humans who generated the corpus of data it was trained against.
It's just predicting likely tokens (words) based upon its model.
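To make that concrete, here's a toy sketch of next-token prediction (a bigram counter standing in for the giant neural net, so nothing like a real LLM's internals, but the same job description):

```python
from collections import Counter, defaultdict

# Toy next-token prediction: count which word follows which in a tiny
# corpus, then always emit the most likely successor. Real LLMs do the
# same job with a neural net over subword tokens and billions of
# parameters, but the objective is the same.
corpus = "the cat sat on the mat the cat ran".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent token seen after `word` in training."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

# Generate a short continuation by repeatedly predicting the next token.
word, output = "the", ["the"]
for _ in range(4):
    word = predict_next(word)
    if word is None:
        break
    output.append(word)

print(" ".join(output))  # e.g. "the cat sat on the"
```

A real model swaps the counting table for a transformer, but given what came before, it's still just emitting a likely continuation.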
The "problem" is that companies are now connecting these systems together (using real humans) in ways that act as a pretty good simulacrum of actual intelligence.
However without the human-generated corpus of data, and without the active continual human effort to string this all together, it's just a computer with some fast chips inside of it.
I understand this is not exactly the question you're asking, and I do find these "simulated intelligences" useful, but it was a mistake for us to allow the developers of these systems to promote them as "AI" (we should've insisted on some other moniker), since it fools people into thinking it's something that it's not.
To answer your basic point though: IT, programming, law, journalism, etc. are all going to be transformed by this tech....but it's going to come with lots of downsides...primarily that general creativity will plateau. That is, every new app will be like every other new app.....every legal argument will be like every other legal argument.
In fact, this plateauing of creativity will accelerate as future AI models start to train on data that includes regurgitated AI content.
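You can see the flavor of that feedback loop in a toy simulation (researchers call it "model collapse"; the distributions and numbers here are invented purely for illustration):

```python
import random
import statistics

# Toy illustration of training on regurgitated output: each generation
# fits a normal distribution to the previous generation's samples, then
# the next generation trains only on its own output. Estimation noise
# compounds, and the spread of the data tends to shrink over many
# generations (any single run is noisy; try other seeds).
random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(10)]  # original human corpus

for generation in range(61):
    mu, sigma = statistics.fmean(data), statistics.stdev(data)
    if generation % 15 == 0:
        print(f"gen {generation:2d}: stdev={sigma:.3f}")
    data = [random.gauss(mu, sigma) for _ in range(10)]  # self-training
```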
reply
Yea, true intelligence won't happen any time soon, possibly ever. I do agree a different moniker would be helpful.
reply
As far as "general creativity will plateau" goes... we have already reached that point with journalism, law, and medicine.
Students who graduate from schools of journalism or law or medicine are regurgitating machines. They don't think or have independent thought.
reply
However without the human-generated corpus of data, and without the active continual human effort to string this all together, it's just a computer with some fast chips inside of it.
I'm not so sure about that. I think the main innovation they've shown is how to convert unstructured data (or data whose structure is difficult to define) to machine-usable forms using neural nets. Then those neural nets can be "trained" quite generically using a system of carrots and sticks. I believe the computers will be able to optimize against those carrots and sticks even without new human-generated corpora.
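A toy sketch of what I mean by carrots and sticks (a bare hill-climbing loop; real setups use reward models, self-play, or automated verifiers, but the point is that no new human-written text is needed):

```python
import random

# Minimal "carrots and sticks" training: the system improves against a
# programmatic reward signal with no new human-generated data. The
# reward here (count of 1-bits) is a stand-in for any automated scorer,
# e.g. "does the code compile", "did the agent win the game".
random.seed(42)

def reward(candidate):
    return sum(candidate)  # carrot: higher score for more 1-bits

def mutate(candidate):
    flipped = candidate[:]
    i = random.randrange(len(flipped))
    flipped[i] ^= 1  # flip one random bit
    return flipped

best = [random.randint(0, 1) for _ in range(20)]
for step in range(500):
    challenger = mutate(best)
    if reward(challenger) >= reward(best):  # keep whatever scores better
        best = challenger

print(f"best score: {reward(best)} / 20")  # converges toward the maximum
```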
While I think a lot of it sounds like magic to folks, and a lot of the hype is overblown, I truly think there is something revolutionary about where we are with AI technology.
reply
I truly think there is something revolutionary about where we are with AI technology.
Agree
Then those neural nets can be "trained" quite generically using a system of carrots and sticks. I believe the computers will be able to optimize against those carrots and sticks even without new human-generated corpora.
My point on "needing continual human effort" is that there is still a human (a team, in fact) deciding all of that. The resulting AIs are not deciding "hmmm....today I'm going to go learn how nuclear engineering works". They have no consciousness. No aesthetics. No desires.
I know you were not suggesting they did have any of those things, I'm simply making clear that without constant human direction, they are incapable of doing anything. They are a screwdriver or a buzzsaw.
The danger is that a naive public will believe that it has agency and ascribe to it all manner of deference.
reply
Yes, Ray Kurzweil predicted the singularity by 2045.
I read a lot of blogs, and one of my favorites is Tim Urban's. The guy will change the way you think about everything. I'd recommend reading two of his posts, in which he covers ANI (current AI, also called weak AI), AGI (about to be achieved), and ASI (nobody's sure what capabilities it could have, so it's very hypothetical). Once we achieve ASI, it's said it could think on its own, and the AI world would have reached the point of singularity, where its intelligence keeps increasing over time and it could even create more of itself or advance further. That's something more advanced than the human brain, and the scary thing is that from that point on, AI might decide its own fate, since it would no longer need to rely on humans. But the ASI thing is very much hypothetical for now, so nobody's completely sure.
Here's the blog:
Also, you should watch this TED Talk by Ilya Sutskever (one of the leading minds in the AGI field).
reply
Not anytime soon. We are always great with new ideas (one of the best human features), but we suck at implementation. I can foresee wider adoption of better agents (with actual knowledge) and robots that carry heavy items (in combat or otherwise), but human-level intelligence? Naa man, not in our lifetime is my bet.
reply