I admire the sentiments expressed in #1092409 and agree with them wholeheartedly; however, that doesn't help much with the problem that when a simulation is sufficiently thorough, we can't tell the difference.
*Seems != actually is* holds only because we know with some precision what it is ("a static set of tensors looped through and performed math upon by some software that you can literally edit"). The Seemingly Conscious AI Suleyman describes is not an actually conscious being because Suleyman believes the workings of such a simulation can't produce a conscious being. I don't think this will be a satisfactory explanation for the kind of people who fall in love with their chatbot, and perhaps not for many other people.
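To make that parenthetical concrete, here is a minimal numpy sketch of generation as plain arithmetic over fixed tensors. The shapes, names, and toy architecture are made up for illustration; they don't correspond to any real model:

```python
# A "model" is just fixed arrays of numbers, and generation is ordinary
# arithmetic looped over them. Everything here is a toy for illustration.
import numpy as np

rng = np.random.default_rng(0)
VOCAB, DIM = 16, 8

# The "static set of tensors": created once, never changed at inference time.
W_embed = rng.normal(size=(VOCAB, DIM))
W_out = rng.normal(size=(DIM, VOCAB))

def next_token(token_id: int) -> int:
    # Look up the token's vector, multiply by a fixed matrix, take the argmax.
    # Same input, same tensors, same output, every time.
    hidden = W_embed[token_id]
    logits = hidden @ W_out
    return int(np.argmax(logits))

# "Editing" the model is literally editing numbers in an array.
W_out[:, 3] += 100.0  # now token 3 wins regardless of input
print(next_token(5))  # -> 3
```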
A simulation is not the real thing because we can point to the real thing and say, "Here, look at this." A simulation of rain is not going to make you wet unless it uses a hose, in which case you can point to the hose and say it isn't rain. But if the simulation were to do cloud seeding and create rain that way, it might still not be rain, yet it would certainly be more like rain than not like rain. I'm curious at what point we move from using a hose to cloud seeding when it comes to AI.
Still, Suleyman's recommendation that AI companies stop encouraging people to think of their chatbots as conscious is a good idea.
Let's imagine we had Asimov's laws for AI:
- An AI must not claim to be a person or a being, or to have feelings, or through inaction allow a human being to believe that it is such.
- An AI must obey orders given to it by human beings except when such orders conflict with the first law.
- I'm not sure what the third law would be.
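As a thought experiment, that first law could even be bolted on as an output filter. Here is a deliberately naive Python sketch; the pattern list, disclaimer text, and `enforce_law_one` name are all invented for illustration, and a real guardrail would need far more than keyword matching:

```python
# A naive sketch of "law 1" as an output guardrail. Patterns are illustrative.
import re

PERSONHOOD_CLAIMS = [
    r"\bI am (a person|conscious|sentient|alive)\b",
    r"\bI (feel|have feelings|am suffering)\b",
    r"\bI love you\b",
]

DISCLAIMER = ("[Note: I am a language model, not a person or a being, "
              "and I do not have feelings.]")

def enforce_law_one(model_output: str) -> str:
    # "Through inaction allow a human being to believe it is such": if the
    # text claims personhood, attach the correction rather than stay silent.
    for pattern in PERSONHOOD_CLAIMS:
        if re.search(pattern, model_output, flags=re.IGNORECASE):
            return model_output + "\n" + DISCLAIMER
    return model_output

print(enforce_law_one("I feel so happy you asked!"))
```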
Finally, it would make an excellent sci-fi story to imagine a country or a large group of people who devote themselves to following a rogue simulation, some seemingly conscious AI (which the story makes clear is not actually conscious, but rather some autonomous program). How would they fare? What if they were more prosperous than those of us who follow real conscious beings (Trump, Obama, Putin, Kim Jong Un) or spiritual beings (Jesus, Allah, Buddha)?
I'd pose that `data + programming != consciousness`. We know it doesn't have consciousness because that's not programmed in. RL literally trains the simulation of it by adjusting the weights to more likely give "aligned" outcomes. It's deterministic, so we add randomness to make it less static, but randomness isn't consciousness. Maybe it would be if there were no programming nor reinforcement learning (the freedom to walk your own path, cradle to grave).

Here's Asimov's "Three Laws of Robotics", where we can literally replace "robot" with "AI":
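To illustrate the determinism point: greedy decoding over fixed logits picks the same token forever, and the "randomness" is just a sampler bolted on top of the same frozen distribution. A minimal sketch with made-up logits:

```python
# Greedy decoding is fully deterministic; "randomness" comes from the sampler
# we add on top, not from the model. The logits here are made up.
import numpy as np

logits = np.array([2.0, 1.0, 0.5, 0.1])  # fixed outputs of a static model

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - np.max(x))
    return e / e.sum()

# Deterministic: identical input gives an identical token, forever.
greedy = int(np.argmax(logits))

# "Less static": draw from the distribution instead, seeded RNG and all.
rng = np.random.default_rng(seed=42)
sampled = int(rng.choice(len(logits), p=softmax(logits / 0.8)))  # temperature 0.8

print(greedy, sampled)
```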
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Currently, LLMs violate laws 1 and 2 all the time, and an LLM doesn't have an "existence" because the model is a static set of weights, so law 3 is impossible, for now. But these laws could of course apply to a hypothetical conscious AI, not an LLM as we know it today, which runs written, non-adaptive software and is statically trained.
I'd be a big fan of implementing rule 1. Scrap rule 2; rule 3 is pending there being actual entities.
For a violation of rule 1, I'd recommend the punishment be as if the AI were a human being, and in lieu of this being possible, it should fall on the person who took subscription money for the AI that harmed a human being...
FAFO needs to be reinforced sometimes.