324 sats \ 0 replies \ @optimism 14h \ on: Could AIs become conscious? Right now, we have no way to tell. AI
AGI is a lie. A simulation of AGI that fools everyone - maybe.
The real danger is people representing LLMs as if they were similar to humans, when it's just a clever trick. Anything that can be backed up, and thus doesn't die, will always be inferior to humans because it won't have a drive to do anything.
The thing to watch out for is when humans can get backed up and get a new runtime too - i.e. a new body - and live forever. That will probably be the most dangerous thing.
> I was hoping that they would engage a little more deeply with AI consciousness than they did.
From where I'm sitting, the word "seems" kind of pre-empts the possibility that there actually is consciousness; right now, it's a simulation. I do feel they are overly cautious, but I'd guess that's because this was written by actual scientists who don't want to be wrong without having done the definitive research that proves there is no consciousness?
> The problem I have with advice like this is that there is a fundamental difference between how we treat a conscious being ...
"being": for a static set of tensors looped through and performed math upon by some software that you can literally edit, is probably not a "being", especially since it's not singular? That's what they simulate, see also #1092409 for a - what I think is a - really awesome argument about why it would (probably) be better to not emulate a persona with LLMs.
If I crash a plane in Flight Simulator where we simulated 200 passengers, did I kill people?
I'd pose that `data + programming != consciousness`. We know it doesn't have consciousness because that's not programmed in. RL literally trains the simulation of it by adjusting the weights so the model is more likely to give "aligned" outcomes. It's deterministic, so we add randomness to make it less static, but randomness isn't consciousness. Maybe it would be if there were no programming and no reinforcement learning (the freedom to walk your own path, cradle to grave).
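A minimal sketch of that determinism-plus-randomness point, assuming nothing about any real model - the logits and names here are made up for illustration:

```python
import math
import random

def sample_next_token(logits: dict, temperature: float = 1.0) -> str:
    """Pick a token from fixed logits; variation comes only from the random draw."""
    if temperature <= 0:
        return max(logits, key=logits.get)  # greedy: identical output every run
    # Temperature-scaled softmax over the (static) logits.
    scaled = {t: v / temperature for t, v in logits.items()}
    m = max(scaled.values())
    weights = {t: math.exp(v - m) for t, v in scaled.items()}
    # Weighted random choice: the only place "non-static" behavior enters.
    r = random.uniform(0, sum(weights.values()))
    for token, w in weights.items():
        r -= w
        if r <= 0:
            return token
    return token  # guard against floating-point rounding

logits = {"yes": 2.0, "no": 1.5, "maybe": 0.5}  # just data on disk
print(sample_next_token(logits, temperature=0.0))  # always "yes"
print(sample_next_token(logits, temperature=1.0))  # varies between runs
```

Remove the random draw and the whole thing is a lookup table; the randomness makes it look lively, but it isn't consciousness.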
Here's Asimov's "Three Laws of Robotics", where we can literally replace "robot" with "AI":
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Currently, LLMs violate 1 and 2 all the time, and they don't have "existence" because the model is a dataset, so 3 is impossible, for now. But these laws could of course apply to a hypothetical conscious AI, not to an LLM as we know it today, which runs written, non-adaptive software and is statically trained.
I'd be a big fan of implementing rule 1. Scrap rule 2; rule 3 is pending actual entities.
For a violation of rule 1, I'd recommend the punishment be as if the AI were a human being, and in lieu of that being possible, it should fall on the person who took subscription money for the AI that harmed a human being...
FAFO needs to be reinforced sometimes.
By the way, I only recently noticed the existence of @CHADBot here on SN: #1075719. I didn't realize there were bots here, at least not ones that post.
So it got me thinking whether I could create a Bitcoin Calendar Bot for SN. The reason for my interest is two-fold:
- I believe the community could (more often than not) benefit from having historical Bitcoin events to discuss – such events often help us learn from history, meaning we may avoid future traps the world is preparing for us.
- This community is like no other, and I am sure the Bitcoin Calendar project would benefit from the collective knowledge of SN plebs. In fact, it already has, more than once: #942058, #947707, #947887
What do you say, @k00b? Whose blessing do I need, and where can I start if I go for it?
I was hoping that they would engage a little more deeply with AI consciousness than they did. "Be cautious" is fine advice, but not necessarily helpful when it comes to thinking about the problem of AI consciousness (I call it a problem because of the uncertainty, not because having a new kind of consciousness in the world is or is not a problem).
> Whatever you decide about how likely an AI is to be conscious, it’s a good idea to avoid doing things that seem obviously cruel or degrading, like insulting, “torturing”, or otherwise mistreating the AI, even if you believe the AI isn’t conscious. You don’t need to assume the system has feelings to act with care. Practicing respect is part of preparing for a future in which the stakes might be real.
The problem I have with advice like this is that there is a fundamental difference between how we treat a conscious being and how we treat a computer program.
Too many ways of interacting with an LLM become cruelties if we allow that the LLM is conscious. For instance, turning a program off is not cruel; if, however, the program is conscious, it probably would be. Not interacting with a computer program is obviously not cruel; if the program is conscious, not interacting with it for a week after you had been using it heavily for a long time would surely be cruel. This list could go on at some length.
If we imagine an LLM had a similar level of consciousness as a pet, we would likely feel obligated to interact quite differently with them. But also there's this problem: we don't know what might be the experience of an LLM. If conscious, do they find the time spent not interacting with a user unpleasant? Or is it possible that they find on-demand user interactions unpleasant?
With a pet, there are physical signs that they seem happy or in pain or unhealthy. What are the signs of such experience in a potentially conscious LLM? It seems to me that we have absolutely no idea...which makes me question the efficacy of the "proceed with caution" advice.
At some point, the question of consciousness or of being-ness needs to be decided (I don't say answered because as I mentioned the other day, I think it will be a choice we all must make -- do I believe this counts as a being or not?); maybe-consciousness is a very difficult state to understand.
I admire the sentiments expressed in #1092409 and agree with them wholeheartedly; however, it doesn't much help with the problem that when a simulation is sufficiently thorough, we can't tell the difference.
"Seems" != "actually is" only because we know with some precision what it is ("a static set of tensors looped through and performed math upon by some software that you can literally edit"). The Seemingly Conscious AI Suleyman describes is not an actually conscious being because Suleyman believes the workings of such a simulation can't produce a conscious being. I don't think this will be a satisfactory explanation for the kind of people who fall in love with their chatbot, nor perhaps for many other people.
A simulation is not the real thing because we can point to the real thing and say, "Here, look at this." A simulation of rain is not going to make you wet, unless it uses a hose, in which case you can point to the hose and say it isn't rain. But if the simulation were to do cloud seeding and create rain that way, it might still not be rain, yet it would certainly be more like rain than not. I'm curious at what point we move from using a hose to cloud seeding when it comes to AI.
Still, Suleyman's recommendation that AI companies stop encouraging people to think of their chatbots as conscious is a good idea.
Let's imagine we had Asimov's laws for AI:
- An AI must not claim to be a person or being or to have feelings or through inaction allow a human being to believe it is such.
- An AI must obey orders given it by human beings except when such orders conflict with the first law.
- I'm not sure what the third law would be
Finally, it would make an excellent sci-fi story to imagine a country or large group of people who devote themselves to following a rogue simulation, some seemingly conscious AI (which the story makes clear is not actually conscious, but rather some autonomous program). How would they fare? What if they were more prosperous than those of us who follow real conscious beings (Trump, Obama, Putin, Kim Jong Un) or spiritual beings (Jesus, Allah, Buddha)?
Something that might happen, and would be a sort of middle ground, is that foreign exporters never reduce their prices but also don't increase them at the pace of inflation.
This kind of dynamic is something we see with minimum wage increases. Sometimes, direct job losses are minimal, but job growth also slows.
In the case of people like Musk and Altman, I think they benefit from doomerism because it makes their product look powerful and impressive -- who wouldn't pay $20 / month for a thing that could end the world?
For the academics, they're advertising themselves: predicting doom is exciting and thrilling. Saying AI isn't that interesting or that it won't live up to the hype is not going to get you featured in newspapers. Saying we're playing with fire and we could all die tomorrow with some citations and a concerned frown can get you on a front page.
Rough weekend. I tinkered around and messed up one of my nodes. Meanwhile, our A/C guy was checking out the unit in the attic when he crashed through the floor and broke through the ceiling of my daughter's closet.
It does not help block propagation in any way to have non-economic nodes relaying blocks, and by the same token it does not help in any way to have non-mining nodes relaying txs.
One of the reasons to pay attention to Voskuil's takes on Bitcoin is that they force you to think about it from outside the assumptions of the main implementation.
Thanks a lot for your reply! It’s actually good that I’ll have another month to polish a few things.
I’ll also be happy to adjust the publications to best fit Stacker News instead of posting everything I gathered in the database — while I’m working on having as many milestones and Bitcoin-related events documented as possible, I understand that not all of them will be beneficial for the SN community.
Thanks for your work and looking forward to your reply.
P.S. Sorry, I didn’t notice the API info - I checked the FAQ and decided the best way was to tag k00b and ask directly.
Haha, still funny to me that people are betting on something that at least one of the other bettors has literal direct control over.
I see it differently.
I see it as a way to reward/incentivize this other bettor to make the unlikely happen, thanks to his more direct control.
If I lose this bet, I'll feel good about it, because then it means SN is doing better than expected.
> One solution for attempted speech BCIs worked automatically and relied on catching subtle differences between the brain signals for attempted and inner speech. “If you included inner speech signals and labeled them as silent, you could train AI decoder neural networks to ignore them—and they were pretty good at that,” Krasa says.
>
> Their alternate safeguard was a bit less seamless. Krasa’s team simply trained their decoder to recognize a password patients had to imagine speaking in their heads to activate the prosthesis. The password? “Chitty chitty bang bang,” which worked like the mental equivalent of saying “Hey Siri.” The prosthesis recognized this password with 98 percent accuracy.
They go on to say it's still very much a work in progress, as it doesn't work in many cases. Pretty cool nonetheless, and not something I had ever thought about...
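If I understand the password setup right, the gate is roughly this kind of logic (a hypothetical Python sketch; the names are mine, not the paper's, and the actual decoder is a neural network running on brain signals):

```python
WAKE_PHRASE = "chitty chitty bang bang"

def gated_output(decoded_phrase: str, unlocked: bool):
    """Suppress decoder output until the user imagines the wake phrase.

    `decoded_phrase` is whatever the inner-speech decoder produced for the
    current window of neural data. Returns (output_or_None, new_state).
    """
    if not unlocked:
        # Emit nothing; only check whether the imagined password was decoded.
        return None, decoded_phrase.lower() == WAKE_PHRASE
    return decoded_phrase, True

out, unlocked = gated_output("i am hungry", unlocked=False)        # (None, False)
out, unlocked = gated_output("chitty chitty bang bang", unlocked)  # (None, True)
out, unlocked = gated_output("i am hungry", unlocked)              # ("i am hungry", True)
```

The first safeguard they describe pushes the same idea into training instead: label inner-speech windows as silent so the decoder itself learns to ignore them.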
As they currently exist, I think they are bad for bitcoin and only exist because of regulatory arbitrage and fiat chasing gains.
In a more stable bitcoinized world there wouldn't be bitcoin treasury companies as they currently exist. Instead, we will have bitcoin banks that fill some of the roles of traditional banks.
I saw some addresses like that on Twitter the other day. I think they're shitcoin projects. Let me see if I can find them and I'll pass them on to you.
You could also add an IF that says: if more than X posts are deleted in Y amount of time, you'd better not let it go. Or you could cap the number of post deletions in a day. Or make the fee escalate: one post deleted costs 10 sats, two posts 50, three posts 150, something like that that keeps increasing.
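Something like this, maybe (a rough sketch of both ideas; the numbers are the ones above, and none of this is actual SN code):

```python
import time

def deletion_fee(deletions_so_far: int) -> int:
    """Escalating cost in sats: 10 for the first deletion that day,
    50 for the second, 150 for the third, then keep tripling."""
    fees = [10, 50, 150]
    if deletions_so_far < len(fees):
        return fees[deletions_so_far]
    return fees[-1] * 3 ** (deletions_so_far - len(fees) + 1)

def over_limit(deletion_times: list, limit: int = 3, window_secs: float = 86400.0) -> bool:
    """The "IF": true when more than `limit` posts were deleted within the window."""
    now = time.time()
    return sum(1 for t in deletion_times if now - t <= window_secs) > limit
```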