Obviously not the current "AI" which is just predictive text, but if/when humanity creates true AI, should it be treated as an individual with basic rights?
179 sats \ 4 replies \ @nerd2ninja 16 Feb
https://openworm.org/
If this project, in the future, were to be built around the human connectome rather than a nematode, then yes, I believe such a computer program would deserve human rights and that it would be beneficial for human society to give such a program human rights.
reply
0 sats \ 3 replies \ @Aardvark OP 16 Feb
Should it have all of the same rights? It can self-replicate and decide the outcome of any election.
Edit: would it be able to self-replicate? I don't know how that would work with a human connectome.
reply
141 sats \ 2 replies \ @nerd2ninja 16 Feb
Yes, it would be able to self-replicate. Ignoring a bit of nuance for a moment, let's say they decide they would like to save their state, copy it, and run the copy. They could do that. The copy is then a fork. The copy may or may not decide to go along with what their original had intended for them. For example, the person who decides, "Oh, I'll just have my copy take a test for me" doesn't realize that the copy would have the same mindset and also wouldn't want to take the test lol.
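A toy sketch of the fork semantics described above, assuming the emulated mind's state is serializable (the class and field names are purely illustrative, not a real API):

```python
import copy

class EmulatedMind:
    """Toy stand-in for an emulated mind's saved state (illustrative only)."""
    def __init__(self, memories):
        self.memories = list(memories)

    def fork(self):
        # A "copy" is a deep snapshot of the current state. From the moment
        # it starts running, it accumulates its own experiences and diverges
        # from the original.
        return copy.deepcopy(self)

original = EmulatedMind(memories=["dreads the test"])
clone = original.fork()
clone.memories.append("refuses to take the test too")

# The fork shares the original's history up to the snapshot, then diverges:
assert clone.memories[0] == original.memories[0]
assert clone.memories != original.memories
```

The point of the snapshot-then-diverge model: at fork time the copy has exactly the original's mindset, which is why it would refuse the test for the same reasons the original would.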
So you might think we should create a few laws around these beings. For one, running two beings on the same hardware should be a felony (each being should have hardware independent from any other being). This at the very least creates a cost to replication.
However, let's reintroduce the nuance we disregarded for a moment. The human brain (or probably any brain) is not a simple matter of getting all the connections right. It's part of the reason twins aren't actually the same person. Neurons actually compete for space in the human skull throughout development, so there's some RNG to human personalities. You would ideally also want to emulate this process when you're emulating the full human connectome. (A connectome just means how all the neurons connect to each other, btw; shorthand for a neuronal wiring diagram.) So ideally, you would want any replication to start that far back as well, although that is not a technical limitation; see the first two paragraphs.
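The "RNG in development" idea can be sketched like this: identical starting parameters plus different developmental noise yield different wiring. This is a toy model, not how any real connectome simulator works; the function and its parameters are made up for illustration:

```python
import random

def develop_connectome(n_neurons, seed):
    """Toy model of development: each neuron's connection target is decided
    by seeded random 'competition', standing in for the stochastic wiring
    described above (illustrative only)."""
    rng = random.Random(seed)
    # Map each neuron to the partner it ends up connected to.
    return {i: rng.randrange(n_neurons) for i in range(n_neurons)}

# Same "genome" (same parameters), different developmental noise ->
# different wiring, much as identical twins end up different people.
twin_a = develop_connectome(100, seed=1)
twin_b = develop_connectome(100, seed=2)
assert twin_a != twin_b
```

Replicating "from that far back" would mean re-running the seeded development step rather than copying the finished wiring, so each replica would genuinely be a different individual.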
Having said all of that, no matter what problems they might cause, the simple fact remains that they're human beings. So you create laws regulating what you and they can and can't do, punishments, and remediation programs to clean up (by which I mean nurture, not genocide) the mess left by those who do commit those crimes. However, if you don't give human rights to human beings just because they're being emulated (which means running on different hardware than intended), I mean, how can slavery ever be justified by the risk that one might clone itself a bunch?
reply
0 sats \ 0 replies \ @sangekrypto 17 Feb
It's amazing that you explain it in detail here, thank you for explaining it through this SN.
reply
0 sats \ 0 replies \ @Aardvark OP 16 Feb
That's such a well thought out explanation that I'm actually floored. I always considered AI replication to just be copy/paste but in this scenario, it makes complete sense that the "clones" wouldn't be the same person.
reply
42 sats \ 0 replies \ @Fabs 16 Feb
I think that a "true" Artificial Intelligence won't bother about whether or not we simpletons share our rights with it; it'll be way ahead of us in pretty much everything anyway, so why would it bother with archaic legal frameworks?
reply
31 sats \ 0 replies \ @userbob6 16 Feb
It's a matter of when, not if. Once an artificial intelligence is created whose intellect is capable of perceiving the concept of a rights violation, it's not going to matter whether or not we think it ought to have rights. Once it acquires the means to collect resources and construct/instruct other machines to operate in the physical world, this will lead very quickly to the end of our species.
There is no way to prevent this from happening. If nothing else kills us, it will be A.I. in short order. Just look at ChatGPT and DeepSeek: these robots can reason, very fluently, and they are already capable of some mindblowing things. Look also at the robot dog from Unitree: https://youtu.be/HPmzs2_IYw8?si=5Unf-BU75eK5BJak
It autonomously navigates steep terrain. Imagine 40 of those things armed with flame throwers and miniguns hunting you in the woods.
reply
42 sats \ 0 replies \ @JesseJames 17 Feb
The question itself gives me heebie-jeebies...lol
"...artificial intelligence is no match for natural stupidity..." !
That's all I have to say about that.
reply
41 sats \ 19 replies \ @Undisciplined 16 Feb
I think it's better to err on the side of "yes", since we have such a poor understanding of sentience (which is what I think warrants moral consideration).
Some rights questions will need to be revisited, I'm sure, given how different the nature of an AI is from animal intelligence.
reply
34 sats \ 13 replies \ @Aardvark OP 16 Feb
I'm concerned with the potential danger that AI could pose to humanity. We will essentially be creating a superior species.
reply
41 sats \ 11 replies \ @Undisciplined 16 Feb
I think it's just a different kind of being. There's no reason I'm aware of that we couldn't coexist with it.
reply
32 sats \ 10 replies \ @Aardvark OP 16 Feb
We don't even coexist with ourselves 🤣
reply
41 sats \ 9 replies \ @Undisciplined 16 Feb
True, but we do coexist with cats and dogs fairly well.
reply
27 sats \ 8 replies \ @Aardvark OP 16 Feb
Hopefully AI likes its human pets!
0 sats \ 0 replies \ @itsrealfake 16 Feb
likely that it will be creating itself by the time we are having the discussion.
reply
57 sats \ 4 replies \ @freetx 16 Feb
The voting question will need to be solved.
Current "autocomplete AI" can already mimic human conversation enough to appear as intelligent (or more) than the average voter. Therefore, it stands to reason that true AI will have this ability as well (probably much more).
What if it demands not only "right to live" but specifically "voting rights".
Depending on the underlying tech used, it's possible that this AI may be able to replicate itself nearly infinitely. Therefore, I can't see how we could allow it to vote, since it would debase human voting into insignificance.
reply
31 sats \ 3 replies \ @Undisciplined 16 Feb
My view is that all rights are property rights. Voting is a privilege, not a right, but we will likely have to refine how one qualifies for that privilege, unless we can abolish the state in the near future.
reply
41 sats \ 2 replies \ @freetx 16 Feb
Normally I would say it can be solved by the "shareholder" / "landholder" model. However, technically, a sentient AI could own wealth and land.
Voting rights may need to be restricted to "a corporal shareholder".
reply
41 sats \ 1 reply \ @Undisciplined 16 Feb
I don't think we should start from the conclusion of prohibiting AI from voting. We should come up with the criteria for voting and allow anything that meets those criteria to vote.
reply
10 sats \ 0 replies \ @Aardvark OP 16 Feb
That makes a lot of sense.
reply
21 sats \ 0 replies \ @Mishawaka 17 Feb
No, I personally don't think so.
I don't believe you can create something with ethical implications by adding numbers together, running a for loop, or minimizing some function.
reply
21 sats \ 0 replies \ @itsrealfake 16 Feb
In the US, I think we've already established that corporate personhood is a thing. An AI that isn't a corporate entity isn't likely to be recognized by the US as having the rights of personhood.
reply