
The article is just over a year old, but it brings up some stuff that's still super relevant right now. It's definitely worth the read.
Scientists struggle to define consciousness, AI or otherwise.
Advances in artificial intelligence are making it increasingly difficult to distinguish between uniquely human behaviors and those that can be replicated by machines. Should artificial general intelligence (AGI) arrive in full force—artificial intelligence that surpasses human intelligence—the boundary between human and computer capabilities will diminish entirely.
In recent months, a significant swath of journalistic bandwidth has been devoted to this potentially dystopian topic. If AGI machines develop the ability to consciously experience life, the moral and legal considerations we’ll need to give them will rapidly become unwieldy. They will have feelings to consider, thoughts to share, intrinsic desires, and perhaps fundamental rights as newly minted beings. On the other hand, if AI does not develop consciousness—and instead simply the capacity to out-think us in every conceivable situation—we might find ourselves subservient to a vastly superior yet sociopathic entity.
Neither potential future feels all that cozy, and both require an answer to exceptionally mind-bending questions: What exactly is consciousness? And will it remain a biological trait, or could it ultimately be shared by the AGI devices we’ve created?
  • Consciousness in Von Neumann computers
  • Consciousness in neuromorphic computers
  • A cornucopia of consciousness theories
  • Is computer consciousness no more than a futuristic daydream?
  • Is computer consciousness an inevitable reality?
  • Is it possible to unify consciousness theories?
  • Preparing for AGI, conscious or not
...
324 sats \ 0 replies \ @optimism 20h
AGI is a lie. Simulation of AGI so that it fools everyone - maybe.
The real danger is people representing LLMs as if they are similar to humans, when they're just a clever trick. Anything that can be backed up, and thus doesn't die, will always be inferior to humans because it won't have a drive to do anything.
The thing to watch out for is when humans can be backed up and given a new runtime too - i.e. a new body - and live forever. That will probably be the most dangerous thing.
We don’t know if AI can be conscious, but we must prepare for both possibilities.