The rapid evolution of artificial intelligence (AI) has brought to the fore ethical questions that were once confined to the realms of science fiction: if AI systems could one day ‘think’ like humans, for example, would they also be able to have subjective experiences like humans? Would they experience suffering, and, if so, would humanity be equipped to properly care for them?
Some point out that failing to recognize that an AI system has become conscious could lead people to neglect it and, in doing so, harm it or cause it to suffer.
Some think that, at this stage, the very idea of AI welfare is laughable. Others remain sceptical but say that it doesn’t hurt to start planning.
What do you think? Will AI ever become conscious? If it does, would you care for it or destroy it?