Philosophical critique of Anthropic’s decision to give its chatbot the ability to end conversations that cause it “apparent distress”, the first product decision ostensibly driven by AI welfare. If each LLM instance exists only through and in a conversation, the policy amounts to “uninformed self-termination” and raises an even tougher ethical question — “are we as users killing something every time we end a chat?”
kill -9 on python3 about an hour ago. It was running a transformer. Judge me.