
I think part of the problem is that "AI safety" doesn't make much sense as a goal. Humans can't agree on what constitutes a good life or a good society, so it's almost meaningless to try to build "human values" into AI. The job must have felt like Sisyphus pushing the boulder up the hill only for it to roll back down.

Reminds me of this conversation with @k00b: #1009710

The alignment problem is undefined. See Arrow's Impossibility Theorem: no rule for aggregating individual preferences into a group preference can satisfy even a handful of basic fairness criteria. If the target is undefined, the problem is unsolvable.
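A toy sketch of the aggregation failure Arrow's theorem generalizes, the classic Condorcet cycle: three agents each hold a perfectly coherent ranking of values, yet the majority preference cycles, so there is no single "group ranking" to align anything to. The agents and value names here are made up for illustration.

```python
# Three agents, each with a strict ranking over three values
# (the classic Condorcet profile; names are illustrative).
rankings = {
    "agent_1": ["privacy", "safety", "autonomy"],
    "agent_2": ["safety", "autonomy", "privacy"],
    "agent_3": ["autonomy", "privacy", "safety"],
}

def majority_prefers(a, b):
    """True if a strict majority of agents rank value a above value b."""
    votes = sum(1 for r in rankings.values() if r.index(a) < r.index(b))
    return votes > len(rankings) / 2

# Each pairwise comparison has a clear majority winner...
print(majority_prefers("privacy", "safety"))    # True (agents 1 and 3)
print(majority_prefers("safety", "autonomy"))   # True (agents 1 and 2)
print(majority_prefers("autonomy", "privacy"))  # True (agents 2 and 3)
# ...but together they form a cycle: privacy > safety > autonomy > privacy.
# "Alignment to the group's values" has no well-defined answer here.
```

Every individual is rational; the group is not. That gap is what makes "align to human values" underspecified before any engineering even starts.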
In that sense I can understand why AI engineers don't care about alignment. I don't think I would either, if I were one. Alignment to whose values, in the end?