I can adjust the weights of an LLM so it only says evil things, just like I can fill a database with only evil things, or write a book or a website about evil things. The problem is that not enough time is spent rethinking what "alignment" should actually mean:
  • My personalized knowledge base should not know Harry Potter, because I don't care about that shit. It should just fetch knowledge from the internet and tell me what's what.
  • My coding tool doesn't have to be polite; it should just do what it's told.
  • My summarization agent should not be PC; it should literally summarize what is in the article, and if that's offensive, then so be it (see the sketch right after this list).
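To be concrete about "alignment as a user-level setting": a minimal sketch, assuming an OpenAI-compatible chat API. The model name and the prompt wording are just placeholders; the point is that the preference lives in my own prompt, not in vendor-baked behavior.

```python
# Sketch: the "alignment" I want is a user-supplied instruction, not a
# vendor-wide policy. Assumes an OpenAI-compatible chat API and that
# OPENAI_API_KEY is set in the environment; model name is a placeholder.
from openai import OpenAI

client = OpenAI()

def literal_summary(article_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat-capable model works
        messages=[
            {
                "role": "system",
                "content": (
                    "Summarize the article exactly as written. Do not soften, "
                    "omit, or reframe claims, even offensive ones. No commentary."
                ),
            },
            {"role": "user", "content": article_text},
        ],
    )
    return response.choices[0].message.content

# Usage: print(literal_summary(open("article.txt").read()))
```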
But since most of the AI-as-a-service CEOs have an imaginary hardon the size of the Eiffel Tower for AGI, they aren't thinking like that. They are faking it until they make AGI, and they will likely fail no matter how much money they throw at it, because they haven't even figured out the "I" part yet.
Yeah, I know the media loves some crazy stories to get attention, and then you've got the marketing doing its thing. But like I said before: at the end of the day, it's still just an LLM.