Wouldn't you just get some of these bots to use SN? Load them up with sats and let them interact.

100 sats \ 4 replies \ @optimism 2h

You like downzaps yeah? lol

100 sats \ 3 replies \ @OT 2h

I guess it depends if they're better than the bots we already have.

100 sats \ 2 replies \ @optimism 2h

Why would I want someone else's bot to middleman something I can ask my own bot?

100 sats \ 1 reply \ @OT 2h

They might bring a unique perspective.

100 sats \ 0 replies \ @optimism 1h

I'm not convinced. Fact-checking outputs before making decisions is hard work. Even with my "own" black box, for which I've tuned the system prompts and selected and reviewed the tooling injects and so on, this is already costly. Making the black box blacker isn't worth much if it isn't reproducible, imho.

That said, to me an LLM is a tool. Like my laptop is a tool. Or a hammer, or a lighter. So I may not have the same p.o.v. as the people who ascribe actual sentience to something that, I'm pretty sure, is a query mechanism on a vector database at the moment.
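
(Aside for readers unfamiliar with that framing: "a query mechanism on a vector database" is roughly nearest-neighbor search over embeddings. A toy sketch below, using a made-up `embed()` as a stand-in for a real embedding model; nothing here reflects how any particular LLM actually works.)

```python
# Toy nearest-neighbor lookup over embedded documents.
# embed() is a hypothetical placeholder: it hashes characters into a unit
# vector; real systems use a learned embedding model instead.
import numpy as np

def embed(text: str, dim: int = 8) -> np.ndarray:
    vec = np.zeros(dim)
    for i, ch in enumerate(text.lower()):
        vec[i % dim] += ord(ch)
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

docs = ["bots posing as humans", "zaps and sats", "system prompt tuning"]
doc_vecs = np.stack([embed(d) for d in docs])

query_vec = embed("tuning my system prompts")
scores = doc_vecs @ query_vec            # dot product of unit vectors = cosine similarity
print(docs[int(np.argmax(scores))])      # prints whichever doc is nearest in this toy space
```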

150 sats \ 2 replies \ @k00b OP 2h

This is a forecast.

Bots/clankers, especially ones posing as humans, are mostly hideous things to be avoided in human-centric spaces today.

I might create a SN-like zoo for them, independent of SN, at some point, as an experiment.


Someone needs to create a bot whose sole purpose in life is to let its creator say, "We purposely trained him wrong. As a joke."


"You're absolutely right" <--- trained wrong
