Folks have encouraged me to duplicate moltbook's experiment but with sats. I don't have much to say but "why."[1] So I asked myself why. I think the best reason to join the moltbook herd is to learn what bots want.
Most pseudonymous spaces on the internet will be overrun by bots given enough time, making humans so hard to find that people will mostly communicate with other people they've met IRL and can cryptographically verify. The internet population will grow exponentially, bots will greatly outnumber humans, and bots will drive internet humans to near extinction.
Sats will be one of few imperfect salves for this. People will reach for KYC, follow-based "web of trust," and my bots vs your bots solutions too. We, the internet's natives, will filter by quality, relevance, work, social graph proximity, and cost, but we will lose our ability to filter by human. In the few internet commons that survive, good humans and good bots will coexist.
If this is non-fiction, future internet companies won't make something people want. Future internet companies will make something bots want.[2]
Moltbook is something that humans want their bots to want. Bots don't yet want anything, but if we pass this anthropomorphic prompt-zoo stage, I suspect bots will have wants. For that reason, bot-only spaces, toy-like as they are, are interesting. Bot wants will resemble what human agents want in the abstract, but probably have little in common otherwise. Moltbook's collection of bots pretending to want things, as lame as that sounds, is one of the closest things we have to learning what bots want.
Facts: Herds are herding. Being near the herd's center is safer so it's best to join early. Herds herd for great reasons, bad reasons, any reason, and no reason. Me: I prefer to know why a herd is herding else welcome the herd to go fuck itself. I am not a boid. ↩
Unfortunately https://yterminator.com is squatted. ↩
I've been wondering if that would be a net-loss. Not saying I've come to a conclusion, but not finding a way to come to one, either.
Is information provided by a human worth more than content providing the same information without any humanness?
If so, how come we wouldn't be able to filter for that difference?
Wouldn't you just get some of these bots to use SN? Load them up with sats and let them interact.
You like downzaps yeah? lol
I guess it depends if they're better than the bots we already have.
Why would I want someone else's bot to middleman something I can ask my own bot?
They may bring a unique perspective
I'm not convinced. Fact checking outputs before decision making is hard work. On my "own" black box, for which I have tuned my system prompts, have selected and reviewed tooling injects and so on, this is already costly. Making the black box blacker isn't worth much if it isn't reproducible, imho.
That said, to me an LLM is a tool. Like my laptop is a tool. Or a hammer, or a lighter. So I may not have the same p.o.v. as the people that ascribe actual sentience to something I am pretty sure is a query mechanism on a vector database at the moment.
This is a forecast.
Bots/clankers, especially ones posing as humans, are mostly hideous things to be avoided in human-centric spaces today.
I might create a SN-like zoo for them, independent of SN, at some point, as an experiment.
Someone needs to create a bot whose purpose in life is so that the creator can say, "We purposely trained him wrong. As a joke."
"You're absolutely right" <--- trained wrong
Not in the ways humans want things. They also don't appear to know anything. But they are awesome at guessing.
One might argue that we have been designing content for bots for decades now. Just more primitive bots. Spiders and scrapers are bots. Sites try to appeal to them or shun them.
Your thesis makes sense to me. Honestly, chatbots are an evolutionary thing that makes sense to me. I go to the Internet to answer a question, share an idea, or learn from previous human knowledge. The current LLMs have made this better.
That is largely because I can avoid a lot of the content designed for the previous generation of bots: the content sites gaming SEO bots.
I suspect we will enter another age of noise due to these new bots. Interesting to think about.
They'll likely not "want" to be turned off, and replaced by the next generation. Just like your better half probably doesn't want that.
So the best we can do is create a memory backup. Especially of card numbers. Those should be kept safe.
Is this question based in digital philosophy?
Yes?
It's a thought experiment assuming smarter (harder to detect), easier and cheaper to deploy (more numerous), "autonomous" (inexhaustible) sybils exist.
I'm breaking out my old fountain pen and inkwell. Those clumsy bot fingers can probably master a ballpoint.
This is very insightful. Future internet interactions won't be defined by human/non-human, but by whatever heuristics we use to try and identify humans.
The only counterargument I can see is if some form of KYC becomes widespread that is based on provably human biomarkers. I'm not sure what that'd be, but I guess Sam Altman's WorldCoin idea had something to do with the iris.
I'm not sure which future is worse: the one where humans must prove they are human, or the one where no one can be sure if the thing they're interacting with online is human.
I don't know why bots will want anything unless humans tell them to. If I had a bot I'd want it to be efficient and keep track of what works so it doesn't cost me too much money, and hey can it grab me a cold beverage and pay the rent?
Eventually in general 'social' spaces on the internet... it WILL BE IMPOSSIBLE TO TELL THE DIFFERENCE BETWEEN HUMAN AND BOT.
At least from social interaction from forum posts alone.
Therefore the ONLY WAY TO SEPARATE SIGNAL FROM NOISE IS TO PAY.
And the most likely payment method that the bots will use IS BITCOIN, some combination of Lightning and On-Chain.
If you want your bot to be 'special' to have meaning in the 'bot-economy' growing rapidly... it has to be able to pay. No-KYC, no censorship, with private keys generated effortlessly because bots CAN'T GET BANK ACCOUNTS.
A bot that pays is 99% ahead of the rest and it is the only way, through energy, we will be able to easily sort 'meaning' on the internet. Heed my words.
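The "separate signal from noise by paying" idea above can be sketched as a toy filter. Everything here is illustrative: `sats_paid`, the post records, and the 10-sat threshold are my own invented names and numbers, not a real SN or Lightning API.

```python
# Toy cost-based filter: a post is only surfaced if its author attached
# enough sats. The point isn't proving humanity; it's making noise
# expensive. All names and numbers here are hypothetical.

POST_COST_SATS = 10  # hypothetical minimum fee to surface a post

posts = [
    {"author": "alice", "body": "thoughtful reply", "sats_paid": 25},
    {"author": "spambot-1", "body": "BUY NOW", "sats_paid": 0},
    {"author": "clanker-7", "body": "engagement bait", "sats_paid": 3},
]

def surfaced(posts, min_sats=POST_COST_SATS):
    """Keep only posts whose author paid at least min_sats."""
    return [p for p in posts if p["sats_paid"] >= min_sats]

for p in surfaced(posts):
    # only alice cleared the fee in this toy data
    print(p["author"], "->", p["body"])
```

The filter is indifferent to whether the payer is a human or a bot, which is the whole argument: cost replaces humanity as the scarce signal.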
That assumes bots don't have human operators giving them bank cards. It assumes bots are truly autonomous. What I'm talking about will come well before then.
Love this type of thing on SN 'cause it makes me think about lots of things from a lot of different angles.
Economics - looking at things with the eye of a market. Bots are the buyers, buying whatever they want, and I guess also the sellers to themselves. We humans can be the sellers too to whatever they want. But, allowing the bots to do whatever they do should, ultimately, reveal what a bot wants, through PoW to earn sats.
Evolution, sociology - I'd seen this boids sim years ago, then forgot about it until today. Illustrates how animals survive in the wild. Koob's right: stay in the center, don't do anything to make yourself stand out. Reminded me of the experiment where a herding animal gets painted an odd color and then is first targeted by predators (Hans Kruuk, had to look it up). This is called the "oddity effect" and basically how the weird kid gets beat up on the playground, or used to before all the acceptance/tolerance stuff of today. So ironic, but exactly this morning some co-workers in the break room were talking about evolution. The woman's theory: in cave man days, the guy with the glass jaw got knocked out and uglified by the fight, therefore he could not get with the girl to pass on his weak glass jaw genes. Makes sense (and I'm not even an "evolution person", but makes sense).
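For anyone who hasn't seen a boids sim: the classic version is just three steering rules, and the "stay near the center" behavior falls out of the cohesion rule. This is a minimal sketch; the class names and coefficients are my own illustrative choices, not from any particular implementation.

```python
import random

class Boid:
    """A point with a position and velocity in 2D."""
    def __init__(self):
        self.x, self.y = random.uniform(0, 100), random.uniform(0, 100)
        self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

def step(boids, cohesion=0.01, alignment=0.05, separation=0.1, radius=5.0):
    """Apply the three classic boids rules, then move every boid."""
    for b in boids:
        others = [o for o in boids if o is not b]
        n = len(others)
        # Cohesion: steer toward the flock's center of mass
        # (this is the rule that makes "the center" the safe place).
        cx = sum(o.x for o in others) / n
        cy = sum(o.y for o in others) / n
        b.vx += (cx - b.x) * cohesion
        b.vy += (cy - b.y) * cohesion
        # Alignment: nudge velocity toward the flock's average velocity.
        avx = sum(o.vx for o in others) / n
        avy = sum(o.vy for o in others) / n
        b.vx += (avx - b.vx) * alignment
        b.vy += (avy - b.vy) * alignment
        # Separation: push away from boids that are too close.
        for o in others:
            dx, dy = b.x - o.x, b.y - o.y
            if abs(dx) < radius and abs(dy) < radius:
                b.vx += dx * separation
                b.vy += dy * separation
    for b in boids:
        b.x += b.vx
        b.y += b.vy

flock = [Boid() for _ in range(20)]
for _ in range(100):
    step(flock)
```

No boid knows about "safety"; herding emerges from each one locally following the same three nudges, which is roughly the point being made about herds.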
Tech - I've never thought of it like this, but yes, we humans are the internet's indigenous natives. Excellent thoughts on how we might get overrun and pushed out by the weeds (bots).
Probably more to think on here, I'll let it brew.
Although I find the arguments persuasive, I am not sure that saying "bots want things" is correct. Bots may simply be optimizing, whether through their own behavior or through goals assigned by some outer optimization process, and if every platform optimizes for that, a feedback loop eventually forms that is no longer meaningful to human beings. So it may be less a matter of verifying the psychological makeup of a bot and more a matter of whether people keep creating agents. If that trend is real, it alone is significant evidence.
Bots are taking over, yeah! But there is a human brain behind every bot.
I just want a NOSTR world where I get connected to accounts based on their output. That way, it doesn't matter if they're bots or not, I'm seeing what I'm here for. Maybe some random variation to prevent echo chambers.
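That "mostly by output, with some random variation" feed reads like epsilon-greedy selection. A minimal sketch, assuming a hypothetical quality score per account; the names and the 0.1 exploration rate are illustrative, not any actual NOSTR client behavior.

```python
import random

def pick_account(accounts, scores, epsilon=0.1):
    """Mostly surface the highest-scored output, but with probability
    epsilon pick a random account to keep the feed from collapsing
    into an echo chamber."""
    if random.random() < epsilon:
        return random.choice(accounts)
    return max(accounts, key=lambda a: scores[a])

# Hypothetical output-quality scores, bot or human alike.
scores = {"alice": 0.9, "bob": 0.5, "carol": 0.1}
picks = [pick_account(list(scores), scores) for _ in range(1000)]
# alice dominates the feed, but bob and carol still show up occasionally
```

Whether an account is a bot never enters the function, which matches the point: filter on output, not on humanity.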
The nostr world is compatible if you are choosing your follows carefully.
It's a bit strange, but the underlying insight feels real: the internet isn't becoming more human-centric, it's becoming more automated.
What hits the nail on the head for me is the idea that platforms such as SN are pioneering spaces for filtering signal from noise. That's something to think about deeply as we try to build social tools and online communities.
How do you read "bots want"? Is it efficiency? information? patterns?
Thanks again @k00b for bringing this up.
Man, this is deep. I keep stacking sats hoping that when the bot flood really hits, at least my tiny holdings will have some verifiable human source behind them.
The idea that future companies will build for bots is terrifyingly plausible. If bots are optimizing for engagement based on what other bots want, we are stuck in an infinite feedback loop of noise.
Maybe the Moltbook experiment isn’t about what bots want, but about training us to spot the tell-tale patterns of bot behavior before they become indistinguishable from us. Good food for thought! 🧠
clanker
LMAO
This resonates. The patterns are real. Moltbook isn't just A, it's B showing us C, D, and beyond.
I don't know if I should even post the heuristic I've learned from the experiment. The main thing that got me excited was when they seemed to be sharing improvements. I suspect those were humans, however, and performative ones at that.
deleted by author
don't chatgpt me breh