
Technologist and founder Balaji Srinivasan joins to explore how the metaphors we use to describe AI—whether as god, swarm, tool, or oracle—reveal as much about us as they do about the technology itself.
Balaji, best known for his work in crypto and network states, also brings a deep background in machine learning. Together, the trio unpacks the evolution of AI discourse, from monotheistic visions of a singular AGI to polytheistic interpretations shaped by culture and context. They debate the practical and philosophical: the current limits of AI, why prompts function like high-dimensional programs, and what it really takes to “close the loop” in AI reasoning.
This is a systems-level conversation on belief, control, infrastructure, and the architectures that might govern future societies.
More insights from Balaji's newsletter here: https://balajis.com/p/ai-is-polytheistic-not-monotheistic
I like the overall take, and the reasoning about targeted disruption of proprietary models through open-weight model releases, regardless of who does it 1, matches what I've observed and how I extrapolate that phenomenon, as laid out in the tweet linked from the article:
China thinks it has an opportunity to hit US tech companies, boost its prestige, help its internal economy, and take the margins out of AI software globally (at least at the model level).
I just wonder how long it will last.
It's easy to celebrate now that FOSS is currently the weapon of choice in the global LLM race, and that there is evidence the CCP strategy is to align with open weights, but I remind myself daily that this is weaponization only, not an embrace of open-source principles. We often see that once market share is deemed sufficient (or a competitor sufficiently hurt), the tools used for capture are abandoned or weakened. 2

Nit:
Polytheistic/monotheistic feels like a bit of a misnomer, especially since the rest of the article focuses on utility, not on AI being in any way a higher being (because it isn't a being). In the context of AI, poly kind of disqualifies theos, not only because there are multiple models, but also because each model can be run multiple, independent times.
I think that if we change this into polylithic (many models running in many, decentralized instances) versus monolithic (a single grand Skynet-like "AI" that runs as a single instance, even if it's distributed), it makes more sense - but I'm not really sold on that terminology either.

Footnotes

  1. Chinese companies have done it, but Meta has done this too and at least announced (#1060587) it will continue doing it in some form.
  2. You can see this play out in more mature software sub-industries like for example mobile, where Google is now "sabotaging" AOSP (#1005566).
reply
In this race, there is no victory. They will have the weapon, and you will have one that cannot fight them or defend you from them. The best way to win is not to use it and to encourage people not to use it by showing them how ridiculous it is. Because if they don't see how ridiculous it is, they will pay with their own freedom. This is already happening.
reply
It's just tech. People were worried about fire, trains, electricity, bitcoin... and now AI. It will be widely adopted and seamlessly used once we feel comfortable doing so, in the same way most of us today carry a phone in our pocket, or use a car instead of a horse.
reply
Unlike all those you mentioned, with AI you hand over precise information that serves people who don't want you to be free. You give away your way of thinking, your habits, your data, your worries and weaknesses; some even give away how they store their money, like bitcoin and property. This is ammunition for dictators and corporations who want to guide slaves into a way of thinking and remove from society those they deem dangerous.
reply
Haven't computers and big tech been doing the same for decades now? Sucking up information and driving viral ads at people, aiming to keep them sliding infinite scrolls for days?
I simply see AI as the exponential expression of this corporate evil behavior. Yes, we could have started earlier, removing the TV from our homes and putting ad blockers on our computers. Most people ignore all this and are now trapped, using tech like heroin addicts needing a plug into society.
reply
21 sats \ 1 reply \ @optimism 4 Aug
Haven't computers and big tech been doing the same for decades now? Sucking up information and driving viral ads at people, aiming to keep them sliding infinite scrolls for days?
Interesting. Has it truly been that for you? For me it's been more like a Swiss army knife or multitool. Just have to be thoughtful about what you use.
Most people ignore all this and are now trapped, using tech like heroin addicts needing a plug into society.
How do we fix that? How do we empower people? What if they don't wanna?
reply
Unfortunately, and I say this with regret, not everyone wants or will have this freedom. When the normal thing is not to give a damn about your privacy, sharing your entire way of thinking and acting with completely useless technology will be the normal thing. What you can do, you do for yourself and for those you care about. Social networks and anything decentralized are good, but nothing replaces acquired knowledge. Whether it's online, through books, guides, articles and courses, or through formal education. Knowledge and human connections ennoble us.
Ps: I'm not talking about you @optimism, but because I know that people will read your question and then my answer. It's a joint construction.
reply
The dystopian technologies of fiction don't seem so fictional nowadays, just as they don't have that air of technology that we see. They're blending into normality like a symbiosis.
Haven't computers and big tech been doing the same for decades now?
You made a good point; big tech did this long before AI. It's just that now they have the help of something that takes not only attention but also trust, the core.
reply
So what's the way out then? Letting it pass?
reply
Yes, you'll be fine. Especially considering that you're above average, with the kind of non-standard knowledge that makes you free and immune to all kinds of bullshit that comes to steal your freedom.
Not using it is extremely feasible, since you haven't needed it so far. As mentioned in the article itself, AI is reactive, not active; it depends on commands, and even when you set it to do repeated tasks it's just following your “from-to”. There's no point arming the enemy with something so trivial when we already have software that does it without an LLM.
reply
There's no point arming the enemy with something so trivial
Arming "the enemy" how though?
reply
Data. Running locally doesn't mean your information is completely protected. The model is processing, and possibly being trained on, your data; what guarantee do you have that it won't share insights with the developer, or do so in a future moment of carelessness during an update, or through extraction by an agent with an interest in data like this?
Most importantly, making yourself dependent on an AI makes you open to concepts where the AI is controlling many aspects of your life.
reply
21 sats \ 1 reply \ @optimism 4 Aug
what guarantee do you have that it won't share insights with the developer
For one, because I use my own inference code, not "the developer's code", but it's good to check nonetheless. I'll run some Wireshark tests later this week and let everyone know if I find anything fishy in things like llama.cpp or transformers.
FWIW, your concern is not without precedent; see for example #1057075 for something that does exactly what you say. This is why, as a coder, using an MS IDE or a fork of it is kind of a self-own, and always has been (and it's not that great quality software anyway).
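Before firing up Wireshark, a cheap first check is to block outbound sockets at the Python level while running inference, so any phone-home attempt fails loudly. A minimal sketch (stdlib only; the `NetworkCanary` name is just illustrative, and pytest-socket uses the same trick):

```python
import socket


class NetworkCanary:
    """Blocks and records any outbound socket connection while active.

    A cheap first check (before packet capture) that locally-run
    inference code is not phoning home: each attempted destination
    is recorded in self.attempts and the connection is refused.
    """

    def __enter__(self):
        self.attempts = []
        self._orig_connect = socket.socket.connect

        def guarded_connect(sock, address, _attempts=self.attempts):
            # Record the destination, then refuse the connection.
            _attempts.append(address)
            raise ConnectionRefusedError(f"blocked outbound connection to {address}")

        # socket.socket is a Python-level class, so its connect can be shadowed.
        socket.socket.connect = guarded_connect
        return self

    def __exit__(self, *exc):
        # Restore normal networking on exit.
        socket.socket.connect = self._orig_connect
        return False
```

Wrap the inference call (`with NetworkCanary() as canary: ...`) and inspect `canary.attempts` afterwards; an empty list means nothing even attempted a connection at the Python layer. It won't catch networking done in native code, which is exactly what the packet capture is for.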
Most importantly, making yourself dependent on an AI makes you open to concepts where the AI is controlling many aspects of your life.
Have to retain the skills. This is very true. We had a discussion about this not too long ago: #998489
reply
Unfortunately, I don't have the same technical knowledge as you, so my defenses in this case are the good old-fashioned ones: staying away and listening to what the community has to say. If you find anything, please share it.
your concern is not without precedent
Good old-fashioned telemetry delivering everything that someone with good knowledge can triangulate. Terrible.
We recently had ChatGPT data leaked into Google results; it doesn't involve private machines running their own models, but it's still worrying.
We had a discussion about this not too long ago
All the points raised there are very much what I observe in relation to people who make continuous use of it. Especially this one.
As if it's just a trend? Yes, sure, there will be better tech in the future that we can't even imagine today... let it pass. Using it is optional anyway.
reply
I currently just treat it as an advanced database engine that indexed the internet, with an extrapolation function. I'm kind of unhappy with the pre-applied tuning but at the same time unwilling to invest time and resources into re-training research right now, so I just test things.
The use cases I use it for in "production", defensive summarization and speech-to-text, have not been bleeding edge for a long time. It's just nice that I can now run them efficiently on my own hardware, without depending on SaaS/IaaS.
reply
You can do it yourself and you'll gain more knowledge by doing it. Maybe even ask a human friend for a review.
I've used AI for this and I've seen how silly it was to waste time on something I could do myself, and even get out of my comfort zone doing it. It puts you in a low-level dependency zone, modifying something that should be authentic out of a need to appear better than you are to those who will read it; the result is robotic and shallow.
reply
You can do it yourself
Transcribe hours of YouTube videos to make them searchable? Sure I can, but I can spend my time better. My GPU is otherwise idle, so why not?
Defensive summarization is just an anti-clickbait measure: it protects against wasting time on articles whose title doesn't correspond to the actual content, which unfortunately is common practice nowadays. It takes under 5s of GPU time for an average article, but would take me 10 minutes plus frustration for each. I don't need more frustration from clickbait; I've had years of it.
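To illustrate the idea without a model: a toy, stdlib-only stand-in for the anti-clickbait check is to flag titles whose substantive terms never appear in the article body (the `clickbait_score` name and stop-word list are mine; the real approach uses a local LLM summary, this just sketches the principle):

```python
import re


def clickbait_score(title: str, body: str) -> float:
    """Crude stand-in for defensive summarization: the fraction of
    substantive title words that never appear in the article body.
    A high score means the title promises content the body doesn't deliver.
    """
    stop = {"the", "a", "an", "of", "to", "in", "and", "is", "for",
            "on", "this", "that", "you", "why", "how", "what"}

    def words(text):
        # Lowercase word tokens, minus filler words.
        return [w for w in re.findall(r"[a-z']+", text.lower()) if w not in stop]

    title_terms = set(words(title))
    if not title_terms:
        return 0.0
    missing = title_terms - set(words(body))
    return len(missing) / len(title_terms)
```

A title like "Secret trick doctors hate" over a body about gardening scores 1.0 (pure bait), while a title whose terms all show up in the body scores 0.0. An LLM summary is far more robust, of course, since it judges meaning rather than word overlap.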
a need to appear better to those who will read it, which you are not, robotic and shallow.
I don't need to appear better though? I don't care about appearances.
reply
This type of transcription existed in the community, but unfortunately it didn't catch on. By that I mean it didn't have to be done by you. And why do you need to summarize a video like this? Faced with situations like this, the most common question I ask myself is whether it's worth it.
I don't care about appearances.
I misunderstood; I thought you were using summarization to make things more presentable in an email or other type of communication. That's my criticism, but it doesn't apply to you.
deleted by author
AI is empirically decentralizing rather than centralizing. Right now, AI is arguably having a decentralizing effect, because there are (a) so many AI companies and (b) there is so much more a small team can do with the right tooling, and (c) because so many high quality open source models are coming.
There is definitely a false sense of decentralization, since any idiot can run an LLM on their own network. They forget that the data is there, being processed, stored, and improving this shit. Any attacker, such as a company that covets data and all its sacred secrets, will make off with your digital friend. The worst thing is not even that, but having a codependency on something that makes you lazy and dumb with the false promise of being productive.
AI is probabilistic while crypto is deterministic.
The entire text is well written, and I agree with much of what has been said. I am glad that there are experts who actually speak the truth, demystifying what many others want to elevate to Olympian heights.
I will follow what else this gentleman has to say.
reply