
Technologist and founder Balaji Srinivasan joins to explore how the metaphors we use to describe AI—whether as god, swarm, tool, or oracle—reveal as much about us as they do about the technology itself.
Balaji, best known for his work in crypto and network states, also brings a deep background in machine learning. Together, the trio unpacks the evolution of AI discourse, from monotheistic visions of a singular AGI to polytheistic interpretations shaped by culture and context. They debate the practical and philosophical: the current limits of AI, why prompts function like high-dimensional programs, and what it really takes to “close the loop” in AI reasoning.
This is a systems-level conversation on belief, control, infrastructure, and the architectures that might govern future societies.
More insights from Balaji's newsletter here: https://balajis.com/p/ai-is-polytheistic-not-monotheistic
121 sats \ 16 replies \ @optimism 21h
I like the overall take. The reasoning about targeted disruption of proprietary models through open-weight model releases, regardless of who does it 1, concurs with what I've observed and how I extrapolate that phenomenon too, as laid out in the tweet linked from the article:
China thinks it has an opportunity to hit US tech companies, boost its prestige, help its internal economy, and take the margins out of AI software globally (at least at the model level).
I just wonder how long it will last.
It's easy now to celebrate that FOSS is currently the weapon of choice in the global LLM race, and that there's evidence the CCP's strategy is to align with open weights, but I remind myself daily that this is weaponization only, not an embrace of open source principles. We often see that once market share is deemed sufficient (or a competitor sufficiently hurt), the tools used for capture are abandoned or weakened. 2

Nit:
Polytheistic/Monotheistic feels like a bit of a misnomer, especially since the rest of the article focuses on utility and not on AI being in any way a higher being (because it isn't a being). In the context of AI, poly kind of disqualifies theos: not only are there multiple models, but each model can be run multiple, independent times.
I think that if we change this into polylithic (many models running in many, decentralized instances) versus monolithic (a single grand Skynet-like "AI" that runs as a single instance, even if it's distributed), it makes more sense - but I'm not really sold on that terminology either.

Footnotes

  1. Chinese companies have done it, but Meta has done this too, and has at least announced (#1060587) that it will continue doing so in some form.
  2. You can see this play out in more mature software sub-industries like for example mobile, where Google is now "sabotaging" AOSP (#1005566).
reply
In this race, there is no victory. They will have the weapon, and you will have one that cannot fight them or defend you from them. The best way to win is not to use it and to encourage people not to use it by showing them how ridiculous it is. Because if they don't see how ridiculous it is, they will pay with their own freedom. This is already happening.
reply
It's just tech. People were worried about fire, trains, electricity, bitcoin... and now AI. It will be widely adopted and seamlessly used once we feel comfortable doing so, in the same way most of us today carry a phone in our pocket, or use a car instead of a horse.
reply
Unlike all those you mentioned, with AI you hand over precise information that serves people who don't want you to be free. You give away your way of thinking, your habits, your data, your worries and weaknesses; some even give away how they store their money, like bitcoin and property. This is ammunition for dictators and corporations who want to guide slaves into a way of thinking and remove from society those they deem dangerous.
reply
Haven't computers and big tech been doing the same for decades now? Sucking up information and driving viral ads at people, aiming to keep them sliding infinite scrolls for days?
I simply see AI as the exponential expression of this corporate evil behavior. Yes, we could have started earlier, removing the TV from our homes, installing ad blockers on our computers. Most people ignore all this and are now trapped, using tech like heroin addicts needing a plug into society.
reply
So what's the way out then? Letting it pass?
reply
As if it's just a trend? Yes, sure; in the future there will be better tech we can't even imagine today... let it pass. Using it is optional anyway.
reply
I currently just treat it as an advanced database engine that indexed the internet, with an extrapolation function. I'm kind of unhappy with the pre-applied tuning but at the same time unwilling to invest time and resources into re-training research right now, so I just test things.
The use-cases I use it for in "production", defensive summarization and speech-to-text, have not been bleeding edge for a long time. It's just nice that I can now run that efficiently on my own hardware, without depending on SaaS/IaaS.
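The "database engine that indexed the internet, with an extrapolation function" framing above can be illustrated with a toy sketch. Everything here is hypothetical for illustration—hand-picked 2-D embeddings and made-up `index`/`extrapolate` helpers, nothing from a real model: you store vectors, then answer a query by doing arithmetic in embedding space and looking the result back up in the "database".

```python
# Toy illustration (not a real LLM) of the "indexed database with an
# extrapolation function" mental model.
import math

# Hypothetical "index": token -> 2-D embedding (hand-picked values).
index = {
    "king":  (0.9, 0.8),
    "queen": (0.9, 0.2),
    "man":   (0.1, 0.8),
    "woman": (0.1, 0.2),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def extrapolate(pos, neg, add):
    """king - man + woman ~ queen: arithmetic in embedding space,
    then nearest-neighbour lookup back into the 'database'."""
    q = tuple(p - n + a for p, n, a in zip(index[pos], index[neg], index[add]))
    return max(index, key=lambda tok: cosine(index[tok], q))

print(extrapolate("king", "man", "woman"))  # -> queen
```

The lookup part is the "database"; the vector arithmetic is the "extrapolation function" that lets it answer queries it never stored verbatim.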
reply
You can do it yourself and you'll gain more knowledge by doing it. Maybe even ask a human friend for a review.
I've used AI for this and I've seen how silly it was to waste time on something I could do myself while also getting out of my comfort zone. It puts you in a low-level dependency zone, modifying something that should be authentic out of a need to appear better to readers than you are; the result is robotic and shallow.
Yes, you'll be fine. Especially considering that you're above average or very close to non-standard knowledge, a kind of knowledge that makes you free and immune to all kinds of bullshit that comes to steal your freedom.
Not using it is extremely feasible, since you haven't needed it so far. As mentioned in the article itself, AI is reactive, not active; it depends on commands, and even when you set it to do repeated tasks, it's just following your "from-to". There's no point arming the enemy with something so trivial when we already have software that does it, and it's not an LLM.
reply
21 sats \ 2 replies \ @optimism 9h
There's no point arming the enemy with something so trivial
Arming "the enemy" how though?
reply
100 sats \ 1 reply \ @perscrutador 9h
Data. Running locally doesn't mean your information is completely protected: the model is processing it, and what guarantee do you have that it won't share insights with the developer, whether in a future moment of carelessness during an update or through extraction by an agent with an interest in data like this?
Most importantly, making yourself dependent on an AI makes you open to concepts where the AI is controlling many aspects of your life.
deleted by author
AI is empirically decentralizing rather than centralizing. Right now, AI is arguably having a decentralizing effect, because there are (a) so many AI companies and (b) there is so much more a small team can do with the right tooling, and (c) because so many high quality open source models are coming.
There is definitely a false sense of decentralization, since any idiot can run an LLM on their own network. They forget that the data is there, being processed, stored, and improving this shit. Any attacker, such as a company that cares about data and all its sacred secrets, will walk away with your digital friend. The worst thing is not that, but having codependency on something that makes you lazy and dumb with the false promise of being productive.
AI is probabilistic while crypto is deterministic.
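The probabilistic-vs-deterministic contrast quoted above can be sketched in a few lines of Python. The logits and the `sample_token` helper are made up for illustration, not from a real model: a hash of the same input is identical on every run and every machine, while sampling from a softmax over token scores is not.

```python
import hashlib
import math
import random

# Deterministic (crypto-style): identical input, identical digest, always.
def digest(msg: str) -> str:
    return hashlib.sha256(msg.encode()).hexdigest()

assert digest("21e8") == digest("21e8")  # holds on every run, every machine

# Probabilistic (LM-style): an LM head samples from a distribution over
# tokens, so identical input can yield different outputs. Toy logits only.
def sample_token(logits: dict[str, float], temperature: float = 1.0) -> str:
    weights = [math.exp(v / temperature) for v in logits.values()]
    return random.choices(list(logits), weights=weights)[0]

logits = {"yes": 1.0, "no": 0.9, "maybe": 0.5}
draws = {sample_token(logits) for _ in range(1000)}
print(sorted(draws))  # typically all three tokens appear
```

Lowering `temperature` sharpens the distribution toward the top token, but short of greedy decoding the output remains a sample, not a fixed function of the input.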
The entire text is well written, and I agree with much of what has been said. I am glad that there are experts who actually speak the truth, demystifying what many others want to elevate to Olympian heights.
I will follow what else this gentleman has to say.
reply