This is an excellent rebuttal to that optimistic Fast Company promotional piece.
This substitution is framed as helpful. It is not. A system that acts on your behalf before you act is not honoring your will but preempting it. It removes friction not only from the user experience but also from the process of deliberation itself. The user’s intent is transformed from a sovereign expression into a probabilistic guess, something that can be acted on, recombined, and monetized without consent, so long as it feels approximately correct.
Perhaps the most dangerous fallacy is the suggestion that these systems become more aligned with the individual simply because they feel more responsive. But responsiveness is not representation. Suggesting a reply to your message is not the same as understanding your intention. Auto-generating a personalized shopping experience is not the same as respecting your autonomy. These features give the appearance of alignment while removing the need to involve the human. You don’t decide. You react. And your reaction becomes the next input for further inference.
This is the architecture of inferred intention. It does not wait for consent or solicit clarity. It treats behavioral data as a proxy, prediction as authorization, and automation as alignment. It is not malicious in appearance but profoundly disempowering in design.
We must follow the incentives, not the language, to understand why the false intention economy is accelerating. Despite claims of user empowerment, the current system is not driven by human needs. It is driven by the needs of the platforms, systems, and institutions that benefit most from replacing human will with model-ready abstractions of behavior. The more predictable we become, the more efficiently we can be monetized. The more our choices are simulated, the easier it becomes to route us through pre-designed outcomes.