# AI’s infinite memory could endanger how we think, grow, and imagine. And we can do something about it.

How we design to forget

If the danger of infinite memory lies in its ability to trap us in our past, then the antidote must be rooted in intentional forgetting — systems that forget wisely, adaptively, and in ways aligned with human growth. But building such systems requires action across levels — from the people who use them to those who design and develop them.

For users: reclaim agency over your digital self

Just as we now expect to “manage cookies” on websites, toggling consent checkboxes or adjusting ad settings, we may soon expect to manage our digital selves within LLM memory interfaces. But where cookies govern how third parties collect and use our data, memory in conversational AI turns that data inward. Personal data is no longer just a pipeline for targeted ads; it becomes a conversational mirror, actively shaping how we think, remember, and express who we are. The stakes are higher.

For UX designers: design for revision, not just retention

Reclaiming memory is a personal act. But shaping how memory behaves in AI products is a design decision. Infinite memory isn’t just a technical upgrade; it’s a cognitive interface. And UX designers are now curating the mental architecture of how people evolve, or get stuck.

For AI developers: engineer forgetting as a feature

To truly support transformation, UX needs infrastructure. The design must be backed by technical memory systems that are fluid, flexible, and capable of letting go. And that responsibility falls to developers: not just to build tools for remembering, but to engineer forgetting as a core function.
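As one illustration, engineered forgetting can start with something as simple as attaching a decay clock to each stored memory: entries fade unless the user reinforces them, and faded entries are pruned. This is a minimal sketch, not any real product’s memory system; the class and parameter names (`DecayingMemoryStore`, `half_life_days`, `forget_below`) are hypothetical.

```python
import time


class DecayingMemoryStore:
    """Toy memory store where entries fade unless reinforced.

    Hypothetical sketch: each memory's "strength" halves every
    `half_life_days`; recalling it resets the clock; `prune()`
    deletes anything that has decayed below `forget_below`.
    """

    def __init__(self, half_life_days=30.0, forget_below=0.1):
        self.half_life = half_life_days * 86400  # seconds
        self.forget_below = forget_below
        self._entries = {}  # key -> (text, last_reinforced_timestamp)

    def remember(self, key, text, now=None):
        self._entries[key] = (text, time.time() if now is None else now)

    def reinforce(self, key, now=None):
        # Recalling a memory resets its decay clock.
        if key in self._entries:
            text, _ = self._entries[key]
            self._entries[key] = (text, time.time() if now is None else now)

    def strength(self, key, now=None):
        _, ts = self._entries[key]
        age = (time.time() if now is None else now) - ts
        return 0.5 ** (age / self.half_life)

    def prune(self, now=None):
        """Forgetting as a feature: drop entries whose strength has decayed."""
        faded = [k for k in self._entries
                 if self.strength(k, now) < self.forget_below]
        for k in faded:
            del self._entries[k]
        return faded
```

A memory untouched for 120 days (four 30-day half-lives) decays to strength 0.0625 and is pruned, while one reinforced 20 days ago survives. The design choice here is that forgetting is the default and remembering requires ongoing relevance, inverting the retain-everything assumption.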

To grow, we must choose to forget

So the pieces are all here: the architectural tools, the memory systems, the design patterns. We’ve shown that it’s technically possible for AI to forget. But the question isn’t just whether we can. It’s whether we will.