Imagine a future where the entire 10-million-token context of what your company does, and why, lives in a system prompt / memory system.
If you run a decent-sized org and your entire institution fits in 10 million tokens, then you're very, very lean, and you're definitely not operating under any government rules.
My personal bespoke codebase alone, excluding all third-party code and covering only my LLM work environment (personal experimental stuff, not my day job), is as of writing 2,879,989 tokens (yes, I just counted). I started this repository a little over a year ago, and I don't even touch it daily. It also contains zero compliance material, because it's my personal stuff. And in 30 seconds Claude Code resets, after which queued-up jobs will add another 10k tokens over the next hour or so.
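For anyone curious how you'd count a repo like that: here's a minimal sketch of a token estimator. It uses a rough ~4-characters-per-token heuristic so it stays dependency-free; the function name, extension list, and divisor are my own assumptions, and for exact numbers you'd run a real tokenizer (e.g. tiktoken) over the same files instead.

```python
from pathlib import Path


def estimate_repo_tokens(root: str, exts: tuple = (".py", ".md", ".txt")) -> int:
    """Rough token count for a source tree.

    Uses the common ~4 characters per token rule of thumb for
    English text and code. This is an estimate only; swap in a
    real tokenizer (e.g. tiktoken) for an exact count.
    """
    total_chars = 0
    for path in Path(root).rglob("*"):
        # Only count plain-text source files we recognize.
        if path.is_file() and path.suffix in exts:
            try:
                total_chars += len(path.read_text(errors="ignore"))
            except OSError:
                continue  # unreadable file: skip it
    return total_chars // 4
```

Run it on a repo root and you get a ballpark figure in seconds, which is all you need to see how quickly real codebases blow past a 10M budget.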
10M is nothing. At that ~10k tokens/hour rate, it's about 1,000 hours of a single Claude agent. And this is just code, not even counting logs, repo issues and comments, docs, test results, and so on.