
How do you fit a 250kB dictionary in 64kB of RAM and still perform fast lookups? For reference, even with modern compression techniques like gzip -9, you can't compress this file below 85kB.
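To see why gzip leaves so much on the table, it helps to know the theoretical floor. If you only need approximate membership (rare false positives are acceptable for a spell checker), you can store hashes of the words instead of the words themselves, and the minimum size becomes log2 C(2^27, 30000) bits. Here's a back-of-the-envelope check in Python; the figures (roughly 30,000 words, 27-bit hash codes) come from McIlroy's design, not from this excerpt:

```python
import math

# Assumed figures (from McIlroy's design, not this excerpt):
# ~30,000 words, each hashed to a 27-bit code, false positives allowed.
WORDS = 30_000
HASH_SPACE = 1 << 27  # 2**27 possible hash codes

def log2_comb(n, k):
    """log2 of C(n, k), via lgamma to avoid enormous integers."""
    return (math.lgamma(n + 1) - math.lgamma(k + 1)
            - math.lgamma(n - k + 1)) / math.log(2)

total_bits = log2_comb(HASH_SPACE, WORDS)
print(f"{total_bits / WORDS:.2f} bits per word")  # ~13.57
print(f"{total_bits / 8 / 1000:.1f} kB minimum")  # ~51 kB: it fits
```

No encoding of an arbitrary 30,000-element subset of a 2^27-element space can beat that bound, so ~51kB is the target, comfortably under 64kB.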
In the 1970s, Douglas McIlroy faced exactly this challenge while implementing the spell checker for Unix at AT&T. The constraints of the PDP-11 meant the entire dictionary had to fit in just 64kB of RAM: a seemingly impossible task.
Instead of relying on generic compression techniques, he exploited the specific properties of the data and developed a compression scheme that came within 0.03 bits per word of the theoretical limit. To this day, it remains unbeaten.
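For the curious, the core of the scheme (as described in McIlroy's paper on the spelling list) is: hash every word to a 27-bit code, sort the codes, and store only the gaps between consecutive codes. Because the codes land nearly uniformly across the hash space, the gaps follow roughly a geometric distribution, which a Golomb code compresses almost optimally. Below is a minimal sketch, with random codes standing in for a hashed word list and the textbook m ≈ 0.69 × mean-gap parameter choice rather than McIlroy's exact tuning:

```python
import math
import random

HASH_BITS = 27
N_WORDS = 30_000  # assumed list size, as above

def golomb_encode(gaps, m):
    """Encode each gap as a unary quotient plus truncated-binary remainder."""
    b = math.ceil(math.log2(m))
    cutoff = (1 << b) - m              # remainders below this fit in b-1 bits
    out = []
    for g in gaps:
        q, r = divmod(g, m)
        out.append("1" * q + "0")      # quotient in unary, 0-terminated
        if r < cutoff:
            out.append(format(r, f"0{b - 1}b"))
        else:
            out.append(format(r + cutoff, f"0{b}b"))
    return "".join(out)

# Stand-in for hashing a real word list: 30,000 distinct 27-bit codes.
random.seed(0)
codes = sorted(random.sample(range(1 << HASH_BITS), N_WORDS))
gaps = [codes[0]] + [y - x for x, y in zip(codes, codes[1:])]

mean_gap = (1 << HASH_BITS) / N_WORDS  # ~4474
m = max(1, round(0.69 * mean_gap))     # near-optimal for geometric gaps
encoded = golomb_encode(gaps, m)
print(f"{len(encoded) / N_WORDS:.2f} bits per word")  # ~13.7 vs ~13.57 floor
```

A real word list behaves the same way: a good hash makes the codes statistically indistinguishable from uniform random draws, which is exactly why the gap distribution is predictable enough to code this tightly.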
The story of Unix spell is more than just a historical curiosity. It's a masterclass in engineering under constraints: how to analyze a problem from first principles, leverage mathematical insights, and design elegant solutions that work within strict resource limits.
TL;DR If you're short on time, here's the key engineering story:
I'll let you check the article for the TL;DR itself; even that summary assumes a good grasp of computer science concepts. If you have that background, this is a pretty cool example of how much more creative software solutions had to be back in the day because of hardware limitations. Now we just use machine learning to avoid writing much logic...