
In 2021, well before ChatGPT emerged, Alexander's collaborator predicted much of the progress we've seen in AI over the last few years. Now they're attempting to predict the next five years in similar fashion.
The summary: we think that 2025 and 2026 will see gradually improving AI agents. In 2027, coding agents will finally be good enough to substantially boost AI R&D itself, causing an intelligence explosion that plows through the human level sometime in mid-2027 and reaches superintelligence by early 2028. The US government wakes up in early 2027, potentially after seeing the potential for AI to be a decisive strategic advantage in cyberwarfare, and starts pulling AI companies into its orbit - not fully nationalizing them, but pushing them into more of a defense-contractor-like relationship. China wakes up around the same time, steals the weights of the leading American AI, and maintains near-parity. There is an arms race which motivates both countries to cut corners on safety and pursue full automation over public objections; this goes blindingly fast, and most of the economy is automated by ~2029. If AI is misaligned, it could move against humans as early as 2030 (i.e., after it has automated enough of the economy to survive without us). If it gets aligned successfully, then by default power concentrates in a double-digit number of tech oligarchs and US executive branch members; this group is too divided to be crushingly dictatorial, but its reign could still fairly be described as technofeudalism. Humanity starts colonizing space at the very end of the 2020s / early 2030s.
Looks like in any case power keeps concentrating in the hands of a few (be they oligarchs or bots) and the plebs carry on as usual. The big, consequential political and economic decisions may as well already be programmed by computers, considering how little regard they show the common person.
I'm starting to miss the lizard-people overlords narratives.