This post about AI by Nic Carter is pretty interesting. I'm copying the text here for ease of reading. His original post on Twitter has a bunch of links and a few charts as well.
this NVDA rally has gone from "incredibly impressive" to actually scaring me a bit. not for AI safety reasons. I'll explain.

I'm lucky enough to be an early investor in @CoreWeave, one of the most incredible startup stories I've ever seen. one of the most interesting things about proximity to CW is simply having a pulse on which AI use cases are taking off at any given time.

back in 2019 the team told me "this thing called AI Dungeon is hammering our servers". it was a text-based fantasy adventure game built on GPT-2. of course I quickly "jailbroke" the game and was able to get it to return arbitrary queries rather than simply following the game's intended design. even on a fairly primitive LLM, the experience of an interactive and sophisticated text model absolutely blew my socks off. at that moment I felt something had changed forever. many people later had this same aha moment when ChatGPT came out.

in 2020 I read @gwern's scaling hypothesis [1], one of the most prescient and important blog posts of all time, in which he pointed out that simply throwing more data and compute at these models can plausibly create AGI, or something close to it.

in 2022 Stable Diffusion came out, and that blew my socks off again. I spent countless hours learning prompting. I realized that AI was truly multimodal. the early image models weren't impressive by today's standards, but the direction of travel was obvious – image gen would be perfected, and then text-to-video, which we are on the cusp of now. at that point I felt that image gen was too important to ignore, and seriously explored the idea of incubating an image gen startup with @leveredvlad (in the end he went a different direction and started Offdeal, an AI-powered SMB search product).

at this point I had developed the conviction that I was dreadfully underexposed to AI, even despite my CW exposure. I was determined to change this. in 2023 I was lucky enough to meet @v_maini and I wrote my largest-ever LP check into @MythosVentures [2], a VC firm focusing on new applications unlocked by AI. I also dramatically shifted my angel activity towards AI and wrote my biggest-ever angel check into @AviSchiffmann's http://Friend.com (an AI wearable startup). the reason I leaned so heavily into AI was a few beliefs I had developed:
1. AI would dramatically empower capital relative to labor. AI means that companies need fewer employees while maintaining the same level of productivity. I notice this already in my own practice – I can now do programming or data science tasks myself, whereas I might have previously needed a software engineer or a data scientist. I noticed that with certain data analysis tasks, I was 100-1000x more efficient using AI tools. I noticed the same efficiency and cost-savings gains with image generation. this is true in a variety of modalities. this is profoundly disruptive for society, and massively accelerates an ongoing trend of automation and the devaluing of human capital, particularly in professional services (more on this later). the point is, I felt that the balance of power was shifting away from people selling their labor to companies, and in favor of shareholders and firms. My action items: none, because as a VC I am already on the capital side, to put it crassly.

2. Investors overlooking AI would miss out on the biggest theme of the decade. the foundational models are not, in my opinion, the way to play this though. if you're an early-stage investor, you benefit significantly because AI drives down the number of staff required to run a startup. solopreneurs are now a thing. a relatively smart individual with no programming experience can now build things on their own. My action items: lean heavily into LPing into AI VC funds, doing AI angel deals out of my PA, and looking at hybrid AI crypto deals at CIV.

3. AI will permanently put an end to the "post truth" era. this is the subject for another post, but clearly our prior epistemic standards no longer apply. the cost of creating arbitrary image or video content is effectively 0, so unsigned content will no longer be considered reliable (once people have learned to mistrust online content by default). to be considered reliable in the future, content will have to be signed, attested to, and timestamped (likely on a blockchain); a minimal sketch of what that could look like follows this list. so our post-truth era will end, not because content is no longer reliable by default, but because all content will be assumed fake unless attested to. My action items: invest in startups like Tab/Friend (AI wearables that can create an attested "digital alibi") and startups like @witnessco_ (on-chain attestation tools).

4. The AI boom would rescue the US from its demographic malaise and its significant debt overhang. after WWII, the US was in a similar situation with regard to indebtedness, but we found our way out through a combination of high and variable inflation, a baby boom, and a productivity boom. I currently believe that in the US, AI will add 2-4 points to GDP growth for a decade and help us grow out of the debt crisis we are facing (even absent the favorable demographics); a back-of-the-envelope illustration of that arithmetic also follows below. I believe the stronger growth in the US relative to the rest of the developed world is at least partially a function of the AI boom we are seeing. luckily for the US, and unluckily for the rest of the world, the epicenter of AI development is here, and that causes me to have a new level of optimism about US fiscal prospects that I simply didn't have before. I think that AI is at least as significant economically as the invention of nuclear power or the internet, and probably more. however, it will have a profoundly disparate impact, and the benefits will accrue to far fewer, which is part of my concern. My action items: reduce my internal probability that the US faces a significant debt crisis, at least relative to the rest of the developed world. Retain the US as the nexus of my professional activities.
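To make the "signed, attested to, and timestamped" idea in the third belief concrete, here is a minimal sketch of what a content attestation could look like: hash the content, sign the digest with a keypair, and record a timestamp that could later be anchored on a public ledger. The key handling and the on-chain anchoring step are assumptions for illustration only; this is not how Tab, Friend, or Witness actually implement it.

```python
# Minimal sketch of content attestation: hash the content, sign the digest,
# and record a timestamp. Anchoring the payload hash on-chain is left as a
# comment because it depends entirely on the chain or service chosen.
import hashlib
import json
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def attest(content: bytes, signing_key: Ed25519PrivateKey) -> dict:
    """Produce a signed, timestamped attestation for a piece of content."""
    digest = hashlib.sha256(content).hexdigest()
    payload = json.dumps({"sha256": digest, "timestamp": int(time.time())}).encode()
    return {
        "payload": payload,
        "signature": signing_key.sign(payload),  # Ed25519 signature over the payload
        "public_key": signing_key.public_key(),  # whoever holds this key vouches for the content
        # in practice you would also publish hashlib.sha256(payload) to a public
        # ledger so the timestamp can't be backdated
    }


def verify(content: bytes, attestation: dict) -> bool:
    """Check the signature is valid and the content matches the attested hash."""
    try:
        attestation["public_key"].verify(attestation["signature"], attestation["payload"])
    except InvalidSignature:
        return False
    claimed = json.loads(attestation["payload"])["sha256"]
    return claimed == hashlib.sha256(content).hexdigest()


if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    photo = b"raw bytes of an image captured by a wearable"
    record = attest(photo, key)
    print(verify(photo, record))                # True
    print(verify(b"tampered content", record))  # False
```

The important property is that anyone holding the content plus the attestation can check, without trusting the distribution channel, that a specific key vouched for exactly that content at (or before) a specific time.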
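And on the "grow out of the debt" point in the fourth belief, a back-of-the-envelope sketch of the arithmetic. The starting ratio, deficit, and growth rates below are toy numbers chosen for illustration, not a forecast:

```python
# Back-of-the-envelope: how much does an extra 2-4 points of growth
# (the AI boost claimed above) bend the debt-to-GDP curve over a decade?
# Toy assumptions: debt starts at 100% of GDP, the deficit runs at 5% of GDP
# each year, and baseline nominal growth is 4%.

def debt_to_gdp_path(growth: float, years: int = 10,
                     debt_ratio: float = 1.00, deficit: float = 0.05) -> float:
    """Evolve debt/GDP: each year GDP grows and the deficit adds new debt."""
    gdp = 1.0
    debt = debt_ratio * gdp
    for _ in range(years):
        gdp *= 1 + growth
        debt += deficit * gdp
    return debt / gdp


for label, g in [("baseline 4% growth", 0.04),
                 ("AI adds +2 pts", 0.06),
                 ("AI adds +4 pts", 0.08)]:
    print(f"{label}: debt/GDP after 10 years = {debt_to_gdp_path(g):.0%}")
```

Under those toy assumptions, debt-to-GDP ends the decade around 110% in the baseline versus roughly 83% with a 4-point growth boost; the point is simply that sustained growth, rather than austerity, is what bends the curve.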
The scale of the AI boom is so significant today that it is running up against new bottlenecks. In 2021-23 the constraint for AI was the availability of hardware, specifically A100s and then H100s. today, it's the availability of Tier 4 datacenters (AI datacenters have meaningfully different infra requirements from ordinary ones, because they require more sophisticated networking, have higher power density, and need more cooling). these take a long time to build, and that's the bottleneck today. (CoreWeave's Brannin McBee talks about this on Odd Lots [3].) if you listen to Zuckerberg on the @dwarkesh_sp podcast [4], he repeatedly says that the new constraint on AI compute growth is simply power. the level of investment the hyperscalers are talking about putting into AI compute will, in my opinion, at least rival investments in telecoms (~$500b in the 5 years following 1996).

(quick sidenote: even if the hyperscalers are overinvesting in AI clouds and datacenters, this isn't wasteful in the same way that the railroad boom was, since AI clouds can incorporate different models based on whatever ends up being best, so duplication isn't a problem. If they overspend, that simply creates a consumer surplus whereby inference is cheaper than it otherwise would have been.)

Amazon, Alphabet, Meta, and Microsoft announced that they will collectively spend $200b on AI infra this year alone. AI growth is so aggressive that we are now running up against the literal availability of GW-scale power as the new constraint.

so why is the NVIDIA rally making me nervous? at a $2.8T market cap and up 135% YTD, NVIDIA is posting growth numbers that are almost inconceivable for a company this large. The rally is so significant it appears to be sucking capital out of the rest of the S&P 500 and other big tech names.

partially, the rally is driven by investor desire to chase proven growth in a relatively weak economic environment, powered by the belief that NVIDIA's chips, software, and networking are protected by a fundamental moat (which I generally agree with), and so they are the equivalent of a monopolist in a commodity that everyone needs to buy.

but I'm also listening to what the market is telling me, which is that NVIDIA is the most important company in the world today. The growth numbers NVIDIA is posting at least partially seem to justify the rally.

I think the market has realized that AI will be embedded into every application, AI wearables will be ubiquitous, and eventually we'll stop thinking of AI as a distinct category, the same way we don't think of "internet-connected devices" any more, because everything is networked. We don't have "internet investors", we just have investors, and every startup relies on the internet. AI will simply be ubiquitous, and this means that compute requirements per capita will increase by many orders of magnitude over the coming decade. virtually everyone will use AI virtually all of the time, because it will simply be incorporated into every application.

as I said before, I think AI dramatically empowers capital relative to labor. this is why, as a capital allocator, I significantly pivoted my focus to firms that would benefit from AI on a first- and second-order basis. but this has a very uneven impact on society.
today, in my view, human capital has already been devalued to ~0 in fields like translation, transcription, and summarization. Full self-driving works today, potentially obsoleting huge pools of labor like taxi drivers, rideshare, and eventually trucking.

in other fields, like programming, web dev, and graphic design, AI tools dramatically enhance human productivity and reduce the need for junior programmers doing relatively mundane tasks. In medicine, AI diagnostics are already superior (especially in imaging), although the highly regulated nature of medicine means that these improvements will be resisted for some time. In white collar professions like law and accounting, AI will be able to replace a lot of the grunt work done by junior staff. While AI-delivered medical or legal advice seems primitive right now, these fields mostly boil down to ingesting patient data and creating recommendations, or querying large datasets of case law and giving advice. There's no reason AI can't reach parity with the state of the art here. (of course, these white collar professions rely on tastemakers at the very top to interpret the data they are given, and that won't go away. But most of the process to get there can and will be automated.) There seems to be no place to hide.

Many draw analogies to the industrial revolution and point out that it didn't put people out of work, it just created new jobs as civilization was able to harness energy more effectively, urbanize, and specialize. But this isn't quite true. The industrial revolution did make huge labor pools irrelevant – animals like horses that suddenly had no role in agriculture. (The number of agricultural horses in Europe declined by about 90% in the 100 years following 1850.) Today, taxi drivers, translators, and so on are the "horses". But you can't literally "put these people out to pasture" like the horses were. The social contract in developed countries stipulates that they be taken care of even if their human capital has been devalued. This, combined with a demographic transition and a prime-age workforce that is shrinking relative to the total population, deeply concerns me. The other disanalogy with the industrial revolution is that in that case, we harnessed new sources of energy to make humans more productive. In this case, we have created superhuman-level intelligence (currently specialized in a few domains, and within a few years, general) that far surpasses human capability. Of course, entrepreneurs and creatives will be able to harness these tools to make themselves orders of magnitude more productive. For this reason I am positive on GDP growth and the startup sector, as the number of employees needed to build a startup continues to decline. but it's undeniable that AI simply makes a lot of human skills irrelevant.

It's my current hypothesis that AI will continue to drive a worsening division between capital and labor, to the benefit of capital. In the recent inflationary environment, as asset prices came down (temporarily) and hourly wages were revalued upwards, labor actually did well relative to capital (this is common in inflationary episodes, contrary to the common talking point found on here). I think AI will reverse this short-term trend as we see productivity grow, and as senior programmers/consultants/lawyers etc. are able to use AI tools to do the job that would previously have required 5-10 analysts or more junior staff. In the medium term, entire professions that employ millions of people will simply cease to exist.
Society can't just tolerate a massive furloughing of a huge percentage of its workforce, so I expect we will see reprisals against capital (which we are already seeing to a degree). These reprisals could take the following forms:
- Highly regulating AI in an effort to slow its disruptive growth (this is currently underway under the guise of "AI safety")
- Raising capital gains taxes and eliminating loopholes like QSBS and carried interest
- Increasing government spending on entitlements and direct transfers to this newly unemployable sector (think the COVID-era transfers made permanent). This has the side effect of increasing inflation, which creates nominal equity gains that are then taxed, creating a wealth transfer from investors to the state (see the sketch after this list)
- Directly involving the state in AI development via intrusive laws like California’s SB 1047, covered by Piratewires [5], which would effectively ban open source AI
- If things play out the way I expect, then we may also see an empowering of socialist political movements in developed nations, as new coalitions of AI-affected individuals are formed. These may be more powerful than prior socialist movements, as white-collar salaried workers will be included in the set of disenfranchised individuals
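On the inflation mechanism mentioned in the third item above, a quick sketch of how taxing nominal gains during inflation transfers real wealth from investors to the state (the numbers are purely illustrative):

```python
# Toy example: an asset merely keeps pace with inflation, so the investor's
# real gain is zero, yet the nominal gain is still taxed. The tax is therefore
# a transfer of real purchasing power from the investor to the state.

purchase_price = 100.0  # asset bought today
inflation = 0.30        # 30% cumulative inflation over the holding period
cap_gains_rate = 0.20   # 20% tax on nominal gains

sale_price = purchase_price * (1 + inflation)      # asset only kept up with inflation
nominal_gain = sale_price - purchase_price         # 30.0, taxable
tax = cap_gains_rate * nominal_gain                # 6.0 paid to the state

after_tax_proceeds = sale_price - tax              # 124.0 in future dollars
real_value = after_tax_proceeds / (1 + inflation)  # ~95.4 in today's dollars

print(f"real gain before tax: {sale_price / (1 + inflation) - purchase_price:.1f}")  # 0.0
print(f"real value after tax: {real_value:.1f} (started with {purchase_price:.1f})")
```

The investor breaks even in real terms before tax but ends up with roughly 95% of their original purchasing power after tax, which is the wealth transfer described above.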
As an investor, the AI opportunity is obviously colossal and on a par with the invention of the internet or railroads in terms of disruption and value creation. But I think it’s likely to be “too successful” in terms of disrupting society. I believe that the effect of AI on the workforce will lead to an empowering of socialist, anti-capital dynamics in the west. So while the move is to allocate aggressively, you have to consider the reprisals to come.