The pace of artificial intelligence (AI) development has surpassed even the most optimistic predictions.
ChatGPT’s meteoric rise is more than a technological feat; it is a financial phenomenon. Recent reports suggest the annual revenue run-rate of its developer, OpenAI, had jumped to more than $2bn by February this year, from $1.3bn a mere eight months earlier.
The AI arena has become fiercely competitive. ChatGPT now contends with Google’s Gemini and Anthropic’s Claude, forming a triumvirate of leading-edge AI models. Meanwhile, Meta and others are in close pursuit with a diverse array of models optimised for openness, cost, speed and transparency.
The scale of investment pouring into AI is staggering, with tech giants, startups and governments vying for supremacy. Nvidia’s data centre division, which supplies the crucial graphics processing unit (GPU) chips, exemplifies this trend. We expect its annual revenues to surge to approximately $100bn in 2025, potentially exceeding $250bn by 2030.
Financial markets have priced in significant AI-driven growth. Investors have largely welcomed big tech’s AI-related capital expenditures, anticipating future profitable revenue streams.
Humans 2.0?
In the near term, AI is likely to complement, rather than replace, humans. In the modern world, most human work consists of multiplicative tasks, meaning that if just one task is done badly, the entire job suffers.
Combined with Moravec’s Paradox, the counterintuitive principle that tasks humans find easy often prove challenging for AI, and vice versa, this suggests AI could follow the well-worn path of new technologies that fail to translate into the expected productivity benefits.
However, AI differs from traditional tech cycles in one crucial respect. Progress has not stemmed from a single, dramatic breakthrough, but from the deceptively simple scaling law: as models grow larger, they become more capable.
This relationship between scale and intelligence has held consistently over the last decade, suggesting that if the compute power used in training frontier models increases by an order of magnitude, we can reasonably expect a corresponding leap in model intelligence. Whether the scaling law holds for GPT-5 and its peers is one of the most important unknowns today.
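As a minimal illustrative sketch (an assumption for exposition, not a formula given in this article), the empirical scaling relationship is often summarised as a power law in which a model’s pre-training loss falls predictably as training compute rises:

$$L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C}$$

Here $L$ is the loss (lower implies a more capable model), $C$ is the training compute, and $C_c$ and $\alpha_C$ are empirically fitted constants. Under this form, each order-of-magnitude increase in compute buys a roughly constant proportional reduction in loss, which is the sense in which more compute has so far mapped onto more intelligence.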
AI supercycle
The potential for AI to maintain its dramatic rate of improvement underpins the long-term optimism reflected in the valuations of Nvidia and other AI-related companies. Even if the current generation of AI is on course for a typical hype cycle, the release of vastly more capable models could shorten that cycle, reigniting interest and investment.
If this scenario unfolds over the coming years, the impact on the global economy could be truly transformative. In this context, many long-term AI beneficiaries may still be undervalued today.
While the optimistic scenario is plausible, it’s not guaranteed, and two significant challenges could derail the AI boom.
Firstly, the scaling law could break: if ‘Moore’s Law for AI’ falters after the next generation of models, expectations for future progress may reset significantly lower. Secondly, while funding for next-generation models is not yet prohibitive, physical limitations are emerging in areas like datacentre capacity and electricity supply.
The field remains nascent, with ample opportunities for advancement in algorithm design, hardware efficiency and product development. Shortages of physical inputs, such as chips or electricity, will be overcome in time, not least because of the economic incentives involved.
However, they may moderate expectations for the pace of future progress, potentially shifting corporate attitudes from a fear of missing out to a more calculated focus on optimising AI investments. In that case, valuations of ‘AI winners’ would likely reset lower.
Navigating the revolution
The progress and potential of AI are matched by an equally breathtaking level of investment from semiconductor and cloud computing giants, driving expectations to new heights. Unsurprisingly, beneficiaries of the AI theme form a material proportion of today’s portfolios and can broadly be split into three categories: enablers, platforms and adopters.
AI enablers are the ‘picks and shovels’ companies such as Nvidia and TSMC. They reap the benefits of increased spending on AI tools and hardware. As spending grows and physical world constraints become more salient, this category is broadening to include companies from other sectors, such as electric utilities.
AI platforms are tech leaders, such as Microsoft, Alphabet, Amazon and Meta, that use their existing scale to ensure they are well positioned for whatever threats and opportunities this technology brings.
AI adopters are companies that generate material, tangible benefits from utilising AI today. While such examples are currently in short supply, as workers and companies navigate the ‘jagged frontier’ of this technology, they will steadily multiply. Companies with large, proprietary datasets, such as CME or Mastercard, are key examples.
We expect AI winners to continue forming a material proportion of equity portfolios, and we remain vigilant for signs that the scaling law is breaking down or that physical world constraints are proving an insurmountable hurdle.
Colm Harvey is portfolio manager at Sarasin & Partners