Artificial intelligence (AI) is reshaping the foundation of digital infrastructure across industries and geographies. Sr. Director, Client Relations Kevin G. Murphy, CFA, and Portfolio Manager and Sr. Research Analyst Daniel Pilling examine how the AI value chain is evolving—from model training and inference to geopolitical dependencies and compute constraints. Drawing on both quantitative insights and structural trends, they explore the enduring demand for AI compute, the intensifying global race for semiconductor leadership, and the emerging bottlenecks that could redefine the pace and direction of technological progress.
Transcript
Kevin Murphy: Let’s start with a very high-level view here and talk about the global AI landscape. So, Dan, help us establish a frame of reference. Tracking the AI landscape to me is like using an old-fashioned paper roadmap, which you may or may not be familiar with depending on your age. But before GPS, we used to pull a map out of the glove box, figure out where we are, and figure out where we’re going.
Trying to understand the lay of the land for AI, in my mind, is like trying to read one of those roadmaps while it’s being written and expanded—not just forward, but up, down, left, right. A lot of changes happening. And, more importantly, trying to figure out where you are on that map. You know, the “you are here” dot.
I think of that “where you are” as living in the quantum realm. You really can’t pinpoint it because it’s moving so quickly. So give us an idea of where we are. And I’ll start with a question about the state of investment right now. Last year, when we had a similar conversation, we talked about all the money and bandwidth going into training these models to help give us better answers.
With the eventual switch to inference, where are we on that paradigm? Have we switched to more money and resources being spent on inference? Is it still in the training realm?
Daniel Pilling: I think it’s being spent on both, actually. And we can talk about what’s happening on the algorithmic side to substantiate that point. On the algorithmic side, in terms of training, we’re still training larger models today. A good data point might be Meta Platforms’ model from last year, Llama 3, which had about 400 billion parameters.
The next one, called Llama 4 Behemoth, is going to have 2 trillion parameters, right? So it’s much bigger. We’re still building these big, big models. We spoke to a few neuroscientists a few months ago, and according to them, at least, the human brain has close to 100 trillion. So there’s still quite a way to go to even get to our sort of scale. But the bottom line is that the models are still getting bigger, which means you have to use more computation for training.
The second big thing that happened, and frankly this is a ramification of DeepSeek, the Chinese model, is that we can now do something called reinforcement learning without humans. Reinforcement learning is the idea that whenever you use ChatGPT, for example, sometimes it gives you two options. You click on the option that you prefer, and in doing so you train the model. So the human is training the model.
Going forward, the human is not needed anymore; the model can learn by itself, effectively by trial and error. And that’s a big deal for training, right? Because the longer you train with reinforcement learning, the better the model gets, which again expands the demand for training. So not only do you have bigger models, but you’re also training those bigger models for longer using reinforcement learning. The third point I’d make is more inference-related. We had a big breakthrough on the 12th of September 2024, I believe, when OpenAI’s reasoning model came out. The idea there is to give the model more time to think before it provides an output.
What really happens is that the model runs a few thousand times, let’s say, and picks the best answer at the end. It’s the equivalent of what we do as humans, right? If you or I are given a little more time to think, hopefully the answer is better. It’s as simple as that. But the ramification is enormous, because it means any company in the world can say, “I’m going to invest more computational dollars in this answer. I’m going to let the AI think for a minute, two minutes, or maybe a month, to figure out something really important.”
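The “thinking longer” mechanism Daniel describes can be sketched as best-of-N sampling: run the model several times, score each candidate, and keep the best one. Here is a minimal, purely illustrative Python sketch; the noisy sampler and the scoring function are toy stand-ins for a real model and a real verifier, not anyone’s actual system.

```python
import random

def model_sample(rng):
    """Stand-in for one noisy model run: a guess near the true value (10)."""
    return 10 + rng.gauss(0, 3)

def score(answer, target=10):
    """Toy verifier: higher is better (closer to the target)."""
    return -abs(answer - target)

def best_of_n(n, seed=0):
    """Spend n model runs' worth of compute, keep the best-scoring answer."""
    rng = random.Random(seed)
    candidates = [model_sample(rng) for _ in range(n)]
    return max(candidates, key=score)

# More "thinking time" (larger n) costs more inference compute but gives
# an answer at least as close to the target.
print(abs(best_of_n(1) - 10) >= abs(best_of_n(1000) - 10))  # prints True
```

The trade-off this illustrates is the one in the conversation: every extra sample is extra inference compute, which is why letting models “think” makes inference far more compute-intensive.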
And that means inference has also become much, much more compute-intensive. So if you look at it, both continue to be very important. Training has some interesting elements of growth, but so does inference. And, ultimately, inference will probably become bigger than training, simply because there are probably more people and things calling these models.
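The reinforcement learning without humans that Daniel describes can likewise be sketched as trial and error against an automatic check rather than a human preference click. The following Python sketch is purely illustrative: the “policy” is just a weight per candidate answer, and the hard-coded correctness test stands in for a real verifier.

```python
import random

def automatic_reward(answer, correct=42):
    """Automatic verifier: reward 1.0 for a correct answer, else 0.0."""
    return 1.0 if answer == correct else 0.0

def train(steps=2000, seed=0):
    rng = random.Random(seed)
    candidates = [40, 41, 42, 43]
    weights = {c: 1.0 for c in candidates}  # the "policy"
    for _ in range(steps):
        # Trial: sample an answer in proportion to current policy weights.
        choice = rng.choices(candidates,
                             weights=[weights[c] for c in candidates])[0]
        # Error or success: reinforce only what the verifier rewards,
        # with no human in the loop.
        weights[choice] += automatic_reward(choice)
    return max(weights, key=weights.get)

print(train())  # the policy concentrates on the verifiably correct answer, 42
```

The point of the sketch is the economics, not the algorithm: because the reward signal is automated, training can simply run longer, which is one reason training demand keeps growing even as models also get bigger.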
Confronting Physical Limits to Exponential Growth
Kevin Murphy: The way you describe it, it sounds like an exponential growth algorithm for sure. What are the physical limitations right now to that kind of exponential growth on both sides of training and inference?
Daniel Pilling: The tendency is to talk about electricity and the potential of running out of it. And it’s interesting: if you look at China, they’re investing heavily in new solar, wind, and nuclear energy. The same thing is happening in the U.S., just on a much smaller scale, probably driven more by the big hyperscalers trying to force investments.
But, bottom line, it seems that the biggest bottleneck is likely electricity and how much of it we have. I would argue the second biggest bottleneck, over time, might be silicon itself: if we have self-driving cars, humanoid robots doing things, and, let’s say, an AI iPhone, all of that actually requires a lot of silicon.
And, as you may remember from COVID, it’s a notoriously long-dated sort of supply chain. We may find ourselves in a situation where, at some point, maybe we just run out of wafer capacity to do all these great things. Now, obviously, that would be a nice problem to have, right? And it depends how the scaling goes of the various things that we’re talking about. But that could be a second bottleneck sooner or later.
Just think about ASML’s EUV [extreme ultraviolet] tools, for example: the lead times there can be 18 months or longer, so it takes a long time to add more capacity. Right?
Competing Globally While Navigating Supply Constraints
Kevin Murphy: You mentioned earlier DeepSeek in China. Where is China in the development path right now? And why does it matter? Why do we need to pay attention to how far along they are?
Daniel Pilling: Yes, I think China has done amazingly well with DeepSeek. And I think the real reason behind that is like in any other country in the world, there’s a lot of smart people in China. And this is an algorithmic problem. You experiment with a problem, which means you can throw mathematicians, physicians, physics majors, etc., at the problem, and they will come up with something pretty amazing.
Now, the problem for China is that they do not have the local semiconductor manufacturing capacity. Today they can manufacture at seven nanometers, which is roughly five-to-six-year-old technology. But they cannot really go below that because, for example, they lack ASML’s EUV lithography equipment, which basically means that if they want to go smaller, it’s going to be very, very inefficient in terms of yields.
And that then means that if you’re China today, you cannot purchase more NVIDIA chips, because you’re export restricted. I don’t know whether that changes, but it seems unlikely. We’ll see. And, secondly, you cannot manufacture your own, because you’re stuck at seven nanometers.
To put seven-nanometer chips in perspective: NVIDIA’s Blackwell chip, the latest one, which came out last year, is about three to four times better on training than the previous generation. And NVIDIA tells us they’re going to bring out a new chip every one-and-a-half years.
If we look five years out, that would be roughly a 30-times improvement in the capabilities of NVIDIA’s chips, if they can keep delivering three to four times per generation. It’s better software, better systems, but also just going from two nanometers to whatever the new node is at that time, whereas China can’t do that. You would literally be comparing something that’s five years old now to something 30 times better. And it just won’t scale. If you use these old chips, you can’t build these enormous clusters, even if you have all the electricity in the world. The argument would be that it’s going to be very difficult for China, not because they don’t have the people to do this, but because they will not have the chips anytime soon. And maybe the final point I’d make: ASML Holding’s EUV, for example, took 15 to 20 years to develop, and China is nowhere near developing something like this internally. So, difficult.
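The 30-times figure is straightforward compounding arithmetic. A quick check using the rough numbers from the conversation (illustrative ballpark figures, not vendor guidance):

```python
# A new chip every 1.5 years means about three complete generations in
# five years; at 3x-4x improvement per generation, the gains compound.
years, cadence = 5, 1.5
generations = int(years // cadence)   # 3 complete generations in 5 years
low = 3 ** generations                # 27x if each generation is 3x
high = 4 ** generations               # 64x if each generation is 4x
print(generations, low, high)         # prints: 3 27 64
```

At three times per generation the compounded gain is about 27x, consistent with the rough “30 times” in the conversation; at four times per generation it would be considerably more.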
Understanding the Strategic Fragility of Taiwan’s Semiconductor Dominance
Kevin Murphy: Well, staying on the geopolitical theme, then, how does Taiwan Semiconductor play into this? And if China does start to see that as a bottleneck, why wouldn’t they just do what I think everybody thinks they’ll eventually do, which is to take Taiwan Semiconductor away?
Daniel Pilling: Yes. Taiwan Semiconductor is a very big bottleneck, as you say. Today, they have a monopoly on the leading edge, which means that every NVIDIA chip, every iPhone, anything involving a leading application that requires a lot of compute goes through Taiwan Semiconductor. Now, they did say publicly that about 30 percent of their leading-edge capacity is going to be in the U.S., mainly in Phoenix, Arizona.
That will help. But on the flip side, only about 10 percent of the R&D [research and development] will be done in the U.S. over time, and 90 percent will stay in Taiwan. So Taiwan is really important, and a very big bottleneck for the world, effectively. If China were to invade Taiwan, the truth is that they wouldn’t really get chips or Taiwan Semiconductor. They would get, basically, big factories with a lot of semiconductor capital equipment.
Which is valuable, right? But you can’t really use it, because they would lack the spare parts from Europe, Japan, and the U.S., and it would probably stop running within two or three weeks. The math seems difficult if you’re China. You’re not getting that much in terms of semis, at least, right?
Disclosures:
The views expressed are the opinion of Sands Capital and are not intended as a forecast, a guarantee of future results, investment recommendations, or an offer to buy or sell any securities.
The views expressed were current as of the date indicated and are subject to change. This material may contain forward-looking statements, which are subject to uncertainty and contingencies outside of Sands Capital’s control. Readers should not place undue reliance upon these forward-looking statements. All investments are subject to market risk, including the possible loss of principal. There is no guarantee that Sands Capital will meet its stated goals. Past performance is not indicative of future results. A company’s fundamentals or earnings growth is no guarantee that its share price will increase.
Unless otherwise noted, the companies identified represent a subset of current holdings in Sands Capital portfolios and were selected on an objective basis to reflect holdings enabling or potentially benefitting from the adoption of generative artificial intelligence.
As of June 12, 2025, Sands Capital strategies hold positions in ASML Holding, Meta Platforms, NVIDIA, and Taiwan Semiconductor.
Any holdings outside of the portfolio that were mentioned are for illustrative purposes only.
The specific securities identified and described do not represent all of the securities purchased, sold, or recommended for advisory clients. There is no assurance that any securities discussed will remain in the portfolio or that securities sold have not been repurchased. You should not assume that any investment is or will be profitable.
References to “we,” “us,” “our,” and “Sands Capital” refer collectively to Sands Capital Management, LLC, which provides investment advisory services with respect to Sands Capital’s public market investment strategies, and Sands Capital Alternatives, LLC, which provides investment advisory services with respect to Sands Capital’s private market investment strategies, including the Global Innovation strategy, which is available only to qualified investors. As the context requires, the term “Sands Capital” may refer to such entities individually or collectively. As of October 1, 2021, the firm was redefined to be the combination of Sands Capital Management and Sands Capital Alternatives. The two investment advisers are combined to be one firm and are doing business as Sands Capital. Sands Capital operates as a distinct business organization, retains discretion over the assets between the two registered investment advisers, and has autonomy over the total investment decision-making process.
Information contained herein may be based on, or derived from, information provided by third parties. The accuracy of such information has not been independently verified and cannot be guaranteed. The information in this document speaks as of the date of this document or such earlier date as set out herein or as the context may require and may be subject to updating, completion, revision, and amendment. There will be no obligation to update any of the information or correct any inaccuracies contained herein.