
Exploring AI’s Ever-Evolving Value Chain

Daniel Pilling, Portfolio Manager and Sr. Research Analyst
Kevin G. Murphy, CFA, Sr. Director, Client Relations

June 26, 2025

Artificial intelligence (AI) is reshaping the foundation of digital infrastructure across industries and geographies. Sr. Director, Client Relations Kevin G. Murphy, CFA, and Portfolio Manager and Sr. Research Analyst Daniel Pilling examine how the AI value chain is evolving—from model training and inference to geopolitical dependencies and compute constraints. Drawing on both quantitative insights and structural trends, they explore the enduring demand for AI compute, the intensifying global race for semiconductor leadership, and the emerging bottlenecks that could redefine the pace and direction of technological progress.

Transcript

Kevin Murphy: Let’s start with a very high-level view here and talk about the global AI landscape. So, Dan, help us establish a frame of reference. Tracking the AI landscape to me is like using an old-fashioned paper roadmap, which you may or may not be familiar with depending on your age. But before GPS, we used to pull a map out of the glove box, figure out where we are, and figure out where we’re going.

Trying to understand the lay of the land for AI, in my mind, is like trying to read one of those roadmaps while it’s being written and expanded—not just forward, but up, down, left, right. A lot of changes happening. And, more importantly, trying to figure out where you are on that map. You know, the “you are here” dot.

I think of that “where you are” as living in the quantum realm. You really can’t pinpoint it because it’s moving so quickly. So give us an idea of where we are. And I’ll start with a question about the state of investment right now. Last year, when we had a similar conversation, we talked about all the money and bandwidth going into training these models to help give us better answers.

With the eventual switch to inference, where are we on that paradigm? Have we switched to more money and resources being spent on inference? Is it still in the training realm?

Daniel Pilling: I think it’s being spent on both, actually. And we can talk about what’s happening on the algorithmic side to maybe substantiate that point. On the algorithmic side in terms of training—we’re still training larger models today. A good data point might be Meta Platforms’ model from last year, called Llama 3, which had about 400 billion parameters.

The next one, called Llama 4 Behemoth, is going to have 2 trillion parameters, right? So it’s much bigger. We’re still sort of building these big, big models. We spoke to a few neuroscientists a few months ago, and according to them at least, the human brain has close to 100 trillion. So there’s still a way to go to even get to our level. But I think the bottom line is the models are still getting bigger, which means you have to use more computation for training on that side.

The second big thing that happened—and frankly, that’s a DeepSeek ramification, the Chinese model DeepSeek—is that we can do something called reinforcement learning without humans. Reinforcement learning is sort of this idea that whenever you use ChatGPT, for example, sometimes it gives you two options. You click on the option that you prefer, and you train the model. So the human is training the model.

Going forward, the human is not needed anymore, and the model can learn by itself, effectively by trial and error. And that’s a big deal for training, right? Because that means the longer you train on reinforcement learning, the better the model gets, which means again that expands the demand for training. So not only do you have bigger models, but you’re also training that bigger model for a longer time using reinforcement learning. And now the third point I make is more inference-related. We had sort of a big breakthrough on the 12th of September, I believe, 2024, when the OpenAI reasoning model came out. The idea there is to say, let’s give the model more time to think before it provides an output.
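To make the “reinforcement learning without humans” idea concrete, here is a minimal, hypothetical sketch, not any lab’s actual training code: a toy policy proposes answers, a programmatic checker scores them with no human in the loop, and the policy is nudged toward answers that earned reward. The questions, candidate answers, and update rule are all illustrative assumptions.

```python
import math
import random
from collections import defaultdict

# Toy checker that rewards a proposed answer; no human feedback involved.
def reward(question, answer):
    a, b = question
    return 1.0 if answer == a + b else 0.0

# Tabular policy: a preference weight for each (question, candidate answer) pair.
weights = defaultdict(float)
CANDIDATES = list(range(21))            # hypothetical answer space: 0..20
QUESTIONS = [(2, 3), (7, 5), (10, 4)]

def sample_answer(question):
    """Sample an answer in proportion to exp(weight), softmax-style."""
    w = [math.exp(weights[(question, c)]) for c in CANDIDATES]
    return random.choices(CANDIDATES, weights=w, k=1)[0]

# Trial and error: propose, score programmatically, reinforce what scored well.
for _ in range(3000):
    q = random.choice(QUESTIONS)
    ans = sample_answer(q)
    r = reward(q, ans)
    weights[(q, ans)] += 0.5 * (r - 0.1)   # nudge toward rewarded answers

for q in QUESTIONS:
    best = max(CANDIDATES, key=lambda c: weights[(q, c)])
    print(q, "->", best)                    # converges to the correct sums
```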

What really happens is the model runs a few thousand times, let’s say, and picks the best answer in the end. And it’s sort of the equivalent for us humans, right? If you or I or whoever is given a little bit more time to think, hopefully the answer is better. It’s as simple as that. But the ramification is enormous, right? Because that means any company in the world can say, “I’m going to invest more computational dollars in this answer. I’m going to let the AI think for a minute, two minutes, or maybe a month, to figure out something really important.”

And that means also inference has become much, much, much more compute-intensive. And so if you look at it, basically both continue to be very important. Training has some interesting elements of growth, but so does inference. And, ultimately, inference will probably become bigger than training, simply because there’s probably more people and things calling these models.
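The “give the model more time to think” idea Pilling describes corresponds, in one common form, to best-of-N sampling: generate many candidate answers and keep the one a scorer rates highest, so answer quality and compute cost both scale with N. The sketch below is purely illustrative, with a stand-in generator and scorer rather than any real model.

```python
import random

def generate_candidate(prompt, rng):
    """Stand-in for one sampled model response: a noisy guess at sqrt(2)."""
    return 1.4142 + rng.gauss(0, 0.05)

def score(prompt, candidate):
    """Stand-in verifier / reward model: higher is better (closer to x*x = 2)."""
    return -abs(candidate * candidate - 2.0)

def best_of_n(prompt, n, seed=0):
    """Spend roughly n times the inference compute; keep the highest-scoring candidate."""
    rng = random.Random(seed)
    candidates = [generate_candidate(prompt, rng) for _ in range(n)]
    return max(candidates, key=lambda c: score(prompt, c))

for n in (1, 10, 100, 1000):
    answer = best_of_n("square root of 2", n)
    print(f"N={n:4d}  answer={answer:.6f}  error={abs(answer - 2 ** 0.5):.6f}")
```

Each additional candidate is another full pass through the model, which is why this style of inference can be so much more compute-intensive than a single response.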

Confronting Physical Limits to Exponential Growth

Kevin Murphy: The way you describe it, it sounds like an exponential growth algorithm for sure. What are the physical limitations right now to that kind of exponential growth on both sides of training and inference?

Daniel Pilling: The tendency is to talk about electricity and the potential of running out of electricity. And it’s interesting. Actually, if you look at China, they’re investing heavily in new solar energy, wind energy, nuclear energy. And sort of the same thing is happening in the U.S., just on a much smaller scale. The U.S. is probably more driven by the big hyperscalers trying to force investments.

But, bottom line, it seems that the biggest bottleneck is likely electricity and how much of it we have. I would argue the second biggest bottleneck might be, over time, that if we have self-driving cars, humanoid robots doing things, and, let’s say, an AI iPhone—all of that actually requires a lot of silicon.

And, as you may remember from COVID, it’s a notoriously long-dated sort of supply chain. We may find ourselves in a situation where, at some point, maybe we just run out of wafer capacity to do all these great things. Now, obviously, that would be a nice problem to have, right? And it depends how the scaling goes of the various things that we’re talking about. But that could be a second bottleneck sooner or later.

Maybe, if you even just think about ASML’s EUV [extreme ultraviolet], the lead times there can be anywhere from 18 months to longer, so that takes a long time to add more capacity. Right?

Competing Globally While Navigating Supply Constraints

Kevin Murphy: You mentioned earlier DeepSeek in China. Where is China in the development path right now? And why does it matter? Why do we need to pay attention to how far along they are?

Daniel Pilling: Yes, I think China has done amazingly well with DeepSeek. And I think the real reason behind that is that, as in any other country in the world, there are a lot of smart people in China. And this is an algorithmic problem. You experiment with a problem, which means you can throw mathematicians, physicists, physics majors, etc., at the problem, and they will come up with something pretty amazing.

Now, the problem for China is that they do not have the local semiconductor manufacturing capacity. And especially within that—today they can manufacture at seven nanometer, which is sort of five-to-six-year-old technology. But they cannot really go below that, because, for example, they lack ASML’s EUV lithography equipment, which basically means if they want to go lower, it’s going to be very, very inefficient in terms of yields.

And that then means that if you’re China today, you cannot purchase more NVIDIA chips, because you’re export restricted. I don’t know whether that changes or not, but unlikely. We’ll see. And then, secondly, you cannot manufacture your own, because you’re stuck at seven nanometer.

And if you had seven nanometer chips, maybe to put it in comparison—so NVIDIA’s Blackwell chip, the latest one that came out last year, is about three to four times better on training versus the previous one. They tell us they’re going to bring out a new chip every one-and-a-half years.

If we look five years from now, that would be about a 30-times improvement in the capabilities of NVIDIA’s chips, if they can keep delivering three to four times each generation. It’s better software, better systems, but also just going from two nanometer to whatever the new node will be at that time, whereas China can’t do that. You would literally be comparing something that’s five years old now to something that’s 30 times better. And it just won’t scale. If you use these old chips, you can’t build these enormous clusters, even if you have all the electricity in the world. The argument would be that it’s going to be very difficult for China—not because they don’t have the people to do this, but because they will not have the chips anytime soon. And maybe the final point I’d make: ASML Holding’s EUV, for example, took 15 to 20 years to develop, and China is nowhere near developing something like this internally. So, difficult.
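As a rough check on that arithmetic (an illustration of the compounding, not a forecast): a new chip roughly every one-and-a-half years means about three full generations in five years, and gains of three to four times per generation compound to roughly 27x to 64x, which is where a figure of “about 30 times” or more comes from.

```python
# Rough compounding check of the chip-improvement math (illustrative, not a forecast).
cadence_years = 1.5                     # stated: a new chip about every 1.5 years
horizon_years = 5
generations = int(horizon_years / cadence_years)   # about 3 full generations

for gain_per_generation in (3.0, 4.0):  # stated: 3x to 4x better each generation
    total_gain = gain_per_generation ** generations
    print(f"{generations} generations at {gain_per_generation:.0f}x each -> about {total_gain:.0f}x overall")
```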

Understanding the Strategic Fragility of Taiwan’s Semiconductor Dominance

Kevin Murphy: Well, staying on the geopolitical theme, then, how does Taiwan Semiconductor play into this? And if China does start to see that as a bottleneck, why wouldn’t they just do what I think everybody thinks they’ll eventually do, which is to take Taiwan Semiconductor away?

Daniel Pilling: Yes. Taiwan Semiconductor is a very big bottleneck, as you say. Today, they are a monopoly on the leading edge, which means that every NVIDIA chip, every iPhone, anything that has anything to do with a leading application that requires a lot of compute goes through Taiwan Semiconductor. Now they did say publicly that about 30 percent of their leading edge is going to be done in the U.S.—mainly in Phoenix, Arizona.

That will help. But on the flip side, only about 10 percent of the R&D [research and development] will be done in the U.S. over time, and 90 percent will stay in Taiwan. Taiwan is really important, and a big, big bottleneck for the world, effectively. If China were to invade Taiwan, the truth is that they wouldn’t really get the chips or Taiwan Semiconductor itself. They would basically get big factories with a lot of semiconductor capex equipment.

Which is valuable, right? But they couldn’t really use it, because they would lack the spare parts from Europe, Japan, and the U.S., which would probably run out within two or three weeks. It seems like difficult math if you’re China. You’re not getting that much in terms of semis, at least, right?

Disclosures:

The views expressed are the opinion of Sands Capital and are not intended as a forecast, a guarantee of future results, investment recommendations, or an offer to buy or sell any securities.

The views expressed were current as of the date indicated and are subject to change. This material may contain forward-looking statements, which are subject to uncertainty and contingencies outside of Sands Capital’s control. Readers should not place undue reliance upon these forward-looking statements. All investments are subject to market risk, including the possible loss of principal. There is no guarantee that Sands Capital will meet its stated goals. Past performance is not indicative of future results. A company’s fundamentals or earnings growth is no guarantee that its share price will increase.

Unless otherwise noted, the companies identified represent a subset of current holdings in Sands Capital portfolios and were selected on an objective basis to reflect holdings enabling or potentially benefitting from the adoption of generative artificial intelligence.

As of June 12, 2025, Sands Capital strategies hold positions in ASML Holding, Meta Platforms, NVIDIA, and Taiwan Semiconductor. 

Any holdings outside of the portfolio that were mentioned are for illustrative purposes only.

The specific securities identified and described do not represent all of the securities purchased, sold, or recommended for advisory clients. There is no assurance that any securities discussed will remain in the portfolio or that securities sold have not been repurchased. You should not assume that any investment is or will be profitable.

GIPS Reports found here.

Further Disclosures

Related Articles

Investment Strategy
AI at Scale: The Forces Shaping the Next Decade
In this video series, Portfolio Manager and Sr. Research Analyst Daniel Pilling explores how generative AI is reshaping industries. In examining training versus inference, compute economics, and infrastructure constraints (from power grids to chip supply), he evaluates valuations and long-term opportunities for businesses able to seize the promise of the rapidly developing technology.
Investment Strategy
Valuations Don’t Reflect Potential of Gen AI
Valuations have surged, driven by AI enthusiasm. Portfolio Manager Daniel Pilling argues that semiconductors and hyperscalers are fairly priced given yet-to-be-realized revenue opportunities in evolving industries, such as robotics and autonomous vehicles.
Investment Strategy
Reimagining Work and Value in the AI Age
Compute growth and synthetic training environments will fuel industrial and domestic robots, unlocking efficiencies and redefining time’s value. Explore how AI adoption and consumer time savings may reshape economics and spending patterns.
Investment Strategy
AI Adoption Becoming a Matter of Survival
AI infrastructure has evolved from optional investment into an existential imperative. Hyperscale compute delivers sub-year paybacks and fortifies resilience against downturns. Portfolio Manager Daniel Pilling argues that organizations underinvesting in AI infrastructure risk falling irretrievably behind in tomorrow’s digital economy.
Investment Strategy
China’s Robotics Renaissance Appears More Than a Cyclical Fad
After recent trips to China, Sands Capital Sr. Research Analyst Massimo Marolo, CFA, finds that China’s industrial backbone, combined with proactive policies around market entry, R&D incentives, and antitrust oversight, creates fertile ground for automation to redefine productivity and competitiveness.