Research in Action: Hype or reality? How investors should approach the AI boom
In this episode, Portfolio Manager Denny Fish and Research Analyst Shaon Baqui explain why artificial intelligence (AI) is driving big returns for some tech stocks, and what investors should consider as the technology continues to evolve.
32-minute listen
- Enthusiasm for AI has led to a significant rally in technology stocks in 2023, leading some investors to wonder if this could be a repeat of what happened at the advent of the Internet in the late 1990s.
- But while valuations for some companies have risen, they have not come anywhere close to the same heights as the dot-com bubble.
- More importantly, the potential for earnings growth looks real – and is already materializing in some areas of the tech sector and market overall.
Carolyn Bigda: From Janus Henderson Investors, this is Research in Action. A podcast series that gives investors a behind-the-scenes look at the research and analysis used to shape our understanding of markets and inform investment decisions.
It’s been less than a year since ChatGPT, the tool powered by a large language model – a form of artificial intelligence – made its world debut. But already this advanced form of AI is shaking things up in a big way. Portfolio Manager Denny Fish, who heads Janus Henderson’s Technology Sector team, describes it this way.
Denny Fish: It’s become consumable, and that is what’s most important.
Bigda: Some technology stocks are already getting a big boost from this leap in AI, says Tech Research Analyst Shaon Baqui.
Shaon Baqui: Yes, it’s been a monster year for a lot of these quote/unquote AI beneficiaries.
Bigda: But has the market moved too far too fast, or are we only starting to unlock AI’s potential value for companies – and investors?
I’m Carolyn Bigda.
Matt Peron: And I’m Matt Peron, Director of Research.
Bigda: That’s today on Research in Action.
Denny, Shaon, welcome to the podcast.
Fish: Thank you. Glad to be here.
Baqui: Thanks for having me.
Bigda: So, ChatGPT launched at the end of November 2022, and there has been a lot of excitement about what it can do, but, also – and perhaps more importantly – what it can potentially do in years to come. What is all the excitement about, Denny? What is AI capable of doing now that it wasn’t able to do before?
Fish: It’s a great question. And, importantly, if we think about the evolution of ChatGPT, the defining moment was really when ChatGPT, built on GPT-3, was released in November of 2022 – and that was the buildup of five or six years. There was a GPT-1, there was a GPT-2, but it was really GPT-3 that mattered when it launched. And the reason it could launch – this is important – is because we finally got to the point where we had access to all of the data on the Internet; we had large language models capable of processing the number of parameters you would need to actually train on all of that data; and we had access to an effectively unlimited amount of compute – GPUs [graphics processing units] and the like – that allowed you to actually perform the computation that was necessary.
So, all those conditions came together. And why that’s important is we now have the first iterations. You can see ChatGPT, it’s a great virtual assistant. It augments search. For consumers, it’s straightforward. It’s a very pleasant experience. You’ll get some wrong answers from time to time, but that’s the evolution of technology. So, that’s an obvious one.
Software development with [GitHub] Copilot – Copilot was introduced at pretty much the same time as ChatGPT. Now, you can go through the software development and debugging process, do a lot of the rote work that software developers would’ve historically done, and increase productivity by multiples compared to what you could do prior to ChatGPT.
Furthermore, for people who don’t even know how to code in something like SQL, which is a database query language – now Copilot does it for them. Now you can go and create queries and ask questions of data that you just otherwise weren’t able to as a business analyst, potentially. The productivity enhancement is significant.
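As a toy illustration of the SQL use case Denny describes – the table, figures, and plain-English question here are all invented for this sketch – an assistant might turn a question like “what are total sales by region?” into a query the analyst never had to write by hand:

```python
import sqlite3

# Hypothetical example: the kind of SQL an AI assistant might generate for
# a business analyst who asks, in plain English, "what are total sales by
# region?" The table and data are made up for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("East", 120.0), ("West", 80.0), ("East", 50.0)],
)

# The assistant-generated query:
query = "SELECT region, SUM(amount) AS total FROM sales GROUP BY region ORDER BY region"
for region, total in conn.execute(query):
    print(region, total)
```

The point isn’t the query itself, which is simple here, but that someone with no SQL background can go from question to answer without learning the language first.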
Other areas that are moving really fast are marketing automation and image generation. We’re seeing a lot there when you think about how you can iterate creatively. That’s really important. Being able to do dynamic A/B testing of websites and ad units is a really, really interesting use case.
We’re starting to see the ability for things like ChatGPT-enabled services to effectively write a 10-K or 10-Q. All of this very basic foundational work that you might do in legal or accounting or other professions, we’re now seeing through generative AI that you can really create this stuff on the fly effectively and accurately.
And so, we’re seeing a lot of those use cases. Clearly, the market’s really enthusiastic about it. And it’s one of these things, these big generational shifts in technology, like what happened with cloud [computing] from the middle 2000s all the way through to today, was you overestimate what’s going to happen in the next year and you completely underestimate what’s going to happen over the next decade. And we’re a little bit in that cycle right now, but, boy, it’s moving fast.
Bigda: So, if you had to sum it up, is it that generative AI has increased its, I guess, processing power and also its usability?
Fish: Yes. It’s become consumable, and that is what’s most important. We’ve been doing AI as a society for 30 years. We just haven’t been able to put it in a consumable format where the average person can grab ChatGPT or some sort of application that’s developed through the API – an application programming interface – into ChatGPT to do really unique things. And we’re there now. And so, perfect example is, you can communicate with ChatGPT via voice text. And so, it’s just easy, and that’s what’s starting to proliferate now.
Bigda: It sounds like, Matt, that even you and I could maybe use AI at this point.
Peron: It’s funny you say that. We could. It is getting, to Denny’s point, consumable. And so, an hour before we came down here, I was just offered a service, a model that will take a conference call from a company and summarize the key points of the conference call and the key debates, so save you some time there.
Bigda: Shaon, what are some other real-world applications that you’re seeing?
Baqui: Denny walked through a lot of these applications, but I think at a high level, when most people think of generative AI, the most obvious use cases are around areas like search, image recognition and generation, and video creation. I think a lot of folks have tinkered with ChatGPT to curate a trip to Cabo [San Lucas] or maybe create a funny picture of a cat on a motorcycle. But aside from that, I think the real-world use cases are much broader and can have a profound impact on productivity.
So, I’m talking about areas like office productivity, right, so using generative AI to create emails, Word documents, and PowerPoint presentations. Denny touched on code generation and the man-hours that could save a programmer. Digital marketing: If you’re a start-up, imagine not having a marketing budget but being able to use AI to produce blogs, social media posts, sales emails, ads, etc.
And then one area that’s not talked about a lot but it’s really exciting, is in the healthcare industry. So, imagine being in an emerging nation or a rural area where you’re hundreds of miles away from your nearest doctor, but being able to use AI to self-diagnose issues by issuing this prompt of your symptoms, that could have a very transformational impact.
And then the last one I want to talk about is drug discovery. Historically, this has been a very long, convoluted process that takes a lot of man-hours. But being able to use these large AI models to analyze pathogens and biomolecules at a rate that’s significantly faster than what a human can do can just really accelerate that drug deployment process and, hopefully, at the end of the day, help us find these cures much faster than we have historically.
Peron: One thing we do know is the picks and shovels are going to benefit from this. So, Shaon, that’s your area, of course. Can you lay out for us the landscape both in terms of demand, as well as the industry structure? How many players are really going to be able to have chips that can deliver the kind of processing power that AI is going to need?
Baqui: Sure. It’s a great question. I think one of the keys to really unleashing the power of generative AI, as Denny alluded to, is this breakthrough we’ve seen in accelerated computing. I’m talking about things like GPUs, or graphics processing units, or application-specific processors, or ASICs.
So, just stepping back a little bit, creating these massive trillion-parameter models, it requires just a tremendous amount of compute resources. So, this is basically doing, in a matter of months, what would’ve taken years on a traditional CPU. So, thanks to this innovation in parallel processing for the GPU, we’ve really been able to speed that training process up.
And what’s interesting is that we’re not really stopping there. The size and complexity of these large language models are scaling a lot faster than the actual compute power of some of these chips. I’ll give you an example: OpenAI’s latest iteration of GPT is called GPT-4. That’s supposedly based on more than 1 trillion parameters, up from roughly 175 billion parameters for GPT-3. And that happened over the course of six months. So, in six months, we saw roughly a sixfold increase in model size.
Now, Moore’s law[i] will tell you, however, that the amount of compute performance in a chip doubles every two years. So, the amount of computing infrastructure out there needs to grow substantially to support a) just the absolute number of large language models being deployed, but also b) the rising complexity of those models, as well.
But I think what’s interesting, Matt, is that, from a semiconductor perspective, the insatiable demand for compute does create some challenges. The first is around supply. Right now, there’s this massive shortage of GPUs out there, given the big rush to deploy these large language models. Everyone across the supply chain, from foundries to packaging houses to testing companies and even memory companies, are rushing to meet this inflection that really nobody saw coming six months ago.
And the other side is really cost. Simply put, the world needs more GPUs, and these are expensive and hard to find. What we’ve seen is these large hyperscalers actually go out and design their own custom processors to really bring down costs over time. And in other cases, we’ve seen hyperscalers look for a second source.
So, I guess, long story short, clearly, some challenges around supply and cost in the near term. But I think over time, the market for AI should be up and to the right for the next several years, and I would expect the amount of innovation across chip design and manufacturing really to step up meaningfully to meet that demand.
Bigda: Denny, AI is enabling a lot of other mega tech themes that you talk about, like cloud computing. And so, can you talk maybe about how AI might already be impacting these other mega themes in the sector?
Fish: Yes, absolutely. If we think about a couple of the mega-themes of tech over the last ten or 15 years, cloud has clearly been one of them. Unless you have access to data, unless you have access to compute – and in an infinite way – this just doesn’t work. And so, cloud is the foundational underpinning for AI to be broadly applicable. So, that’s really important.
And then, if we think about how compute has disaggregated to the edges, and the whole concept of the Internet of Things, that’s what’s going to allow us to actually do things like inference at the edge. Shaon was talking about training and the importance of GPUs, but once you have that model, then you have to deploy it actively. It’s kind of like development versus production, as we would have called it in software development many years ago. Training and inference are that same concept.
They’re all tightly linked, and this is important for tech for multiple reasons. One, it’s another driver for picks and shovels, as Shaon had mentioned, in terms of core infrastructure to support it. But, more importantly now, it infuses a new level of value that companies can actually provide to their customers.
And so, a really good example of that is software-as-a-service companies and, particularly, vertical software-as-a-service companies that specialize in construction or healthcare or real estate or whatever it may be. They have really, really unique datasets. And those are proprietary datasets. And so, they can either take those proprietary datasets, train on them in a unique way, and then be able to provide really unique solutions and insights to their customers that they can actually monetize. Or they can pair that data with publicly available data to create unique solutions as well.
So, we’re already seeing multiple companies start to come out with incremental pricing for AI-based services that are meaningful to the business. It’s just not fluff, and it’s just not talk. It’s like, here’s what we’re delivering, and here’s what we’re going to charge. And, wow, that is very incremental to what the software companies have charged historically.
If we rewind the clock 18 months, there were significant changes in the ecosystem around data privacy that really impacted the ability for Internet properties to actually measure and target digital advertising. And now with AI, and infusing that into their processes, they’re finally able to target and increase the return on ad spend. And so that creates a flywheel of investment that continues to build upon itself.
So, it’s both being able to sell new products and services to your existing customer base, it’s the enablement of those underlying foundational technologies, and then it’s the ability for companies to actually leverage AI to improve the services that they’re actually providing to consumers as well.
The other thing that doesn’t get talked about as much is that there are a lot of companies out there – a great example – that spend a ton of money on customer service and provide a horrific experience. And so, it’s what I call the worst-in-class, best-case opportunity set for AI, because there’s so much being spent, it’s being spent really inefficiently, and with AI you could really improve the productivity and the income statement, potentially.
Peron: But Denny, you’ve always taken the long view in your work and appropriately so, and you said earlier that people can underestimate the long-term impacts. And then you said they overestimate the short-term impacts. Right, so, are you cautioning us to say, yes, there’ll be these new applications, but don’t expect your airline reservation experience to improve overnight? It’ll be more evolutionary than that. Is that what you’re saying? And there’s risk then, potentially, for a slower roll than people might expect?
Fish: Yes, completely. And so, there’s definitely going to be stuff that’s iterative in nature and it’s evolutionary and not revolutionary. And then, there are areas that are benefiting from AI that just have severe supply/demand imbalances, like GPUs, for example. And that is real and that is near term, and it’s probably the most significant supply/demand imbalance we’ve ever seen in tech.
It’s a good question because I think what the market’s doing is the same thing that we’re doing right now, and that is, if you take a whiteboard, and you draw a line down the whiteboard and you just ask yourself, each company, are you on the left side of the ledger, meaning you’re on the wrong side of time as it relates to AI, or are you on the right side of the ledger and you’re on the right side of time? So, you’d better figure that out because the multiples of those stocks are going to get impacted just by that perception. And then there are a bunch of companies that sit on the line; can’t quite figure out if they’re on the left side of the ledger, right side of the ledger, and things are moving.
And so, that’s what the market’s figuring out now. That’s important to get that structural long-term narrative right. For many companies, the monetization is probably not going to happen until ‘24, ‘25, ‘26, but, nonetheless, the market is excited about what that could potentially bring. That’s what I call the art and the science of what’s going on.
One thing that we’ve been doing, and I’ve been asking Shaon to do on certain companies, is to do a real serious range of outcomes of what’s even plausible or possible, depending on where the valuations of those names are. And if we’re in the realm of reason, then, obviously, we own the stocks. And if things have gone so far out of the realm of reason that it’s not even practical any longer to support the valuation, then we can’t own the stocks.
You have to have your valuation overlay. You’ve got to have your art of what the market’s going to do in the near term because we’re so early into AI. And investors have just come off of 20 years of blissfulness of cloud computing and what that meant and the returns that you could make, and they don’t want to miss out on the next 10 or 15 years.
Peron: So, if I read that back to you, there are two dimensions. I like your whiteboard with who’s on what side, and the market is going to make mistakes in that dimension. That’s one opportunity, if you will, for people like you and your team. And then the second dimension is valuation, because some things that look super expensive now will, five years from now, have you saying, oh, my God, you could’ve bought that at that price – that’ll look really cheap, considering what it’s done; and vice versa, where names were overhyped, if you will, and didn’t deliver. We have two ways to play this, in some sense, as an active manager. Is that…?
Fish: That’s right, yes. That’s exactly right.
Bigda: Shaon, what’s your viewpoint, especially since you’re covering the makers of the GPUs? What’s the dynamic there?
Baqui: I think the answer is, it really depends. In some cases, like AI semiconductors and cloud services, like Denny alluded to, the returns are absolutely justified. We’re already seeing AI having a profound impact on revenues and on orders.
We talked about GPUs. This accelerated GPU opportunity is a market that’s more than doubling to more than $30 billion. Frankly, we thought it would take another three to four years to reach that level for GPUs. Now, when we look forward another three or four years, the market could potentially triple to over $100 billion. So, there’s definitely an inflection happening on the GPU side. And, again, there’s a path to $100 billion. So, very, very big numbers; numbers we didn’t think were going to be possible even six months ago. So, we’re seeing that play out in real time.
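As a rough sanity check of those figures (the calculation is our illustration, not from the discussion), the implied annual growth rate of a roughly $30 billion market reaching $100 billion-plus in three to four years is steep:

```python
# Back-of-the-envelope sketch (our illustration, not from the discussion):
# the compound annual growth rate (CAGR) implied if an accelerated-GPU
# market of roughly $30 billion reaches $100 billion in 3-4 years.
def implied_cagr(start: float, end: float, years: float) -> float:
    """Annualized growth rate implied by a start value, end value, and horizon."""
    return (end / start) ** (1.0 / years) - 1.0

for years in (3, 4):
    rate = implied_cagr(30e9, 100e9, years)
    print(f"{years}-year path: ~{rate:.0%} per year")
```

That works out to roughly 50% a year on the three-year path, or about 35% on the four-year path – unusually steep sustained growth for a market already measured in tens of billions of dollars.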
Cloud computing is another area. We’ve already seen one of the larger hyperscalers quantify AI as having a 100 basis-point[ii] impact or a positive tailwind to their cloud services growth rate just in three months after launching these services. So, definitely an opportunity over time to be orders of magnitude larger as they monetize these new services, such as coding and productivity tools.
Yes, it’s been a monster year for a lot of these, quote/unquote, AI beneficiaries. But I think in many cases, like the ones I alluded to, the returns are absolutely justified given we’re still bottom of the first inning or even top of the first inning of a pretty transformational opportunity that can be much, much larger in magnitude over time.
Fish: And something I would add on there is, it is not optional to not play in this market.
Bigda: What do you mean by that?
Fish: That means companies, governments – and I’ll go into government in a moment – they don’t have the option to decide, I’m not going to participate in AI. Otherwise, you will be left behind, and the train is leaving the station. I think that’s really important. To Shaon’s point, the capex is justified because you have to do it to compete right now. And that’s across many, many different sectors.
And then I just wanted to highlight government because we haven’t talked about it. We’ve talked about enterprises and consumers. This is probably the single biggest national security discussion that’s going on globally right now, is the impact of AI and how governments can control access to key enabling technologies and stay ahead of each other. And there’s no better example than the U.S. and China right now in the battle for AI supremacy. And that is really important because that’s just one other driver that layers on top of all of this. And so, it’s a pretty amazing time in terms of what we’re seeing.
Bigda: As this pie grows bigger and bigger, does it create opportunities for what might be non-traditional AI players in the tech space to sort of get in the game, so to speak?
Baqui: Yes, absolutely. I think when people think of AI, they naturally associate GPUs and the big computing companies as the real beneficiaries. But we see AI as a rising tide across things like memory, networking, optics, even semi-cap equipment or manufacturing equipment to produce these large, very complex chips. We really see AI as this rising tide for the entire semiconductor industry.
Over time, at the end of the day, you’re going to need more memory to process those large datasets, more high-speed networking optics to move that data, and more hard disk drives and solid-state memory to store it. All that feeds into the need for more capital intensity to produce these large semiconductors. So, there’s definitely been a nice pull-along effect from some less obvious players that historically have not been thought of as cloud computing or AI-first companies.
Fish: Let me add something else onto that, too. Shaon hit on a lot of the important components that are likely to benefit. If you actually think about it, so the GPU right now is probably the most important piece of technology for enabling AI. The GPU does not exist without design software. The GPU cannot get produced without semiconductor capital equipment. Semiconductor capital equipment doesn’t really do anything on its own unless it goes to a foundry. So, when you think about this supply chain that enables it, the interdependencies are really, really tight. And so, while you have very, very pronounced benefits in, say, GPUs and accelerated computing, that clearly flows downstream over time in terms of all of the enabling technologies that actually support the development and the production of that GPU.
Bigda: So, Matt, when you’re thinking about the market as a whole and allocating capital, what are some of the tough questions that you’re asking this Tech team when it comes to the case for investing in AI right now and separating maybe a little bit of hype from what the long-term value potential is?
Peron: Yes, all the questions that we’ve been kicking around are certainly topical: How long is this buildout cycle, the goldrush, if you will? What’s the duration of that until we have enough of these models built out that the demand starts to curve down? That’s unknowable, but I think the team has navigated those cycles in the past very well and identified the long-term secular trends, so we’re lucky to have the team there.
And then, to your point, that does inform our market valuations and things like that because you get a sense of the growth rate in the various sectors, and you can do the buildup that way. So, it’s been informative just to talk to the team, understand both the capex cycle and then the software cycle that’s going to follow on, and the revenue that’ll start to accrue there.
An important piece of this – and Denny was talking about it – is that it’s really important to keep up in your industry. If you’re the CEO of an operating company, you’re going to need this; to Denny’s point, it’s going to be an imperative. And when you think about what that could potentially do to margins for the market as a whole, that’s something we’re thinking through. Management [teams] over the past cycle have been really good about managing corporate margins – protecting margins, expanding margins. It’s been a terrific margin cycle, if you will, a long margin cycle. Could this give us another 10 years of a margin cycle here? The cloud did that before, and this could be the next iteration of that. So, it does have broader market implications, as you say.
Bigda: Let’s ask the million-dollar question, though, which is, when we’ve seen these past productivity booms that have been driven by tech, they’ve sometimes ended up in an equity bubble that has popped. Do we think that there’s potential for that this time around?
Fish: There’s always potential for a bubble when there’s this amount of significant change that is upon us. And I’d be surprised if we didn’t enter a bubble at some point. But contrary to the last really big – so, I mean, valuations got out of whack in tech in ‘20 and ’21 – but if we want to really go back to the last big tech bubble, it was the advent of the commercial Internet in the late ‘90s and into the early 2000s. And that burst massively. But there were a couple of fundamental problems there. One was there were a lot of companies that should’ve never been public companies, that did not have business models, did not have unit economics, and they just, frankly, disappeared.
The other fundamental problem was that the big-cap tech companies traded to 100x earnings, ridiculous levels. We are still at a point right now where the largest technology companies trade anywhere between 16x earnings and 35x earnings. You could argue whether 35x is elevated. You could compare these tech companies to consumer staples companies, and they actually look pretty similar. But we’re still not anywhere close to what you would define as bubble territory outside of the random speculative stuff that’s happening under the market. But, broadly speaking, the market so far seems to be acting rationally and hasn’t gotten way out of control. But, clearly, there’s the potential for that.
Bigda: Shaon, would you agree?
Baqui: Yes, I think just being able to separate hype from reality in many of these cases helps us out a ton. Who are the real AI beneficiaries from the fake ones? And being able to quantify what the incremental opportunities for AI are for a lot of these businesses can really help us frame whether or not a business can grow into some of these elevated valuations that they are trading at today.
And then maybe the third piece is – and Denny alluded to this earlier – we need to identify those AI losers, the ones that are on the wrong side of the ledger. So, a somewhat long-winded answer, but yes, we need to be more nuanced as we evaluate the opportunity set here. But at the end of the day, I think as a team we really do believe that AI is much more reality than hype, just given the transformational impact it’s going to have on how we work, play, and communicate.
Peron: I thought the million-dollar question you were going to ask is, will AI kill all the humans? But, okay, get that one next time.
Bigda: Maybe the million-dollar question should be, will we be sitting in these chairs a year from now?
Peron: Will Directors of Research exist?
Peron: That’s fair enough.
Fish: We’ll [let AI] just create this thing.
Bigda: Denny, Shaon, thanks very much for your time today. We really enjoyed getting your insights on what’s next for artificial intelligence.
Next time, we’ll be joined by Luyi Guo, a research analyst on our Healthcare team, to talk about the excitement surrounding a new generation of obesity therapies and what that means for the biopharma companies developing these medicines. We hope you’ll join.
Until then, I’m Carolyn Bigda.
Peron: I’m Matt Peron.
Bigda: You’ve been listening to Research in Action.
[i] Moore’s law is the principle that the speed and capability of computers can be expected to double every two years, as a result of increases in the number of transistors a microchip can contain.
[ii] Basis point (bps) is a common unit of measure for interest rates and other percentages in finance. One basis point is equal to 1/100th of 1%, or 0.01%.