Myron Scholes on Time: Understanding and building trust
In the second episode of the series, Myron explores how data mining, clustering, and reverse engineering of models can create pitfalls for investment managers. In conversation with Phil Maymin, Myron highlights the importance of building trust in the investment management industry by presenting valid, trustworthy information, and cautions investors not to rely on past performance alone.
31-minute listen
- Strategies backtested through data mining and without a foundation of theory are likely to underperform expectations and erode trust.
- Education and performance that adheres to expectations can build trust.
- Understanding your own constraints and the constraints of others helps generate new solutions.
Beta measures the volatility of a security or portfolio relative to an index. Less than one means lower volatility than the index; more than one means greater volatility.
Dividend Yield is the weighted average dividend yield of the securities in the portfolio (including cash). The number is not intended to demonstrate income earned or distributions made by the portfolio.
Standard Deviation measures historical volatility. Higher standard deviation implies greater volatility.
Derivatives can be more volatile and sensitive to economic or market changes than other investments, which could result in losses that exceed the original investment and are magnified by leverage.
Fixed income securities are subject to interest rate, inflation, credit and default risk. As interest rates rise, bond prices usually fall, and vice versa.
Foreign securities, including sovereign debt, are subject to currency fluctuations, political and economic uncertainty and increased volatility and lower liquidity, all of which are magnified in emerging markets.
High-yield bonds, or “junk” bonds, involve a greater risk of default and price volatility.
Options (calls and puts) involve risks. Option trading can be speculative in nature and carries a substantial risk of loss.
Understanding and building trust
Phil Maymin: Welcome back to the Myron Scholes podcast, On Time. Myron is Chief Investment Strategist at Janus Henderson, Professor of Finance at the Stanford Graduate School of Business, and a Nobel Laureate in Economic Sciences. Among many other accomplishments and responsibilities that would take too long to list, Myron shares his unique insights with us here. These podcast episodes are aimed at sophisticated investors and those who wish to become sophisticated investors, and are intended to be thought-provoking and perhaps even controversial. We hope you leave each episode with more questions than you started with.
Today’s episode is about truth tellers and cheaters and constraints and trust and time.
Myron Scholes: Thank you very much. In investment management, Phil, there are three important problems in implementing strategies that might enhance or reduce trust. These are the problems associated with, one, data mining; two, clustering; and three, reverse engineering of models. Many managers use historical data to test their new models. Obviously, a long track record of good performance along the lines of promised returns builds trust. So the more you can demonstrate through actual experience that you have a good track record and good performance, as you said, the more people will trust you over time as you generate these returns.
Myriad managers, however, provide historical back-tests of strategies and show significant results that, unfortunately, will not replicate going forward. They looked at previous data, found a result they liked, and justified that result after the fact. Justifying it after the fact means it is very unlikely to reproduce itself going forward, and yet they say, “We can produce tremendous performance for you.” This is one reason why cheating in this way causes confusion among clients. If some managers present their historical results truthfully, grounded in basic economic theory or science, while others just data mine the past, then you end up in a situation where there’s confusion, because the results can’t be trusted.
So clients might like outcomes that require combinations of securities and derivatives, and managers and/or investment banks work the historical data to produce clusters with the desired risk characteristics, coupled always with demonstrated superior returns. So the investors I talk to and address are always confronted by cross-sectional results – not only time series, but time series married with cross-sectional results that tend to produce great-looking clusters.
For example, in the 2007/2008 period, investment banks generated portfolios of subprime mortgage contracts and sold them to investors, who needed to trust their advisors to buy these subprime mortgage products that promised high rates of return. The product was built under the assumption that defaults on subprime mortgages in Stockton, California were not related to mortgages in Baton Rouge, Louisiana or in Miami, Florida – three independent clusters. Idiosyncratically, you might have defaults in Stockton, but you wouldn’t have the same magnitude of idiosyncratic defaults in Baton Rouge, Louisiana at the same time.
Unfortunately, when the crisis unfolded in 2007/2008, where did subprime mortgages default? Stockton, Baton Rouge, and Miami. They all defaulted together. As a result, the performance was horrible, even though the back-test had never shown a shock as large as what happened in 2007/2008. The promised returns can’t be met in many cases because of the data mining and the clustering assumptions, for which there was no theory, economics, or underlying justification.
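Myron's point about correlated defaults can be made concrete with a toy simulation. This is an illustrative sketch, not part of the conversation, and the parameters (500 loans, a 5% baseline default rate, a rare common "housing bust" shock) are hypothetical, not calibrated to any real mortgage product:

```python
import random

random.seed(0)

def pool_defaults(n_loans=500, p_default=0.05, p_stress=0.0,
                  p_stressed_default=0.5, trials=2000):
    """Simulate total defaults in a loan pool over many trials.

    With p_stress > 0, a common shock (a nationwide housing bust)
    occasionally raises every loan's default probability at once --
    a crude one-factor stand-in for correlated defaults. With
    p_stress == 0, defaults are independent across loans.
    """
    totals = []
    for _ in range(trials):
        stressed = random.random() < p_stress
        p = p_stressed_default if stressed else p_default
        totals.append(sum(random.random() < p for _ in range(n_loans)))
    return totals

def percentile(xs, q=0.99):
    """q-th percentile of a list of simulated outcomes."""
    return sorted(xs)[int(q * len(xs)) - 1]

independent = pool_defaults(p_stress=0.0)   # Stockton, Baton Rouge, Miami unrelated
correlated = pool_defaults(p_stress=0.05)   # 5% chance they all fall together

print("99th-percentile defaults, independent model:", percentile(independent))
print("99th-percentile defaults, correlated model: ", percentile(correlated))
```

A back-test built on the independence assumption would never show a loss anywhere near what the correlated model produces in its tail, which is exactly the pattern the subprime products exhibited.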
Why was I getting this extra return on subprime mortgages? Because it looked as though defaults were random, not a contagion of defaults across a broad class of mortgages together. If we pick the best winners of the past, they obviously might not reproduce themselves going forward. In basketball, my hero Steph Curry may have a son who is a great basketball player, but that son is unlikely to be as good as Steph was – just as Dell Curry, his father, could not have been expected to produce a son as good as Steph Curry became. So, you know, we don’t know what’s going to happen by just data mining the past.
Every model has an error to it, or it would not be a model. All investment is model-based. If it were an exact science, there would be no profits to be made. If everything were known with certainty and time were not an issue, there would be no models, because there would be no uncertainty. But every model has uncertainty; it has an error, or else it wouldn’t be a model. Once investors learn that a particular manager is using a secret model that produces superior results, others will try to discover that model and copy it, or try to do it better. They will enhance the model, and the returns of the first model will be less than anticipated.
When I was a high-school student and learned about the angles of reflection and refraction, I figured out that I could go to the pool room and, using those angles, know exactly where to hit the pool ball off the bank. With that knowledge, I thought I could play Lenny, who was an expert pool shark. Lenny had many more dimensions to his model than I had to mine, and he soon took my money. I went back to my physics and learned that theory and implementation are somewhat different. So others will enhance the model, and the returns of the first model will be less than anticipated. Over time, the model’s returns are dissipated, because it’s difficult to hide superior information from other market participants, who are always looking to profit from the expertise of others. They will trade against the model and beat it. Even if you look at golf or other sports, the great stars are always learning from the skills of the past and becoming better, increasing the dimensions of the way they implement their game.
So one monitoring device that is used is a historical track record – an actual, not a simulated, track record – to gauge the validity of claims of superior performance. You need a track record. The problem with a track record, however, is this: even though it breaks the link between data mining (cherry-picking better strategies) and the actual distribution of possible outcomes, the longer the track record you need before you become trusting, the less the future will resemble the past, because others will have discovered what you’re doing and eliminated the strategy, or understood the constraints of others and innovated and created new strategies.
Some managers, unfortunately, incubate many strategies to game the control system. They incubate 20 different strategies, build track records, and then move forward promoting the track records of the successful strategies, even though the cross-section of strategies was just randomly picked. You can pick the winners from that cross-section the same way you can look at the winners in any activity and ask, “Will they be the winners in the future?” No – unless you understood what the constraints were, or what value added was being generated.
Even a five-year track record, for example, might mask the actual risks of an underlying strategy. Many strategies are forms of selling insurance – that is, earning premiums or extra returns most of the time and occasionally suffering a large loss. Many option strategies fall into this category. Holding higher-yielding bond investments provides positive returns – every year or every month you make an extra return called the carry – but occasionally suffers a large drawdown when economic conditions turn bad. If the previous track record period did not include these events, the track record is not a good predictor of the future, because you didn’t use enough historical data to judge what the future will provide.
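An insurance-selling payoff of the kind Myron describes can be sketched in a few lines (the premium, crash probability, and crash size below are invented for illustration): a short sample may contain few or none of the rare losses, so its average return flatters the strategy.

```python
import random
import statistics

random.seed(3)

def carry_strategy(months, premium=0.01, crash_prob=0.02, crash_loss=-0.30):
    """Insurance-selling payoff: collect a small premium most months,
    occasionally absorb a large loss. Parameters are illustrative."""
    return [crash_loss if random.random() < crash_prob else premium
            for _ in range(months)]

five_years = carry_strategy(60)
long_run = carry_strategy(1200)  # a much longer history

print(f"5-year mean return:   {statistics.mean(five_years):+.2%}/month, "
      f"worst month {min(five_years):+.0%}")
print(f"long-run mean return: {statistics.mean(long_run):+.2%}/month, "
      f"worst month {min(long_run):+.0%}")
```

With a 2% monthly crash probability, a 60-month record has a meaningful chance of containing no crash at all, in which case the track record shows a steady 1% a month and hides the tail entirely.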
Many so-called “smart beta” strategies – which search for factors such as dividend yield, value, or small stocks – suffer from the same problem. Strategies might be successful for many years and then fail completely, or the strategies might have been successful and others emulate them. They reverse engineer or copy them, the supply increases, and the value of the constraint is destroyed, because speculators come in to carry those risks forward by investing in those strategies.
Without economic theory or statistical acumen, investors are fooled and disappointed with investment results. They lose trust in those to whom they delegate the management of their investments – and in investment managers generally. The fact that some investment managers are cheating to some extent means that trust is lost even for those investment managers who deserve it; there’s a higher hurdle to overcome to build trust. An example I am fond of, showing the contrast between theory and statistical measurement, is that of the naïve bettor who watches the winners at the roulette table in Las Vegas. The bettor bets on a color, such as red or black, and after each loss doubles the bet on the next spin, so that, with enough money, he seemingly always wins. He never loses. Statistically, if you keep enumerating the number of times you win, it seems you win forever. But the statistician – the theorist – looks at the table and knows there are 18 reds and 18 blacks, but two greens, so it’s an unfair game. If the bettor always wins and the house never loses, how can the house stay in existence? The house wins. Either the bettor runs out of money or, basically, the bettor gives up and stops. Both the house and the doubling bettor can’t win.
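The doubling bettor's illusion is easy to simulate. This sketch (not from the conversation; bankroll and target are arbitrary) plays the martingale on red at American roulette, where the two greens make the game unfair. Most sessions reach their profit target, yet the average outcome is negative, just as the theorist predicts:

```python
import random

random.seed(2)

WIN_PROB = 18 / 38  # American roulette: 18 red, 18 black, 2 green

def martingale_session(bankroll=1000, base_bet=1, target=100):
    """Bet on red; double the bet after each loss; stop at +target or ruin."""
    profit, bet = 0, base_bet
    while profit < target:
        bet = min(bet, bankroll + profit)  # can't stake more than is left
        if bet <= 0:
            break                          # ruined
        if random.random() < WIN_PROB:
            profit += bet
            bet = base_bet                 # reset the ladder after a win
        else:
            profit -= bet
            bet *= 2                       # "double up" after a loss
    return profit

results = [martingale_session() for _ in range(10000)]
reached_target = sum(r >= 100 for r in results) / len(results)
average = sum(results) / len(results)

print(f"sessions that reached +100: {reached_target:.1%}")
print(f"average profit per session: {average:+.2f}")
```

The bettor wins most sessions, which is exactly why the strategy looks like a sure thing to a naïve observer; the rare ruins, where a losing streak exhausts the bankroll, more than wipe out all the small gains.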
This is not the same, however, as counting cards in blackjack, because there the bettor has the odds in her favor and plays many games. So, for an investment: is the bet a fair bet or an unfair bet? Or is the bet a fair bet because you understand what the risks are and why you’re getting paid to take them? In card counting, you know you’re getting paid to make the bet because the odds are in your favor. If you can do it enough times, you’ll win – and that is a big difference.
Many services provide myriad statistics decomposing the past performance of investment allocations to help judge managers’ performance. They provide historical scenario analyses to observe drawdowns at times of previous market shocks, such as the dot-com collapse of 2001, the financial crisis of 2008, or the March 2020 COVID lockdowns. This gives some comfort that the portfolio compositions will survive large shocks with designated drawdowns – that drawdowns won’t exceed these large numbers.
For the more skeptical, however, there is always an unknown scenario that will result in worse outcomes than anticipated. It’s the question of the dog that barks versus the dog that doesn’t bark – and we should worry about the dog that doesn’t bark as much as we worry about the dogs that do. For example, the government crackdown on technology companies in China, coupled with a real estate collapse and with economic supply shocks and lost output from lockdowns and COVID restrictions in Chinese cities such as Shanghai, led to far greater losses than were seen in the U.S. in 2007/2008.
Scenarios provide comfort but do not cover the entire set of possible future scenarios, and investors worry about those losses and drawdowns. Many managers quote loss statistics in terms of value at risk, or VaR, which gives the loss at two standard deviations below the mean of a normal distribution – a distribution assumed to be unchanging, measurable, and knowable for the future. Even with knowable, normal distributions that don’t change over time or at times of shock, this is a misleading statistic. It confuses investors and gives them the wrong information, leading to less trust.
For example, the S&P 500, a broad-based gauge of the equity markets, has a historically measured standard deviation of return of about 15% a year. The two-standard-deviation loss on an annual basis might be around 25%. The S&P, however, had a maximum loss of about 55% during the 2007–2009 crisis, and also about a 55% loss in the 2001 dot-com crisis. One portfolio manager at the time of the crisis broadcast that they had suffered six 10-standard-deviation events in a year. How can you have six 10-standard-deviation events in a year? It is statistically impossible to believe that could occur. The distribution was wrong; they should have used a much riskier distribution.
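To see just how impossible six 10-sigma events in a year is under a normal distribution, the tail probabilities can be computed directly (this calculation is an editorial illustration, not part of the conversation):

```python
import math

def lower_tail(z):
    """P(X <= -z sigma) for a normal distribution, via the
    complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2))

for z in (2, 3, 10):
    p = lower_tail(z)
    print(f"{z:>2}-sigma drop: probability {p:.3e} "
          f"(about one in {1 / p:,.0f} observations)")
```

A 10-sigma event under normality has probability on the order of 10⁻²³ per observation, so observing six of them in a year says the normal assumption, not the market, was wrong: the true distribution had far fatter tails.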
So how you present information to educate investors is crucial. The type of information you provide may be misleading or rewarding, and the manager who wants to build trust has to give investors reliable information they can trust over time, not information that proves to be wrong. And yet intermediary advisors who do due diligence work and the like should understand more about tail risk and how it affects building and sustaining trust over time.
Monitoring investment managers is costly for investors. Building trust is costly for investment managers. Constraining investment activities is costly for skilled investment managers; their investors suffer implicit costs in lost returns. It is necessary for the manager and the investor to trade off these costs. Less skilled managers will constrain their activities. More skilled managers will provide trust building. Those with valuable strategies built on understanding the constraints of others will invest more in trust building. Thank you.
Maymin: Wow, Myron. Thank you. That was great, and I feel elucidated and illuminated and educated, but also emotionally overwhelmed. I feel sad and frustrated for two groups of people. Let’s take them one at a time. One is the investor. What’s an investor supposed to do?
Scholes: Use Google Maps.
Maymin: That will get them home, but what do they do with their money?
Scholes: For the investor, I think that judging managers not by recent performance alone, but by trying to understand whether trust is warranted, is important. And I believe this will be much more so in the future than in the past, because investment management is going to move away from relative-value investing – the idea of outperforming a benchmark – toward more objective-based or solutions-based investing. Women are taking over more and more of the wealth in our societies as our population ages; there could be a shift of close to $20 trillion in investments over the years. And my understanding is that, while men are hierarchical, generally speaking, and enjoy outperformance, women are more objective-based: they want to grow their portfolios and preserve their wealth while keeping other objectives in mind – educating the children, providing for contingencies, philanthropy, other things they want to do. As a result, the focus will switch from outperformance to trust building, and that means the gimmicks and some of the things that were used in the past will fall away, because they won’t be as important a criterion.
Can you build trust by providing the solutions or the objectives wanted by the investing generation? Moving to “what are my objectives and how can I fulfill them” will be an important way to mitigate the costs of data mining, of clustering for outperformance, and of reverse engineering strategies. That, I think, is important, and I think we’ll find the better route on the road ahead by being objective-based, and move away from a world of trying to build trust through data mining or clustering or reverse engineering strategies.
Maymin: That’s comforting; it’s 50% comfort. The other people I’m still sad and nervous for are the managers themselves. If you’re developing a strategy you need, I don’t know … maybe some kind of self-trust. What are some things we can do? You talked about, you know, having a model with secrets. Every manager thinks that’s the golden goose: if I can have a secret model that nobody else knows, I’ll win. But you rightly point out that eventually it’ll underperform. How can I create a model that’s not secret, and be sure I’m not fooling myself?
Scholes: That creates an ability, then, to be an advisor for a person who doesn’t have the same skills to implement strategies that you might have as the advisor. So it’s about moving away from an outperformance criterion to understanding the constraints of the investment clientele. And then, given the objective of having the best return experience on a compound basis over time, and given a risk dimension, thinking about how to fashion a portfolio among assets to achieve that risk. Being able to work with clients to deduce their objectives is an important part of the investment management landscape going forward.
Maymin: That makes sense. And you mentioned earlier you want to have, not just data mining, but some kind of economic story explaining why you think what will happen. What are some ways … how do you know if your story makes sense?
Scholes: It is difficult to figure out whether your story makes sense, because all science is data mined, unfortunately. The information set of our life is just too large; we have so much data to try to digest. And the way science moves forward is really by induction, not deduction. We look at historical data, we look at the historical ways things were done, and then we add to them. So we gather data. The great scientist knows when to stop gathering data and then deduce a new concept – I don’t think anything really comes from first principles. So, in all of our life, we worry about whether new science is going to replace old science and whether old science is exact. Obviously, Newtonian science was based on a frictionless world, and friction has become very important in determining the future course of physics. And the move from Newtonian physics to quantum physics is another evolution that creates a whole different way of addressing a problem. So we’re stuck with the paradigm we’re in, and we have to deduce how to move forward from there.
Maymin: That’s nice. That’s actually inspirational. So the idea is that we can still be human and creative and generate new knowledge and not just rely on some kind of single algorithm to generate strategies, right?
Scholes: I think that nature wouldn’t allow that to be true, because we’d all be too bored. I am always very interested in the AI question of whether we’ll develop a machine that can actually figure out everything for us, so we could just sit in our garden in a Malthusian world, reading poetry and enjoying the fruits of the AI machine doing everything for us. I think that is a false hope. I can think of assisted AI – AI is very good because what you put into it comes out of it, and it might learn more than we do and really help us add value and aid our intuition. Intuition is a model, not an exact replication of reality, and we can increase the dimensions of that model and do much better.
When I started playing golf, I read 150 books on how to play golf and figured out every model of golf. But when I went to the golf course and started hitting the ball, I realized there was a lot of dimensionality in the problem because of the nonlinear swing of the golf club. As a result, I knew far less than I thought, even after reading all those books. History and science give us a way forward, but it’s also about figuring out a way to move forward, and there’s always meta-learning on top of the learning we had before.
Maymin: Wonderful. Thank you, Myron.
Scholes: You’re welcome.