Frequently Asked Questions
What’s wrong with FactSet’s attributions and risk estimates and how can you prove it?
Brinson/Active Share fail to consider differences in individual securities’ market and sector betas; as a consequence, they fail to properly separate active from passive performance. The problem can be demonstrated by observing non-zero correlations between the security-selection and passive return components.
Security-selection return is defined as a residual (the portion of incremental return unexplained by the various passive market exposures), so by definition it should be uncorrelated with passive benchmarks or exposures. To the extent that the security-selection return and passive return calculated by a given system are found to be correlated, the system has failed to properly isolate the active contribution and will fail to detect skill and active risk.
A simple example is how Brinson attribution deals with a leveraged passive ETF. If SPY returns +10% and a 2x-leveraged SPY ETF returns +20%, the Brinson approach will attribute the extra +10% to security selection instead of to the passive market effect.
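For illustration, here is a minimal sketch of that failure mode with simulated numbers: a 2x-levered version of the market has no stock-picking skill at all, yet a beta-unaware attribution assigns it a large “selection” return that is perfectly correlated with the market, while a beta-aware attribution correctly finds essentially nothing.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical monthly benchmark (market) returns
market = rng.normal(0.008, 0.04, 120)

# A passive 2x-leveraged market portfolio: no stock-picking skill at all
portfolio = 2.0 * market

# Beta-unaware attribution: everything above the benchmark is "selection"
naive_selection = portfolio - market            # equals 1.0 * market

# Beta-aware attribution: regress the portfolio on the market, keep the residual
beta = np.cov(portfolio, market)[0, 1] / np.var(market, ddof=1)
beta_aware_selection = portfolio - beta * market  # essentially zero everywhere

print("estimated beta:", round(beta, 2))                        # 2.0
print("corr(naive 'selection', market):",
      round(np.corrcoef(naive_selection, market)[0, 1], 2))     # 1.0: leverage mistaken for skill
print("beta-aware selection, mean and std:",
      round(beta_aware_selection.mean(), 6),
      round(beta_aware_selection.std(), 6))                      # ~0, ~0: no skill detected
```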
Returns-Based Style analysis (RBS) estimates average exposure over time and fails for active portfolios whose exposures change through time. The problem with RBS can be demonstrated by comparing its current predictions with future performance.
In fact, risk and skill analytics are only valid to the extent they are predictive. Skilled managers detected by Brinson attributions should tend to outperform in subsequent years, and risks estimated by RBS should reasonably accurately predict future return.
In our tests, we’ve found neither of FactSet’s approaches to be predictive.
Why is your approach better and how can you prove it?
Robust estimates of point-in-time betas overcome limitations of Brinson/RBS and result in predictive attributions and risk exposures. This is demonstrated by the persistence of security selection skill as well as by the correlation of returns predicted by current exposures with future realized returns.
If RBS fails to capture changing exposures, why not just look at rolling regressions?
Regressions provide an estimate of average exposure over a period rather than exposure at a point in time. To the extent that exposures change significantly during the period (as with active portfolios), averages may be a poor approximation of the exposure at any given point in time. Both attributions and portfolio risk estimates require point-in-time exposures.
Rolling regressions would simply increase the amount of bad data. For example, consider a manager who ran a 1.5x market-beta portfolio last month but got worried this month and switched to a 0.5x market-beta portfolio. A rolling regression spanning the switch will produce garbage output in such cases.
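A rough sketch of that scenario with simulated daily returns (all numbers hypothetical): a regression over each month alone recovers the true beta for that month, but a rolling window spanning the switch reports a beta near 1.0 that describes neither regime.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical daily market returns for two consecutive months (21 days each)
market = rng.normal(0.0005, 0.01, 42)

# Manager runs 1.5x beta last month, de-risks to 0.5x beta this month
true_beta = np.r_[np.full(21, 1.5), np.full(21, 0.5)]
portfolio = true_beta * market + rng.normal(0, 0.001, 42)   # plus small idiosyncratic noise

def ols_beta(y, x):
    """Slope of y on x (single-factor OLS)."""
    x_c, y_c = x - x.mean(), y - y.mean()
    return (x_c @ y_c) / (x_c @ x_c)

print("beta, last month only :", round(ols_beta(portfolio[:21], market[:21]), 2))  # ~1.5
print("beta, this month only :", round(ols_beta(portfolio[21:], market[21:]), 2))  # ~0.5
print("beta, rolling 2-month :", round(ols_beta(portfolio, market), 2))            # roughly 1.0: neither regime
```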
Are your tests time period dependent? Why should we be confident in your data?
You don’t need to have any confidence in our data. By testing a replicating passive portfolio for a manager, you can see the predictive effectiveness of the models for yourself.
In general, you are right to be skeptical and should only trust risk models and performance analytics that you can test out-of-sample yourself, such as with the replicating portfolio tests that we advise.
Some analytics define point in time risk as the volatility of a portfolio given its holdings and their recent returns until that point in time. Is this estimate accurate?
This is a reasonable approximation of current risk, and by extension VaR; however, it does not identify any of the market factors that contribute to current risk, and knowledge of the underlying exposures is critical, for three main reasons:
- If you don’t know what the sources of risk are, you don’t know what can be done to make changes and mitigate any problems. For example, if one portfolio has a +10% statistical exposure to Emerging Markets and another has a -10% exposure, their tracking error and VaR will be the same, but the actual risks (and remedies) are the opposite of each other (see the sketch after this list). It’s not terribly helpful to know what current risk is if you don’t know what measures can be taken if that risk is too high or too low.
- Current risk, without knowledge of market exposures, cannot be used for stress testing over different market regimes or historical periods.
- Portfolios with equal risk as defined by recent history may have very different underlying exposures that coincidentally have had the same recent volatility, and those exposures may have completely different long-term risk profiles that would remain hidden with the more simplistic approach. Hidden exposures to market factors that have had uncharacteristically low recent volatility can lead to a serious underestimate of true current risk. Modeling tail risk based on standard deviations, without quantifying the sources of volatility, risks missing the forest for the trees.
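As a hypothetical stress-test sketch (the exposures and scenario returns below are illustrative, not model output), consider two portfolios that look identical on recent volatility and VaR but carry opposite Emerging Markets exposures:

```python
# Hypothetical factor exposures (betas) of two portfolios. Assume, for
# illustration, that both show the same recent volatility and VaR.
exposures_a = {"Market": 1.0, "Emerging Markets": +0.10}
exposures_b = {"Market": 1.0, "Emerging Markets": -0.10}

# Hypothetical stress scenario (illustrative factor returns, not a forecast)
scenario = {"Market": -0.20, "Emerging Markets": -0.40}

def stressed_return(exposures, scenario):
    """Portfolio return implied by its factor exposures under a factor-return scenario."""
    return sum(beta * scenario[factor] for factor, beta in exposures.items())

# Exposure-based stress testing distinguishes the two portfolios;
# a volatility-only risk number would not.
print(f"Portfolio A under stress: {stressed_return(exposures_a, scenario):+.1%}")  # -24.0%
print(f"Portfolio B under stress: {stressed_return(exposures_b, scenario):+.1%}")  # -16.0%
```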
Do you run your factor-oriented model on holdings, observed historical returns, or both?
Our factor models are built for individual stocks by analyzing observed historical returns: regressing each stock’s returns on the factor returns yields that stock’s factor exposures.
These individual stocks’ factor exposures are then aggregated for a portfolio using holdings data to estimate portfolio factor exposures over time.
In summary, the analysis uses both holdings and returns: returns of individual stocks to estimate the factor exposures of stocks and portfolio holdings of individual stocks to estimate the factor exposures of portfolios.
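A minimal sketch of these two steps with simulated data (the factor names, betas, and holdings weights below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)

# --- Step 1: estimate each stock's factor exposures from its return history ---
# Hypothetical monthly data: 36 months, two factors (e.g., Market and Sector)
n_months, factors = 36, ["Market", "Sector"]
factor_returns = rng.normal(0.005, 0.03, (n_months, len(factors)))

true_betas = np.array([[1.0, 2.0],    # stock A: Market 1.0, Sector 2.0
                       [0.8, 0.5]])   # stock B: Market 0.8, Sector 0.5
stock_returns = factor_returns @ true_betas.T + rng.normal(0, 0.01, (n_months, 2))

# Multivariate OLS of each stock's returns on the factor returns (with intercept)
X = np.column_stack([np.ones(n_months), factor_returns])
coef, *_ = np.linalg.lstsq(X, stock_returns, rcond=None)
stock_betas = coef[1:].T              # rows: stocks, columns: factors

# --- Step 2: aggregate stock betas into portfolio betas using holdings weights ---
weights = np.array([0.6, 0.4])        # hypothetical portfolio holdings
portfolio_betas = weights @ stock_betas

for name, beta in zip(factors, portfolio_betas):
    print(f"portfolio {name} beta: {beta:.2f}")   # ~0.92 Market, ~1.40 Sector given the inputs above
```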
Your white paper states that your approach has 0.96 median correlation between predicted ex-ante and reported ex-post portfolio returns. This, I guess, is total return, and not excess return. Also, I assume that you need to know the risk factor realization for the future, and it’s not a pure “prediction” right? Still, of course, very impressive correlation.
Yes, 0.96 is the median correlation between predicted ex-ante and reported ex-post total returns. The median correlation between predicted and actual excess returns is 0.66.
Another way to say this is that 0.96 is the correlation between the returns of the replicating passive factor portfolio constructed using the model and the subsequent actual portfolio returns. This replicating factor portfolio does not require any knowledge of future factor return realizations.
The paper shows Apple’s 2.3 sector beta… This surprises me: over what period, and relative to what sector index?
This is beta to the technology sector index, as of 12/30/17, and is based on returns over the previous three years, with a decay factor.
Note that we analyze sector exposure separately from the Market exposure. Your intuition that AAPL has a lower overall risk is correct — its market exposure is ~1. So AAPL has ~1 Market beta and ~2 Technology beta after controlling for Market risk. The ability to measure Market and Technology exposures of AAPL independently, and not assuming that they are equal, is a critical edge of our and other statistical factor models.
Our Technology factor is the cap-weighted index of all U.S. technology stocks. It is materially identical to the Russell 3000 Technology Index. In practice, the Technology Select Sector ETF (XLK) is also a good proxy. We can share a simplified model illustrating this relationship using AAPL, SPY, and XLK, if it would be helpful.
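As a minimal sketch of such a simplified model, using simulated stand-ins for the AAPL, SPY, and XLK return series (residualizing the sector proxy against the market is one plausible way to implement “after controlling for Market risk”; the production model is more involved):

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in monthly return series -- replace with actual AAPL, SPY, and XLK returns.
n = 36                                            # roughly three years of monthly data
spy = rng.normal(0.008, 0.035, n)                 # market proxy
sector_move = rng.normal(0.0, 0.02, n)            # tech-specific movement
xlk = 1.0 * spy + sector_move                     # tech proxy (carries market risk too)
aapl = 1.0 * spy + 2.0 * sector_move + rng.normal(0, 0.03, n)

def ols(y, X):
    """OLS coefficients of y on the columns of X (intercept added internally)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return coef[1:]

# Step 1: strip the market component out of the sector proxy, so Market and
# Technology exposures can be measured independently.
xlk_resid = xlk - ols(xlk, spy[:, None])[0] * spy

# Step 2: regress AAPL on the market and the market-neutral sector factor.
market_beta, tech_beta = ols(aapl, np.column_stack([spy, xlk_resid]))
print(f"AAPL market beta: {market_beta:.2f}")               # ~1.0 in this simulation
print(f"AAPL tech beta (net of market): {tech_beta:.2f}")   # ~2.0 in this simulation
```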
Given your example of Apple stock having a 2.2 beta to the sector, how persistent or how long can you reasonably forecast that stock keeping that beta?
The change in betas over time differs across companies. In the case of AAPL, the following is its Sector Exposure (Beta) over time:
The beta changes over several regimes, but remains stable for some time within each regime. It’s interesting to note the change in tech beta in ‘07 when the iPhone was introduced and transformed the company from an idiosyncratic niche player to the driver of the industry’s profits.
Equally important, we know that these estimates are unbiased predictors of the subsequently realized betas. So, even as the betas change over time, our estimate at any given point neither over- nor underestimates the Market and Sector betas.
What are the criteria for the 0.96 correlation? For example, is it binary in nature (either the actual return hit the exact predicted number or it didn’t)? Or is there some issue with p-hacking, such as large confidence intervals that make the actual more likely to fall within the predicted range?
This is a Pearson’s correlation (https://en.wikipedia.org/wiki/Pearson_correlation_coefficient) of predicted and actual returns. There is no funny business.
Put differently, our past factor exposures explain roughly 92% (0.96 squared) of the variance of subsequent monthly returns.
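For concreteness, here is a sketch of how such a statistic can be computed, using simulated funds and factor returns (all numbers below are hypothetical and are not meant to reproduce the 0.96 figure): for each fund, take the Pearson correlation between the replicating portfolio’s monthly returns and the fund’s actual monthly returns, then take the median across funds.

```python
import numpy as np

rng = np.random.default_rng(5)

n_funds, n_months, n_factors = 200, 36, 4

# Simulated stand-ins: each fund's factor exposures, realized factor (ETF)
# returns, and an idiosyncratic (stock-picking) component.
exposures = rng.normal(0.5, 0.5, (n_funds, n_factors))
factor_returns = rng.normal(0.005, 0.03, (n_months, n_factors))

predicted = factor_returns @ exposures.T                        # replicating-portfolio returns
actual = predicted + rng.normal(0, 0.01, (n_months, n_funds))   # plus idiosyncratic return

# Pearson correlation of predicted vs. actual monthly returns, fund by fund,
# then the median across funds (high here because the simulated idiosyncratic
# component is small).
per_fund_corr = [np.corrcoef(predicted[:, i], actual[:, i])[0, 1] for i in range(n_funds)]
print("median correlation across funds:", round(float(np.median(per_fund_corr)), 2))
```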
While it is always wise to be concerned with P-hacking, the above results have held up out-of-sample over several years, thousands of funds, for new institutions, and in new markets.
That said, the best way to prove effectiveness is to take a few sample portfolios, have us replicate them, and check for yourself how well the predictions hold up in the future and how they compare to the predictions of other analytics vendors and consultants. The only way to prove the effectiveness of a system, or to compare it to other systems and processes, is to benchmark them all out-of-sample and compare the accuracy of their predictions.
Can you explain how the passive ETF replicating portfolio is constructed? Is there a static allocation to a group of various ETFs over time, or do the allocations and types of ETFs used get rebalanced over time?
The particular ETFs used as factors in our risk models are constant (market, sector, style, and bond factors for the U.S. model), and all are available passively, which is key. The passive component of incremental return is based on the average exposure (beta) over time (ten years, assuming sufficient holdings data) to each factor. The timing component is the return due to variation in factor exposure, and security selection is the residual relative to the return calculated by the model. The difference between the model’s calculated return and the portfolio’s actual reported return is also shown as trading/unexplained.
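As a simplified, single-factor sketch of this decomposition with simulated returns (the betas and residuals are made up, and the sketch decomposes total rather than incremental return; the production model uses the full factor set):

```python
import numpy as np

rng = np.random.default_rng(6)

n_months = 120
factor = rng.normal(0.006, 0.04, n_months)                        # passive factor (ETF) returns
beta_t = 1.0 + 0.3 * np.sign(np.sin(np.arange(n_months) / 6))     # time-varying exposure
selection_t = rng.normal(0.002, 0.01, n_months)                   # stock-picking residual
model_return = beta_t * factor + selection_t
reported = model_return + rng.normal(0, 0.001, n_months)          # small trading/fee gap

avg_beta = beta_t.mean()
passive = avg_beta * factor                    # average exposure x factor return
timing = (beta_t - avg_beta) * factor          # return from varying the exposure over time
selection = model_return - beta_t * factor     # residual relative to the model
trading = reported - model_return              # trading / unexplained

# The four components sum exactly to the reported return.
for name, part in [("passive", passive), ("timing", timing),
                   ("selection", selection), ("trading/unexplained", trading)]:
    print(f"{name:>20}: {part.mean() * 12:+.2%} annualized")
print(f"{'total reported':>20}: {reported.mean() * 12:+.2%} annualized")
```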
Your question goes to the core of how active return from stock picking is defined versus active return from stock picking plus factor timing. You can do either:
1) If you construct a single replicating ETF portfolio and never rebalance it, then the performance of a fund relative to this portfolio would be due to both factor/market timing returns (returns due to variation in systematic risk) and alpha/residual/stock picking returns (idiosyncratic returns unattributable to systematic risk).
2) If you construct/rebalance replicating ETF portfolios periodically to capture variable systematic risk over time, then the performance of a fund relative to this portfolio would be due to alpha/residual/stock picking returns (idiosyncratic returns unattributable to systematic risk).
Over a short period, such as a few months (or a few years for low-turnover managers), factor timing returns are immaterial. The second approach, which we take, isolates stock picking from timing; stock picking also ends up being the larger and more persistent source of active returns for most managers.
A few weeks to a few months of tracking a portfolio against a static replicating ETF portfolio without any rebalancing should be sufficient in most cases to validate the predictive value of our models.