Garbage in, garbage out
Consider two different quantitative investment strategies: stat arb and multi-factor quant.
A stat arb strategy will primarily utilise technical data to determine what trades to perform, and because this data is market-derived it is always up to date. Care needs to be taken when there are large share price moves on heavy volume, as the price action may be driven by events such as corporate actions, which may require manual intervention. However, this is relatively straightforward.
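To make this concrete, the sketch below shows one way such moves could be flagged for manual review. It is a minimal illustration only: the thresholds, data layout and function name are assumptions, not a description of any production system.

```python
import pandas as pd

def flag_unusual_moves(prices: pd.DataFrame, volumes: pd.DataFrame,
                       return_threshold: float = 0.05,
                       volume_multiple: float = 3.0,
                       lookback: int = 20) -> pd.DataFrame:
    """Flag stocks whose latest daily move is large and accompanied by
    unusually high volume, so a human can check for corporate actions.

    prices, volumes: daily data indexed by date, one column per ticker.
    The thresholds are illustrative placeholders, not calibrated values.
    """
    daily_returns = prices.pct_change()
    # Baseline volume: rolling average up to (and excluding) the latest day.
    baseline_volume = volumes.rolling(lookback).mean().shift(1)

    latest_return = daily_returns.iloc[-1]
    latest_volume = volumes.iloc[-1]
    latest_baseline = baseline_volume.iloc[-1]

    flagged = (latest_return.abs() > return_threshold) & \
              (latest_volume > volume_multiple * latest_baseline)

    return pd.DataFrame({
        "daily_return": latest_return[flagged],
        "volume_ratio": (latest_volume / latest_baseline)[flagged],
    })
```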
A multi-factor quant strategy will utilise a broader range of data, including reported financials and analyst data. There is a time lag between when this data changes and when the revised data feeds into the quant factors incorporated into the investment process.
To illustrate: if a company announces an event which materially impacts earnings, the share price reaction will be immediate, but there is a time lag before analysts revise their models and the revised data reaches data vendors. The interim represents a potential garbage in, garbage out scenario: the stock’s valuation will change, but there won’t be any change to analyst revision factors. Also, momentum factors based on return auto-correlation typically exclude the most recent performance due to the potential for short-term mean-reversion.
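As an illustration of the second point, a common generic construction of price momentum skips the most recent month entirely, so even a sharp move over the last few weeks leaves the signal unchanged. The sketch below is a textbook 12-minus-1-month momentum calculation, not necessarily the exact definition used in any particular process:

```python
import pandas as pd

def momentum_12_1(prices: pd.DataFrame, days_per_month: int = 21) -> pd.Series:
    """Generic 12-month price momentum that skips the most recent month.

    prices: daily close prices indexed by date, one column per ticker.
    Because the latest month is excluded, a very sharp move over the last
    few weeks does not change the signal at all.
    """
    twelve_months_ago = prices.iloc[-12 * days_per_month]
    one_month_ago = prices.iloc[-1 * days_per_month]
    return one_month_ago / twelve_months_ago - 1.0
```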
Hence, unless corrective action is taken, a systematic multi-factor quant investment process is likely to buy following a negative event (and vice versa), regardless of the circumstances. Not only is this counter-intuitive, it’s likely to detract from performance for many stock events. Following a profit warning, for example, stocks tend to continue underperforming.
Dealing with this scenario effectively requires manual oversight. Trades identified by the systematic component of the investment process may need to be reduced, cancelled or, in some cases, reversed.
Before executing trades identified by our quantitative stock selection and portfolio rebalancing process, we review the alpha drivers. We understand the potential limitations resulting from data lags and the impact different events, such as profit warnings, are likely to have on subsequent stock returns. We believe this provides an important safeguard against trading based on spurious data.
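Purely as an illustration of what such a safeguard might look like in code, the sketch below splits a proposed trade list into trades that can proceed and trades that need discretionary review because the stock has a recently flagged event. The column name, the flagging mechanism and the function itself are assumptions for illustration, not a description of our actual workflow.

```python
import pandas as pd

def split_for_review(proposed_trades: pd.DataFrame,
                     flagged_tickers: set) -> tuple:
    """Separate a proposed trade list into trades that can proceed and
    trades needing discretionary review because the stock has a recent
    event (e.g. a profit warning) the factor data may not yet reflect.

    proposed_trades: DataFrame with at least a 'ticker' column.
    flagged_tickers: tickers flagged by an analyst or an event feed.
    Both the column name and the flagging mechanism are assumptions.
    """
    needs_review = proposed_trades["ticker"].isin(flagged_tickers)
    return proposed_trades[~needs_review], proposed_trades[needs_review]
```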
Differentiating between revisions with high and low signal content
As with all quant factors, it’s important to understand why revisions factors work. It’s not because the market is slow to react to analyst revisions; rather, it’s because revisions trend over time. And it’s the trending in revisions which leads to trending in returns.
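For readers who want something concrete, one generic way to construct a revisions factor is the breadth of recent estimate changes: net upgrades as a fraction of the analysts covering the stock. The sketch below is a simplified, assumed construction (the column names, long-format input and 90-day window are all illustrative), not a description of any vendor's factor:

```python
import pandas as pd

def revision_breadth(estimates: pd.DataFrame, window_days: int = 90) -> pd.Series:
    """A generic earnings-revision factor: net upgrades over a window.

    estimates: long-format DataFrame with columns
        ['date', 'ticker', 'analyst', 'eps_estimate'], where 'date'
        is a datetime column.
    Returns, per ticker, (upgrades - downgrades) / number of analysts.
    Column names and the 90-day window are illustrative assumptions.
    """
    cutoff = estimates["date"].max() - pd.Timedelta(days=window_days)
    recent = estimates[estimates["date"] >= cutoff]

    def per_ticker(group: pd.DataFrame) -> float:
        # Each analyst's net change in EPS estimate over the window.
        changes = (group.sort_values("date")
                        .groupby("analyst")["eps_estimate"]
                        .agg(lambda s: s.iloc[-1] - s.iloc[0]))
        net_upgrades = (changes > 0).sum() - (changes < 0).sum()
        return net_upgrades / max(len(changes), 1)

    return recent.groupby("ticker").apply(per_ticker)
```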
The reasons why analyst revisions tend to trend over time are discussed in my quant factor investment book:
“Analysts are aware of where their forecasts are positioned relative to their peers and they’re reluctant to stray too far away from their peer group. Let’s assume the consensus EPS forecast is $1.00 and the highest analyst forecast is $1.10. If, based on new information, an analyst thinks the company’s EPS may be $1.20, rather than revising her forecast to this level, she is more likely to move to just above the highest forecast, say $1.12. If she is wrong, the reputation risks associated with doing this are less severe than they would be for an outlier forecast. And if she is right, she still has the highest forecast. Other analysts covering the same company tend to follow the same herding instincts and this often leads to trending in EPS forecast changes. And given the market follows EPS forecasts closely, this can lead to trending in stock returns.”
Trending in stock returns is more likely to occur when revisions are based on company-specific data (eg higher sales driven by new markets), rather than macro data (eg a change in interest rate assumptions). Manually reviewing sell-side research can help in making this determination, and this is part of our discretionary stock analysis for companies that have experienced significant revisions.
Identifying value traps
Value factors have strong intuitive appeal: if two companies have the same share price but one generates stronger profits, prima facie that company looks the more attractive investment opportunity. However, this is overly simplistic: there are myriad reasons why companies trade on different earnings multiples. Some of these reasons can be quantified and incorporated into a robust systematic stock selection process. For example, our value alpha screens combine sophisticated valuation factors with growth, certainty, and profitability factors to provide a robust framework for differentiating between cheap and expensive stocks.
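As a simplified illustration of combining valuation with growth and profitability, the sketch below blends cross-sectional z-scores into a single composite. The factor names and weights are placeholders for illustration, not the actual inputs or weights used in our screens.

```python
import pandas as pd

def composite_value_score(factors: pd.DataFrame) -> pd.Series:
    """Blend cross-sectional z-scores of several factors into one score.

    factors: DataFrame indexed by ticker with columns such as
    'earnings_yield', 'sales_growth' and 'roe' (placeholder names;
    real screens would use different factors and weights).
    Higher scores favour stocks that look cheap *and* score well on
    growth and profitability, rather than cheap alone.
    """
    zscores = (factors - factors.mean()) / factors.std()
    weights = {"earnings_yield": 0.5, "sales_growth": 0.25, "roe": 0.25}
    return sum(zscores[col] * weight for col, weight in weights.items())
```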
However, not all the pertinent issues can be quantified. Examples include fraud, regulatory risks, pending litigation, and patent risks. Our review of deep value stocks seeks to address these issues so that we can make an informed assessment as to whether our value factors and value-based screens are identifying appropriate stocks given the mispricing opportunity we’re seeking to exploit.