Lopez de Prado recently published a paper titled "Building Diversified Portfolios that Outperform Out-of-Sample" on SSRN; you can download it here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2708678. In it, he describes a new portfolio diversification technique called "Hierarchical Risk Parity" (HRP).
The main idea is to run hierarchical clustering on the covariance matrix of stock returns and then find a diversified weighting by distributing capital equally across the cluster hierarchy (so that many correlated strategies together receive the same total allocation as a single uncorrelated one). This avoids having to invert the covariance matrix, as classic Markowitz Mean-Variance optimization requires (for more detail, see this blog post: http://blog.quantopian.com/markowitz-portfolio-optimization-2/), which in turn should improve numerical stability. The author runs simulation experiments showing that while Mean-Variance leads to the lowest volatility in-sample, it leads to very high volatility out-of-sample; the newly proposed HRP does best out-of-sample. For more details on HRP, see the original publication.
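As a rough illustration of the clustering step only (the full algorithm in the paper also performs quasi-diagonalization and recursive bisection, which are omitted here; the `returns` array below is placeholder data):

```python
import numpy as np
import scipy.cluster.hierarchy as sch

# Placeholder returns matrix: 500 days x 20 assets
rng = np.random.default_rng(0)
returns = rng.normal(scale=0.01, size=(500, 20))

# Correlation matrix and Lopez de Prado's distance: d_ij = sqrt((1 - rho_ij) / 2)
corr = np.corrcoef(returns, rowvar=False)
dist = np.sqrt((1.0 - corr) / 2.0)

# Single-linkage hierarchical clustering on the condensed distance matrix
condensed = dist[np.triu_indices_from(dist, k=1)]
link = sch.linkage(condensed, method="single")
```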
While I like the approach of using simulations, it is of course also of interest to compare how these methods perform on actual stock-market data. Toward this goal, I took 20 ETFs (the set was provided by Jochen Papenbrock) and compared various diversification methods in a walk-forward manner. Thus, the results presented below are all out-of-sample.
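Schematically, the walk-forward evaluation follows the loop below; the window lengths and the `get_weights` callback are illustrative placeholders, not the exact parameters used further down:

```python
import numpy as np

def walk_forward(returns, get_weights, lookback=250, hold=21):
    """Fit weights on a trailing window, then apply them out-of-sample."""
    oos = []
    for t in range(lookback, len(returns) - hold + 1, hold):
        w = get_weights(returns[t - lookback:t])  # estimate weights in-sample
        oos.append(returns[t:t + hold] @ w)       # realize returns out-of-sample
    return np.concatenate(oos)
```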
Specifically, we will be comparing the following methods (a rough sketch of the simpler weightings follows the list):
- Equal weighting
- Inverse Variance weighting
- Mean-Variance (classic Markowitz)
- Minimum-Variance (a Markowitz variant that only takes the covariance structure into account, not mean returns)
- Hierarchical Risk Parity (by Lopez de Prado)
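For concreteness, here is a minimal sketch of the first few weighting schemes; minimum-variance is shown in its unconstrained closed form (weights proportional to the inverse covariance matrix times a vector of ones), which may differ from the constrained version used in the notebook:

```python
import numpy as np

def equal_weight(cov):
    """1/N weights, ignoring the covariance matrix entirely."""
    n = len(cov)
    return np.ones(n) / n

def inverse_variance(cov):
    """Weight each asset by 1 / sigma_i^2, then normalize."""
    iv = 1.0 / np.diag(cov)
    return iv / iv.sum()

def min_variance(cov):
    """Unconstrained minimum-variance: w proportional to inv(cov) @ ones."""
    w = np.linalg.solve(cov, np.ones(len(cov)))
    return w / w.sum()
```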
The HRP code was directly adapted from the Python code provided by Lopez de Prado.
In addition to the above methods, I also add a "Robust" version of the last three weighting techniques. These use the original technique, but instead of computing the covariance matrix directly, we regularize it using scikit-learn's Oracle Approximating Shrinkage estimator (http://scikit-learn.org/stable/modules/generated/sklearn.covariance.OAS.html). In essence, this technique shrinks extreme values in the covariance matrix towards a structured target, which makes the estimate more robust.
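Using the OAS estimator is only a few lines in scikit-learn; here is a sketch with placeholder returns data:

```python
import numpy as np
from sklearn.covariance import OAS

# Placeholder returns matrix: 500 days x 20 assets
rng = np.random.default_rng(0)
returns = rng.normal(scale=0.01, size=(500, 20))

oas = OAS().fit(returns)
cov_robust = oas.covariance_  # shrunk covariance estimate
print("shrinkage coefficient:", oas.shrinkage_)
```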
There is a whole lot of code for computing the weightings at the beginning; feel free to skip directly to the results at the bottom.
Disclaimer:
The material on this website is provided for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation or endorsement for any security or strategy, nor does it constitute an offer to provide investment advisory or other services by Quantopian.
In addition, the content of the website neither constitutes investment advice nor offers any opinion with respect to the suitability of any security or any specific investment. Quantopian makes no guarantees as to accuracy or completeness of the views expressed in the website. The views are subject to change, and may have become unreliable for various reasons, including changes in market conditions or economic circumstances.