Hierarchical Risk Parity: Comparing various Portfolio Diversification Techniques

Lopez de Prado recently published a paper titled "Building Diversified Portfolios that Outperform Out-of-Sample" on SSRN; you can download it here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2708678. In it, he describes a new portfolio diversification technique called "Hierarchical Risk Parity" (HRP).

The main idea is to run hierarchical clustering on the covariance matrix of stock returns and then find a diversified weighting by distributing capital equally across the cluster hierarchy (so that many correlated strategies receive the same total allocation as a single uncorrelated one). This avoids inverting the covariance matrix, as required by classic Markowitz Mean-Variance optimization (for more detail, see this blog post: http://blog.quantopian.com/markowitz-portfolio-optimization-2/), which in turn should improve numerical stability. The author runs simulation experiments showing that while Mean-Variance leads to the lowest volatility in-sample, it leads to very high volatility out-of-sample. The newly proposed HRP does best out-of-sample. For more details on HRP, see the original publication.
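
For intuition, here is a minimal sketch of that recipe (my own compact paraphrase, not LdP's or the notebook's exact code), assuming returns is a DataFrame of daily asset returns:

    import numpy as np
    import pandas as pd
    import scipy.cluster.hierarchy as sch
    from scipy.spatial.distance import squareform

    def hrp_weights_sketch(returns):
        """Compact HRP sketch: correlation-distance clustering, quasi-diagonal
        ordering, then recursive bisection with inverse-variance splits."""
        cov, corr = returns.cov(), returns.corr()

        # 1) Cluster on the correlation-based distance d = sqrt((1 - rho) / 2)
        dist = np.sqrt(0.5 * (1.0 - corr))
        link = sch.linkage(squareform(dist.values, checks=False), 'single')
        order = [corr.index[i] for i in sch.leaves_list(link)]  # quasi-diagonal order

        def cluster_var(items):
            # Variance of a sub-cluster under inverse-variance weights
            sub = cov.loc[items, items]
            ivp = 1.0 / np.diag(sub)
            ivp /= ivp.sum()
            return ivp @ sub.values @ ivp

        # 2) Recursive bisection: split capital between the two halves of each
        #    cluster in inverse proportion to their variances
        weights = pd.Series(1.0, index=order)
        clusters = [order]
        while clusters:
            clusters = [half for c in clusters if len(c) > 1
                        for half in (c[:len(c) // 2], c[len(c) // 2:])]
            for left, right in zip(clusters[::2], clusters[1::2]):
                alpha = 1.0 - cluster_var(left) / (cluster_var(left) + cluster_var(right))
                weights[left] *= alpha
                weights[right] *= 1.0 - alpha
        return weights  # sums to 1 by construction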

While I like the approach of using simulations, it is of course also of interest to compare how these methods perform on actual stock-market data. Towards this goal I took 20 ETFs (the set was provided by Jochen Papenbrock) and compared various diversification methods in a walk-forward manner. Thus, the results presented below are all out-of-sample.

Specifically, we will be comparing:

  • Equal weighting
  • Inverse Variance weighting
  • Mean-Variance (classic Markowitz)
  • Minimum-Variance (Markowitz which only takes correlations into account, not mean returns)
  • Hierarchical Risk Parity (by Lopez de Prado)

The HRP code was directly adapted from the Python code provided by Lopez de Prado.

In addition to the above methods, I also add a "Robust" version of the last three weighting techniques. These use the original technique, but instead of computing the covariance matrix directly, we apply some regularization using scikit-learn and the Oracle Approximating Shrinkage estimator (http://scikit-learn.org/stable/modules/generated/sklearn.covariance.OAS.html). In essence, this technique shrinks very large values in the covariance matrix to make the estimation more robust.
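
For reference, a minimal sketch of what that shrinkage step looks like (a hypothetical helper, assuming returns is a DataFrame of daily returns with one column per asset; not necessarily the notebook's exact function):

    import pandas as pd
    from sklearn.covariance import OAS

    def cov_robust_sketch(returns):
        # Fit the Oracle Approximating Shrinkage estimator on the return
        # observations and wrap the shrunk covariance back into a labelled frame
        oas = OAS().fit(returns.values)
        return pd.DataFrame(oas.covariance_,
                            index=returns.columns, columns=returns.columns)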

There is a whole lot of code to compute the weightings at the beginning; you can just skip directly to the results at the bottom.


66 responses

Nice! Is it fast enough to use in an algo? I've had best results with sample bootstrapping of mean-variance (or whatever), 100 draws of 6 months of data from the last two years, or whatever.

Yes, should definitely be fast enough, there's really nothing fancy going on. I like the bootstrapping idea to add robustness.

Can you replace cvxopt with a library which is whitelisted in backtesting and not GPLed? Is there any equivalent functions in any of the whitelisted numerical libraries?

cvxopt should be whitelisted in backtesting. In any case, what the results show though is that you definitely don't want to use Markowitz. The Hierarchical Risk Parity only needs numpy.

A couple of thoughts. The universe of 20 ETFs has been chosen to be somewhat diversified:

symbols = [u'EEM', u'EWG', u'TIP', u'EWJ', u'EFA', u'IEF', u'EWQ',  
           u'EWU', u'XLB', u'XLE', u'XLF', u'LQD', u'XLK', u'XLU',  
           u'EPP', u'FXI', u'VGK', u'VPL', u'SPY', u'TLT', u'BND',  
           u'CSJ', u'DIA']  

This selection could put equal weighting at an advantage vs something which looks at correlations.

Second thought: why use 1/var? I realise it has some mathematical purity, but a more common approach is to weight with 1/sqrt(var). For me, this has some practical logic to it. If, broadly speaking, most asset classes have the same Sharpe ratio over the long run, then 1/sqrt(var) weighting normalises all assets so they have the same mean return over the long run.
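
For concreteness, the two schemes being compared look like this (hypothetical helpers, assuming a DataFrame of asset returns):

    def inverse_variance_weights(returns):
        w = 1.0 / returns.var()          # 1 / var, as used in the paper
        return w / w.sum()

    def inverse_vol_weights(returns):
        w = 1.0 / returns.std()          # 1 / sqrt(var), i.e. risk parity by vol
        return w / w.sum()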

Thanks for the comments, Dan.

Regarding selection of the universe: Looking at the correlation matrix there definitely seem to be some very correlated clusters in there. And the equal weighting does not perform favorably at all either. Your point could be made about the inverse variance weighting, however, which also disregards correlations. Your general point of the universe selection introducing bias is well taken though. Would be interesting to test how these results generalize to different portfolios.

Regarding 1/var: I used it because that's what LdP used in the paper. I don't know of a good theoretical reason to prefer one over the other, although your reasoning makes sense to me. Perhaps 1/var is more optimal in terms of Kelly betting, which is mean/var vs the Sharpe ratio of mean/std, but that's just a guess.

There's a good excursion on the various weighting systems in Andrew Ang's Asset Management book (chapter 5.3). He explains how the different schemes relate to each other, how parameter instability causes problems for the more complex schemes, and in particular an interesting counter-cyclical issue with risk parity. He uses just 4 asset classes: US and developed-market stocks, US Treasuries, US corporate bonds.

Anyway, he explains the easiest system is just to hold market-cap weights (Cambria has a fund for this), as no rebalancing is needed. Next simplest is 1/N, as you don't need to estimate any parameters, and it performs better than market cap for him. Next simplest is 1/var or 1/vol weighting, as vol is relatively stable and easy to estimate. Then, if you feel comfortable estimating correlations, you can try minimum-variance weighting (or this hierarchical clustering scheme). Finally, if you are able to estimate mean returns as well, you can go for full mean-variance optimisation.

I have to say I have come to appreciate simplicity more and more in recent years. And come to adopt systems which can be operated without problems in a spreadsheet! If only to demonstrate their simplicity and lack of fitting to data. So, Dan H, yes your post chimes with my thoughts.

One thing I've done for a contest entry is to size my portfolio of some fraction of the liquid S&P 500 universe using risk parity (1/std). But even better was to size the positions as 1/n and then size the entire portfolio to (1 / std of SPY). In other words, cut down the number of parameters to estimate from N to 1. This is more stable. Even though some high-beta stocks will be overweighted, the law of averages kicks in, and the aggregate effect is the same as individual risk-parity sizing.

It's something to consider for the Equal Weighting approach. You could still equal weight all those diverse ETFs, but size the entire portfolio as 1/std(SPY). This relies on cross-asset volatilities being correlated: the "contagion of volatility" effect.
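
A sketch of that sizing idea (hypothetical helper and vol-target constant, not code from the thread): equal-weight the positions, then scale the whole book by the inverse of SPY's trailing volatility, so only one risk parameter needs to be estimated:

    import numpy as np
    import pandas as pd

    def equal_weight_scaled_by_spy_vol(returns, spy_returns, target_vol=0.10):
        # Equal-weight each asset, then size the whole portfolio ~ 1 / std(SPY)
        spy_vol = spy_returns.std() * np.sqrt(252)   # annualised trailing SPY vol
        gross = target_vol / spy_vol                 # whole-book scale
        n = returns.shape[1]
        return pd.Series(gross / n, index=returns.columns)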

Thanks for the post! very interesting!

Thomas,
I ran your Notebook with a different set of assets (removed most bonds, added other assets to increase diversity). The results are attached and show that for this set of assets the best performing methods are Inverse-Variance (of the two high yielding methods) and Min-Variance (of the four mid-yield methods).

The two Hierarchical methods perform well, but fall well below Inverse-Variance in yield and Min-Variance in volatility.

The improvement in Inverse-Variance performance was expected because this method and others were overly invested in bonds due to the very low variance of bond funds. One could argue that I rigged my set to the advantage of these methods, but that was not my purpose.

I agree with Anthony G's appreciation of simplicity. Complexity must earn its way into algorithms as it costs time in development, in maintenance, and in interpreting the results.

[edit 7/10] I should have mentioned that I really liked your notebook and am sure it will be instructive as I make some of my own.

One of the frustrations of relative performance studies is that the performance of the methods is strongly affected by the asset set.
If you limit the assets to TLT plus the nine XL-series sector funds, then the Sharpe ratio of each method is much improved, except for Robust Mean-Variance, which drops about 40%.
Once again Equal Weighting has the highest return and Min-Variance has the best Sharpe ratio (~1.15), but the two Hierarchical methods compete well.
Robust Hierarchical approaches Inverse-Variance in return (~0.8 vs ~0.85) but achieves a better Sharpe ratio (~1.1 vs ~1.0).
Hierarchical closely tracks Min-Variance in return (~0.65 for both), but trails in Sharpe ratio (~0.95 vs ~1.15).

Adding RWR and the three commodities strongly degrades the returns of the above methods. Only Min-Variance preserves a similar Sharpe ratio (~1.05). This is probably due to the fact that Min-Variance heavily weights a small set of assets (3 assets account for >90% of portfolio weight), so adding volatile assets that will receive low weights is not a big factor.

Question to the Q community: How can one fairly select asset sets for use with a particular method while avoiding a bias toward prior results?
I've perused several backtesting papers that provide a compelling case that this is a big problem in asset weighting and/or rotation schemes. I've not found one that provides an accessible solution.

As I mentioned elsewhere, I have recently found great comfort in testing a few momentum / tactical allocation strategies going back to 1900 and beyond on freely available monthly data. You can also get monthly data going back to 1972 on 10 or more MSCI indices. The BoE, FRED and the NBER have all this stuff, including ancient commodity prices.

If my simple techniques work on over 100 years of data including the Great Depression that is good enough for me to trade.

Switching on a 12-month momentum look-back between Robert Shiller's dividend-adjusted S&P 500 and the 10-year AAA corporate bond since 1900 created a CAGR of 8.9% and a max DD (in 1929) of 35%. Annualised monthly vol was just under 10%.

Cut that back to a maximum allocation of 50% to equities at any one time and the risk adjusted return improves. CAGR 7.48%, max DD 17%, vol 5.62%.

Hard to beat those sort of figures. Makes me wonder, it really does. What in god's name are active fund managers paid for?

No need to go too exotic especially since stock markets are so highly correlated.

Over that sort of time scale the problem takes on a pleasingly robust solution.

Diversify over equities and bonds in major markets and live with the uncertainty? 😃

@peter

You could start with something simple like stocks, bonds and gold (SPY, TLT and GLD). Equal weighting should do OK, and you can see how the other methods compare. If a method does something that looks very unstable or has a very low Sharpe relative to equal weighting, then you can probably just chuck that method out. If it can't cope with such a simple problem, how's it going to cope with a more complex one?

You can then add more variants on stocks. For example Nasdaq and mid caps. Equal weighting will then start to overweight equity and the sharpe should fall. Which methods correctly determine to cluster or otherwise allocate a smaller amount to each flavour of stocks?

Thanks for the reply. Diversification with some modest weight tuning does seem to work most reliably in my testing. I suppose that the young engineer in me hopes that better optimization is possible while the old engineer in me sees that poor signal/noise, observability and controllability are all acting to make this hard for portfolios.

I picked the assets that I would choose for a diversified static portfolio, in weights that I would choose for a static portfolio, informed by past long term correlations, then layered alpha signals and risk parity on top of those static assets and weights.

To add, this gives you a natural benchmark as well, that static portfolio.

Simon, baselining is another topic that could use a good group chat. I believe that we are currently limited to a single equity to serve as the comparative baseline. It would be interesting if we could define a separate baseline algo or could select from a set of simple predefined models.

Per your approach, one such static portfolio would be the Equal Weight included in this study. That portfolio annoyingly had higher yields than the other methods over the 8 asset sets that I tried.

Dan H,

You could start with something simple like stocks, bonds and gold (SPY, TLT and GLD).

I would start with an even simpler one - plain vanilla: stocks and bonds (SPY-TLT, or my favorite XLP-TLT) and add more one by one, only if it improves my goal function.

https://www.quantopian.com/posts/help-me-i-dont-know-what-to-do#56aeb593e3f99b3aa900034a
https://www.quantopian.com/posts/etf-rebalance-monthly-based-on-momentum#56aff77af047b2048a000139

Plain vanilla fixed-ratio stock-bond (XLP 0.55 : TLT 0.45) portfolio, re-balanced monthly
Start: 08/01/2002
End: 07/08/2016

Total Returns: 200.99%
Benchmark Returns: 202.1%
Alpha: 0.11
Beta: 0.18
Sharpe: 1.63
Sortino: 2.35
Information Ratio: -0.01
Volatility: 0.08
Max Drawdown: 15.8%

It may be a natural benchmark as well.

"Per your approach one such static portfolio would be the Equal Weight included in this study. That portfolio annoyingly had higher yields that the other methods over the 8 asset sets that I tried. "

Peter, this is something highly intelligent people will not accept, and they will keep doing rocket-science stuff to "optimize" their portfolios, when in the end countless hours will have been wasted on coding/testing, and whoever proclaims the above will be treated as a heretic. It is the ultimate "quant paradox".

Are the strategies described and discussed in this thread all long only, or are there any that are actually suitable for the Q contest?

@BehaviourialTrader

I totally agree on this point. It's easy to get sucked in and chase returns. The only one I think works well enough to justify the additional complexity is risk parity, which I define as 1/vol. In my view, it's sound, as it puts all asset classes on a level playing field, no matter what their internal leverage (e.g. stocks and commodities are levered whereas physical gold is not).

It still leaves a decision on which assets to use, and whether to treat them all equally. For example, most people would put US equity and Developed Market Equity in the same bucket, but what about Emerging Equity? Or what about all the various commodity markets? If I stuck to 1/n and included all the commodity markets available, I might end up with a portfolio with a lot of exposure to commodities, which I may not want. I think at this point, the easy thing to do is revert to a manual system. For example, equities share one unit of risk, government bonds share one unit, corporate bonds and property one unit, commodities and index linked bonds one unit. Perhaps this is where correlation analysis comes in, but on a longer term basis to set fixed weights, rather than a dynamic allocation method.
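
For concreteness, a sketch of that manual bucketing (the grouping below is hypothetical, purely to illustrate the mechanics): give each bucket one unit of risk, then split it equally within the bucket:

    # Hypothetical grouping, for illustration only
    buckets = {
        'equities': ['SPY', 'EFA', 'EEM'],
        'government_bonds': ['IEF', 'TLT'],
        'credit_and_property': ['LQD', 'RWR'],
        'commodities_and_linkers': ['GLD', 'TIP'],
    }

    weights = {}
    for members in buckets.values():
        risk_share = 1.0 / len(buckets)                # one unit of risk per bucket
        for sym in members:
            weights[sym] = risk_share / len(members)   # equal split within the bucket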

As an aside, this gives an interesting justification of risk parity:

https://www.panagora.com/assets/PanAgora-Risk-Parity-Portfolios-Efficient-Portfolios-Through-True-Diversification.pdf

It uses "loss contribution" as a way to assess if a portfolio is tilted towards one asset.

As previously pointed out, the asset set strongly influences the performance of the methods. So I thought it might be worth citing this: Random portfolios: correlation clustering.

They compare performance of random equity portfolios built in four different ways: min-variance, k-means clustering, hierarchical clustering and pure random.

Dan H, I agree that 1/vol might produce smoother returns. But, if you rebalance a 60/40 stock bond portfolio, you NEED volatility to make money because you need to buy low and sell high. The paper ignores rebalancing, which is a key ingredient in maintaining a 60/40 to begin with. Right?

Luca, nice try on the blogger's part - good research; however, random selection is only marginally outperformed. I do not think that people should spend time (to code/test/research) or money (for hedge fund advice) to obtain an additional 1% annual return while paying 2/20 for it - it makes no sense, right?

Actually, don't think of 1/vol as reducing volatility, as you can set this at any level you like. In fact, you're likely to increase the vol of bonds to get them to parity with stocks.

The return from risk parity comes from the rebalancing. By normalising the volatility you create more opportunities for rebalancing to buy at "value". Without levering bonds to increase their volatility, you'll find the rebalance goes mostly just one way (bonds->stocks). With risk parity, you'll get the opposite as well (stocks->bonds).

Actually, you can "set it at any level you like" only by adding leverage to the less volatile assets, the way the author has done in that paper. Note that he is not rebalancing, which is a major drawback in the approach, that paper, and its real life applications. Just my 2 cents.

I've noted a lot of almost ideological debate about leverage, especially for bonds. Personally, I come out strongly in favour of using leverage, if managed properly. For me, there's no difference between investing in listed companies that lever their equity 2x or levering an investment in bonds 2x.

Indeed, I put my money where my mouth is. My pension is in a risk parity fund that levers bonds.

Black Swans and leverage make a potent and interesting combination.

I can't see anything ideological about it. Leverage the wrong instrument at the wrong time and you are toast. As untold entities have found to their cost.

The ideological part comes in when it turns from an argument of degree (how much leverage is too much) to a black-and-white argument (leverage is bad). At some point you have to accept risk -- black swans can exist in every asset class or situation, hence the reward. I don't think 2x leverage on bonds is too much risk, if it's well diversified and part of a wider portfolio. Same as it's OK that companies lever their equity with bank debt, as long as you (the equity investor) diversify across companies and sectors.

Do you have any examples of any infamous blow ups with someone using 2x leverage? I often hear LTCM given as an example, but that is clearly a total different kettle of fish. They were something like 30x levered when they blew up.

I have great respect for Asness, but he, like all other hedge fund managers, is trying to sell the sizzle. I agree with most of what he says, except what Anthony said, which is... that Greek bond fund you just levered X-times should be returning 30% per year, until it is not, and you lose your principal. And oh, by the way, that default just affected stuff you never thought it would, like your Irish bonds and some French bonds as well. And next week the contagion will spread to South America, and then you will have lost most of your investment.

If I were Quantopian I would prevent leverage from being used in testing, contests, and qualifications for the hedge fund. They can apply leverage to the overall offering if they choose to - you don't need rocket scientists to add leverage.

Very interesting!

Dumb questions to the experts (I'm a newbie here) -

1) How can I change the rebalancing frequency? I see it's "1BM" now, standing for every month; what if I want to see quarterly/semi-annually/annually - what code shall I change?

2) If I want to add boundary constraints to the optimizations, say max 20% in each security, how can I do that?

If someone could help write an example code, that will be super appreciated. Many thanks!!

Great notebook Thomas. Thanks for the insightful work.

My only two cents is that in the case of portfolios containing a high number of assets (for example 100 long/100 short), the idiosyncratic risk of a single position entering a volatile period after having been very quiet in the past is very high. So it seems that in these cases, at least, the risk of over-weighting one asset vs. another can or should be avoided regardless of the past data. It comes down to how much randomness we think exists in any single asset. In the case of diversified ETFs I think these approaches become more relevant, as the underlying is already somewhat diversified. Would be interesting to see how these perform in larger portfolios, such as the S&P 500 list of stocks for example.

I personally have been struggling to see if any portfolio optimization can add value vs. risk of increased volatility in various market conditions -- compared to simple equal weighting. We know correlation is not necessarily causation and this does apply to measuring historical variances as well.

Echoing others on this thread - thanks Thomas for an exceedingly useful notebook! I saw LDP present his paper at Quantcon 2017 and was really impressed by the approach (which led me to this posting a bit late).

I have the same question (2) as Rainie Pan. How would you approach adding constraints to the weightings? A main benefit purported by something like HRP is better out-of-sample performance by avoiding the crazy "optimal" values returned by Markowitz.

Some of the weightings implied by using this method would be anything but practical to implement.

Thanks again, and thanks for any added suggestions on how to constrain the weights.

Thomas,

I think I found a bug in your code.

def corr_robust(X):  
    cov = cov_robust(X).values  
    shrunk_corr = cov2cor(cov)  
    return pd.DataFrame(shrunk_corr, index=X.columns, columns=X.columns)  

In the above method, X should be a covariance matrix, not a correlation matrix.

But later in your code, you have this:

 ('Robust Hierarchical weighting (by LdP)', lambda returns, cov, corr: getHRP(cov_robust(cov), corr_robust(corr))),  

Seems like it should be:

 ('Robust Hierarchical weighting (by LdP)', lambda returns, cov, corr: getHRP(cov_robust(cov), corr_robust(cov))),  

@Thomas,

Thank you for sharing this nice notebook. For the robust covariance matrix - OAS assumes that data is Gaussian. However, we know that returns are far from Gaussian in practice in pretty much every asset class. I wonder if sklearn.covariance.LedoitWolf could be a better option in practice?

Thanks.

@Will Z: Good point, I did try LW before and I think it works just as well.

@Jonathan Ng: Good catch, many thanks. Want to post a fixed version?

A noob question here. I am yet to read the paper and go through the implementation, but is there a way I could use HRP while maximizing an alpha signal? I don't see a way to set alphas in this method:

def getHRP(cov, corr):  
    # Construct a hierarchical portfolio  
    corr, cov = pd.DataFrame(corr), pd.DataFrame(cov)  
    dist = correlDist(corr)  
    link = sch.linkage(dist, 'single')  
    sortIx = getQuasiDiag(link)  
    sortIx = corr.index[sortIx].tolist()  
    # recover labels  
    hrp = getRecBipart(cov, sortIx)

    return hrp.sort_index()  

@Thomas I think there may be another bug in your code: when you have

    covs.loc[eom] = rets.loc[eom-pd.Timedelta('252d'):eom].cov()  
    corrs.loc[eom] = rets.loc[eom-pd.Timedelta('252d'):eom].corr()  

you are not really finding the covariance/correlation matrices for the trailing year... rets has a frequency of business days, so '252D' gets you something less than a year. What you really want is either '365D' or '252B'.

For what it's worth, there may be a succinct way to get the rolling covariance and correlation matrices, as you suspect. If you read closely though, it's quite wasteful, but such is the price of one-liners...

    covs = rets.rolling(252).cov().resample('1BM').apply(lambda window: window[-1])[12:]  
    corrs = rets.rolling(252).corr().resample('1BM').apply(lambda window: window[-1])[12:]  
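
For what the '252B' suggestion could look like in practice, here is a hedged sketch using pandas' business-day offset (toy data standing in for the notebook's rets and eom):

    import numpy as np
    import pandas as pd
    from pandas.tseries.offsets import BDay

    # Toy daily-return frame standing in for rets from the notebook
    idx = pd.bdate_range('2014-01-01', '2016-01-01')
    rets = pd.DataFrame(np.random.randn(len(idx), 3) * 0.01,
                        index=idx, columns=['A', 'B', 'C'])
    eom = idx[-1]

    # Trailing 252 *business* days rather than 252 calendar days
    window = rets.loc[eom - BDay(252):eom]
    cov_slice, corr_slice = window.cov(), window.corr()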

How can weights for short positions be calculated correctly? Would the following be valid for a portfolio holding long/short positions?

  1. for short positions - invert the price history (i.e. a price series of $20, $22, $21 would translate to $20, $18, $19)
  2. for long positions use the price history as is

Tom: I think that's correct. Alternatively, you could just flip the signs in the correlation matrix of the row and column of the stock you want to short.

@Tom, let me try to put it delicately. Your first option ends up being total nonsense.

What would be your reverse price, for instance, if the original price series went up to 60 or higher?

@Guy: It's not nonsense, and please pay attention to a respectful tone. Essentially what he means is to invert the returns.

@Thomas, there was no disrespect intended. I simply had no better word. A simple exercise would have demonstrated that. So, here it goes.

The price series below was randomly generated. As such, there is no way to know where it is going next. I did not even extend the price series to see it fall close to zero. Nonetheless, the return inversion is far from smooth no matter how we would like to describe it.

The doubling in price renders the inverted price series negative, and you have no way to guarantee its return-inversion behavior at this crossing. Whether you are going up or down, you are in for a shock. And the random price series selected, even though compressed to show the impact, was, could I say, subdued, almost tame. It does get worse the closer the price series oscillates near its doubling price.

So, I maintain my previous conclusion: @Tom's option 1 is not a good idea. It will simply blow up and render whatever trading strategy he may design on this option not only unpredictable but also totally worthless.

If @Tom, or you, have a different twist to this, be happy to hear it.

Yeah, it needs to happen in returns space.
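
A minimal sketch of that returns-space approach (hypothetical helper): flip the sign of the return series for the assets you intend to short before estimating the covariance/correlation fed to the weighting scheme:

    def flip_short_returns(returns, short_symbols):
        # Return a copy with the sign flipped for the short legs, so downstream
        # cov/corr estimation sees the short position's return stream
        flipped = returns.copy()
        flipped[short_symbols] *= -1.0
        return flipped

    # e.g. rets_ls = flip_short_returns(rets, ['TLT']); then rets_ls.cov(), rets_ls.corr()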

Anyone managed to put any extra allocation constraints in the algorithm, e.g. a maximum allocation to a single asset of 10%?
I opened a stack exchange question regarding this too.

@Thomas Wiecki: I see the point in using a 'shrunk' version of the covariance matrix (since the allocations are based on inverse variance, any extreme value can make your results explode), but what's the point in shrinking the correlation matrix too? If two of your assets are highly correlated it's better that they share the relevant allocation, no?

Hey guys, amazing job! Admire your work!

Did you base your analysis on price returns or price levels? You should use price returns, right?

That's interesting. In my opinion the original paper has several drawbacks which make the results questionable. First, IVP totally ignores correlations, which is not the case for HRP, so HRP has more degrees of freedom to optimize. Second, MVP is definitely not designed to optimize on assets with jumps, so using the wrong model yields the wrong results. If you are interested I can show you how to beat HRP with the correct model.

@james rafco,

I would be interested to see your corrected model. Please attach code, if possible.

To be clear, I meant a correct benchmark to use in order to make his simulations fair. There is no way to show that one portfolio optimization will beat another one in all market conditions.

I'd certainly be interested to see that James. This is on my to-do list: currently struggling with Black-Litterman, which does not always provide a solution (covariance matrix has to be invertible I believe, which is not always the case). I would like to use something better for the (relatively) simple problem of sector rotation when one has an estimate of future returns, if not future sector covariances ...

You can deal with the covariance invertibility problem just by shrinking the diagonal a bit. But if the matrix of returns is invertible (p assets > n observations) and you are using a diagonal matrix for uncertainty then you should be fine. In the end, BL is just a mean-variance optimization with shrinkage on the expected returns and the covariance matrix.

Hi all,
thanks as usual for your extremely helpful notebooks. There is something I don't understand. I think the X passed into cov_robust(X) and corr_robust(X) should be returns data, not cov/corr matrices, since OAS.fit(X) expects data of size (n_samples, n_features) and returns self, from which oas.covariance_ is the (n_features, n_features)-shaped covariance matrix.

Am I wrong?
Thank you.

Hi Gabriele,

Good catch! I fixed the NB above. The results seem fairly consistent with before, which is a bit surprising, as I don't see how it could really have worked before.

@Thomas Wiecki: This notebook has an off-by-one error in the results calculation, skewing the out-of-sample conclusions. Please shift(1) the weights forward before you ffill(); currently you are giving the EOM day the weight allocation calculated at the end of that day. As in:

Current:

port_returns[name] = w.loc[rets.index].ffill().multiply(rets).sum(axis='columns')  

Correct:

port_returns[name] = w.loc[rets.index].shift(1).ffill().multiply(rets).sum(axis='columns')

(or fix the core of the notebook, of course)

The risk is that if someone runs your notebook and changes the resolution of the "eoms" to be shorter e.g. daily or weekly, it rapidly leads to positively biased results and invalidates the conclusions you have made.

@Rob: Good catch, should be fixed in the NB in the first post. Thanks!

@Thomas Wiecki,
getHRP(cov, corr) returns a sorted Series, but get_min_variance(returns, cov), get_mean_variance(returns, cov), getIVP(cov) and np.ones(cov.shape[0]) / len(cov.columns) return unsorted arrays.

Thanks for sharing this.

Can anyone share the stats for a full market cycle? IMO these are more suitable for comparison, as they include a bear market, and with different asset classes. In a bull market diversification across stocks makes sense, but in a bear market diversification between asset classes makes more sense.

Regarding the simple vs advanced techniques: If you are leveraged, Sortino makes more sense than total returns.

@mattory: Want to post a fixed version?

@Thomas Wiecki,
getHRP(cov, corr) and getHRP1(cov, corr) return sorted Series, while getHRP2(cov, corr) returns an unsorted array. port_returns shows that the result is the same; getHRP2(cov, corr) may be more robust.

@Thomas Wiecki,

Is it possible to do the same code without Panel()? Thanks.

    eoms = rets.resample('1BM').index[13:-1]
    covs = pd.Panel(items=eoms, minor_axis=rets.columns, major_axis=rets.columns)
    corrs = pd.Panel(items=eoms, minor_axis=rets.columns, major_axis=rets.columns)
    covs_robust = pd.Panel(items=eoms, minor_axis=rets.columns, major_axis=rets.columns)
    corrs_robust = pd.Panel(items=eoms, minor_axis=rets.columns, major_axis=rets.columns)

    for eom in eoms:
        rets_slice = rets.loc[eom-pd.Timedelta('252d'):eom]
        covs.loc[eom] = rets_slice.cov()
        corrs.loc[eom] = rets_slice.corr()
        covs_robust.loc[eom] = cov_robust(rets_slice)
        corrs_robust.loc[eom] = corr_robust(rets_slice)
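
One way to do the same thing without pd.Panel (a sketch, assuming rets, cov_robust and corr_robust as defined in the notebook) is to use plain dicts of DataFrames keyed by month-end:

    import pandas as pd

    eoms = rets.resample('1BM').mean().index[13:-1]   # same month-end dates as before

    covs, corrs = {}, {}
    covs_robust, corrs_robust = {}, {}
    for eom in eoms:
        rets_slice = rets.loc[eom - pd.Timedelta('252d'):eom]
        covs[eom] = rets_slice.cov()
        corrs[eom] = rets_slice.corr()
        covs_robust[eom] = cov_robust(rets_slice)
        corrs_robust[eom] = corr_robust(rets_slice)

    # e.g. covs[eoms[0]] is the labelled covariance DataFrame for the first month-end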

Anyone noticed there are a number of bugs in his code? I had to copy-paste, so maybe there are issues from that, but I don't think so. If anyone has pulled this into a GitHub repo I can update as I go.

@david: In this or the original code? Anyway, it's not on github currently but would be great if you wanted to put it there and post a link so that others can use the most recent version with your fixes.

@thomas In case it is useful, I have put it here.

https://github.com/cottrell/hrp

Anyone try the de-noising technique LdP describes here:

https://www.youtube.com/watch?v=EufRYULIkvA

Hi guys! @thomas: thanks for the notebook, very interesting for understanding how HRP works. However, I am very surprised by the erratic behavior of your standard MVO. How come it does not allow proper diversification at all, whereas Min-Variance performs correctly?