@Grant, yes, your analysis is right on. Following your last strategy contribution, I extended the trading interval to 6 years with no change to the code, and watched the strategy's CAGR melt down to about 2.1%. That was not surprising, since: E[R(p)] = r_f + β∙(E[R(m)] − r_f) − β∙E[R(m)] = (1 − β)∙r_f ≈ r_f for β near zero. This implies that the outcome of this type of strategy should land close to the risk-free rate.
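To make the algebra above concrete, here is a minimal numerical sketch of that identity. The rates are illustrative assumptions (not measured from the strategy): as β shrinks toward zero, the hedged expected return collapses to the risk-free rate.

```python
# Sketch of E[R(p)] = r_f + beta*(E[R(m)] - r_f) - beta*E[R(m)] = (1 - beta)*r_f.
# r_f and E_Rm are assumed values for illustration only.
r_f = 0.02    # assumed risk-free rate
E_Rm = 0.08   # assumed expected market return

for beta in (0.0, 0.05, 0.10):
    E_Rp = r_f + beta * (E_Rm - r_f) - beta * E_Rm
    # Term-by-term, the beta*E[R(m)] pieces cancel, leaving (1 - beta)*r_f.
    print(f"beta={beta:.2f}  E[R(p)]={E_Rp:.4f}  (1-beta)*r_f={(1 - beta) * r_f:.4f}")
```

At β = 0 the two columns are exactly r_f; the hedge has removed the market term, and with it any expected excess return.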
The merit of the strategy is its low-volatility equity curve (almost flat), with correspondingly low drawdowns and near-zero beta. That should be considered a real plus; there are applications for that kind of behavior.
However, at a 2.1% CAGR, one should start asking whether it is worth all that work, even if a machine is doing it. In terms I like to measure, this strategy has a doubling time of 33.3 years! And that is assuming it does not deteriorate further as you continue to extend the trading interval.
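The doubling time follows directly from compounding at the CAGR; a quick sketch:

```python
import math

# Years needed to double capital compounding at a given CAGR:
# solve (1 + cagr)**t = 2  =>  t = ln(2) / ln(1 + cagr).
cagr = 0.021
doubling_years = math.log(2) / math.log(1 + cagr)
print(f"{doubling_years:.1f} years")  # roughly a third of a century
```

Compare that to a broad-market B&H at a historical ~10% CAGR, which doubles in about 7 years.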
Note that in its tear sheet, the Cumulative Return on Logarithmic Scale chart shows no real alpha either. As a matter of fact, it shows an increasingly negative alpha and will continue to deteriorate. This strategy simply underperforms its peers. That may be viewed as, say, non-constructive, but it is nonetheless designed to do so.
In my last post, I was generous in saying that about 30% of trades were the result of noise rebalancing. That number is much higher. Every day, all the positions are affected simply because one stock's price moved, and this has a cascading effect over the entire portfolio across the whole trading interval.
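The cascade is easy to see in a toy example (my own sketch, not the strategy's code): in a strictly rebalanced equal-weight book, a price move in a single stock changes every weight, so the next rebalance trades all n positions, not just the one that moved.

```python
import numpy as np

# Toy equal-weight portfolio: 10 stocks, all at $100, 10 shares each.
n = 10
prices = np.full(n, 100.0)
shares = np.full(n, 10.0)

prices[0] *= 1.01  # ONE stock moves 1%; nothing else changes

value = prices * shares
target_value = value.sum() / n          # equal-weight target per position
trades = target_value / prices - shares # share adjustment per position

n_traded = int(np.count_nonzero(np.abs(trades) > 1e-9))
print(n_traded)  # all 10 positions get traded
```

One 1% move triggers trades in all ten names. Scale that to hundreds of stocks moving every day and most of the turnover is just chasing noise.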
I do not see how anyone could leverage this script to 6x. It would not even cover its leveraging costs. One should look closer at what this strategy is really doing and why it is doing it.
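A back-of-the-envelope check, using the usual net-of-financing approximation r_net = L∙r − (L − 1)∙c and an assumed borrowing cost c (the 3% is my assumption, not a quoted rate):

```python
# Leveraged return net of borrowing costs: r_net = L*r - (L - 1)*c.
# The borrowing cost c is an assumption for illustration.
L = 6.0      # target leverage
r = 0.021    # the strategy's observed CAGR
c = 0.03     # assumed financing cost on the borrowed (L - 1) portion

r_net = L * r - (L - 1) * c
print(f"net CAGR at {L:.0f}x: {r_net:+.3f}")  # negative: financing eats the return
```

At 6x, the 2.1% CAGR earns 12.6% gross while the financing on the borrowed 5x costs 15%: the leveraged strategy loses money before fees or slippage.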
You have this moving blob of price variances from which you want to extract, by trading, something that should result in: F(0) + Σ(q∙Δp) > B&H. And all you get is: F(0) + Σ(q∙Δp) growing at about r_f.
I would say: “Houston, we have a problem”.