SPY/BND z-score trade

Here's a simple algorithm that normalizes the prices of SPY & BND using a z-score. Based on the difference in SPY & BND z-scores, SPY is either bought or sold, with a long-only constraint. For the parameters chosen, over the period of the backtest, the algorithm smooths out the SPY returns, but doesn't result in an advantage over the benchmark. Perhaps someone can improve it?
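
Here's a rough pandas sketch of the idea for anyone skimming; it's illustrative only, not the exact attached code (the real algorithm runs inside a Quantopian batch transform), using the -2 / +2 thresholds of the original version:

import pandas as pd

def spy_signal(spy, bnd, window=60):
    """Illustrative sketch only, not the attached algorithm.
    spy, bnd: pandas Series of prices; window: lookback in bars."""
    def zscore(prices):
        rolling = prices.rolling(window)
        return (prices - rolling.mean()) / rolling.std()
    spread = zscore(spy) - zscore(bnd)   # how rich/cheap SPY is relative to BND
    signal = pd.Series(0, index=spread.index)
    signal[spread < -2] = 1              # SPY cheap relative to BND: buy
    signal[spread > 2] = -1              # SPY rich: sell (long-only, so just flatten)
    return signal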

Questions/comments welcome.

--Grant


When SPY is sold would it improve the return if you bought TLT, IEF, TIP or BND itself?

Thanks Srinivasan,

I'm still fiddling with the concept. Here's another (rough) version that uses dollar-volumes, instead of prices. There is a "warm-up" period of 30 days, so the comparison with the benchmark may be kinda off.

Grant

Here's a backtest over a longer duration, also using dollar-volumes as I did immediately above. Other than the 2008-2009 downturn, it basically tracks the market. --Grant

So I made the following modifications and ran a simple test...

On the buy side: same -2 Z threshold, but the buy order changed to 25 shares instead of 100.
On the sell side: the Z-score threshold raised to > 3 from the original > 2.

The idea is to be unbalanced, buying gradually in the downs and waiting longer to sell the ups, but selling all if you hit a 3Z top tick.
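
In handle_data terms the change amounts to roughly this (a sketch of the order logic only, not the full backtest; z is the SPY-minus-BND z-score spread, and order/context follow the Quantopian API):

def rebalance(context, data, z, spy):
    # Sketch of the modified rule, not the actual backtest code.
    if z < -2:
        order(spy, 25)            # buy gradually on the way down: 25 shares, not 100
    elif z > 3:
        shares = context.portfolio.positions[spy].amount
        if shares > 0:
            order(spy, -shares)   # 3Z top tick: sell the whole position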

On the original period, it showed 33.2% algorithm return vs 30.1% benchmark, with a 0.43 beta (very good), a 3.23 Sharpe (also very good), 0.20 Alpha, and max drawdown 6.6%.

With that much risk reduction, you should be able to exploit the low beta / high Sharpe to get excess returns instead of risk reduction, if that's what you want, simply by using modest leverage.
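
As a back-of-the-envelope illustration only (assuming return and beta scale roughly linearly with leverage, and ignoring financing costs and compounding):

algo_return = 0.332   # 33.2% algorithm return from the test above
beta = 0.43
for leverage in (1.5, 2.0):
    print(leverage, leverage * algo_return, leverage * beta)
# 1.5x: ~49.8% return at ~0.65 beta; 2.0x: ~66.4% return at ~0.86 beta,
# still below the benchmark's beta of 1.0.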

It needs longer backtests over market periods that aren't uniformly rising, to be sure.

I hope this is interesting.

Jason Cawley
Wolfram Research

Thanks Jason,

Would you be willing to post your backtest?

I did a lot of tinkering with this approach, and hope to get back to it at some point. Seems to have some merit.

Grant

Hi Grant,

Could you provide some detail about your original algo?

My understanding is that:

  1. You use the rolling mean as the security price
  2. You look at the Z score for each of the 2 securities to find out how they perform relative to their mean, and then consider the differences between their Z-Scores

If my understanding above is valid, then over what sample set do you compute the z-score? A collection of 60-day rolling means? Is that what the as_matrix code does? Are you storing a collection of rolling means in a matrix, to later take the z-score of?

If so, how many 60-day rolling means do you look at? Five (the window length)?

Thanks,
Joe

Hi Joe,

Thanks for your interest. I've attached what should be a faithful representation of the algorithm posted initially (note that in a comment I refer to VTI, but it should be SPY). I put in the changes suggested by Jason above, and also do an initial buy into SPY with all of the cash (as a work-around due to the lack of a backtest "warm-up").

In the batch transform, I have:

window = 60  
p = pd.rolling_mean(data.price[sids],window).as_matrix(sids)[window-1:]  

This should be performing a rolling mean over 5 days of minute-level data, with a window for the mean of 60 minutes. The .as_matrix(sids) just returns the data as a numpy ndarray (and I am slicing off the empty rows at the start with [window-1:]). I'd encourage you to verify all of this for yourself--please let me know if I made a mistake!

And, yes, the rolling means are used for the z-score calculation.

The number of rolling means used in the z-score calculation is 5*390-59 = 1891 (you can confirm it by adding a print p.shape line in the batch transform).
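
If you want to sanity-check those shapes outside of Quantopian, here's a stand-alone sketch with synthetic minute data standing in for data.price[sids]; note that newer pandas replaced pd.rolling_mean and .as_matrix with .rolling().mean() and .to_numpy():

import numpy as np
import pandas as pd

# Stand-in for the batch transform's data.price[sids]: 5 trading days of
# minute bars (5 * 390 = 1950 rows), one column per sid.
minutes = 5 * 390
prices = pd.DataFrame(
    100.0 + np.random.randn(minutes, 2).cumsum(axis=0),
    columns=["SPY", "BND"],
)

window = 60
# Newer-pandas equivalent of pd.rolling_mean(...).as_matrix(sids)[window-1:]:
# the first window-1 rows of the rolling mean are NaN and get sliced off.
p = prices.rolling(window).mean().to_numpy()[window - 1:]
print(p.shape)   # (1891, 2) -> 5*390 - 59 = 1891 rolling means per security

# z-score each column of rolling means, as used in the signal
z = (p - p.mean(axis=0)) / p.std(axis=0)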

Hope this helps. Please see if you can improve the algorithm and post your result here.

Grant

Here's the performance of the algorithm immediately above, from the start of 2011. Perhaps the next step is to use "modest leverage" as Jason suggests above. --Grant

Here's the result with:

context.max_notional = 100000.1  
context.min_notional = -100000.0  
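
For context, max_notional / min_notional come from the old Quantopian sample-algorithm template, where they gate new orders once a position's notional value drifts outside the limits. Roughly like this (a sketch only, not necessarily the exact code in the attached backtest; context.spy, context.buy_signal and context.sell_signal are hypothetical placeholders for the sid and the z-score decisions computed elsewhere):

def handle_data(context, data):
    # Sketch of the usual notional gate from the Quantopian 1.0 templates.
    position = context.portfolio.positions[context.spy].amount
    notional = position * data[context.spy].price

    if context.buy_signal and notional < context.max_notional:
        order(context.spy, 25)          # still room under the cap: buy
    elif context.sell_signal and notional > context.min_notional:
        order(context.spy, -position)   # above the floor: flatten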

Grant