Questions on trading cointegrated pairs

Hi all,

I've recently looked into pairs trading of cointegrated stocks. I looked at liquid securities (minimum median daily volume of 500K) with price histories that go back to at least 2005. I wrote a script to check every possible pair out of the roughly 1000 securities and found only a single pair that was likely to be cointegrated. This really surprised me, because that's a lot of possible pairs. Granted, I used a very large time window and only daily adjusted close prices (split adjusted only, not dividend adjusted).
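Roughly, the screen looks like this (a minimal sketch; the exact test and the p-value cutoff here are illustrative, and `prices` is assumed to be a pandas DataFrame of daily adjusted closes with one column per security):

import itertools
from statsmodels.tsa.stattools import coint

def find_cointegrated_pairs(prices, pvalue_cutoff=0.01):
    # prices: DataFrame of daily adjusted closes, one column per security
    pairs = []
    for a, b in itertools.combinations(prices.columns, 2):
        # Engle-Granger two-step cointegration test on the two price series
        _, pvalue, _ = coint(prices[a], prices[b])
        if pvalue < pvalue_cutoff:
            pairs.append((a, b, pvalue))
    return pairs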

My question is: why didn't I find more possible pairs? My theories are: 1) I used too long a window, 2) I should be looking at a larger number of securities (i.e., including less liquid ones), or 3) I should be looking at minute data instead of daily data (though I don't think I want to trade at such a high frequency).

Can anybody with experience with this technique shed any light?

Thanks,
Rudy


Here's a paper (posted by Pravin on https://www.quantopian.com/posts/trading-strategy-ideas-thread ):

http://www.ccsenet.org/journal/index.php/ijef/article/view/33007

Skimming over it, the authors do find trading pairs and simulate a successful trading strategy. It looks like they give decent guidance on how they did their screening.

Here is a naive implementation. I have no use for it because pair trading strategies cannot be used in the contest (they are not dollar neutral). I am sure it can be improved further.

Every month it screens energy stocks for possible pairs and then trades them over the month using 15 minute interval prices.

Pravin,

Could you neutralize it with an energy sector ETF (or a basket of them)? Or maybe just SPY?

Grant

Grant,

I cannot neutralize it because, by definition, pairs trading implies that stocks A and B are related by:

A = intercept + beta * B (regression).

Hence, if I neutralize it with an energy ETF or SPY, the cointegrated relationship breaks.
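Concretely, the hedge ratio comes from an OLS fit along these lines (a sketch; a_prices and b_prices are assumed to be aligned pandas Series of the two stocks' prices):

import statsmodels.api as sm

# Estimate A = intercept + beta * B by ordinary least squares
X = sm.add_constant(b_prices)
model = sm.OLS(a_prices, X).fit()
intercept, beta = model.params

# The spread that is traded; adding an SPY or sector-ETF leg on top changes
# its dynamics, which is why the cointegrated relationship breaks.
spread = a_prices - (intercept + beta * b_prices)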

Best regards,
Pravin

Pravin & Grant,

Why do you say pair trading strategies are not dollar neutral? It seems you can just aim for net zero dollar exposure (long $x of sid Y, short $x of sid Z).

Robert

Because $x of Y and $x of Z might not be co-integrated.

I cloned the algo and started to play around with it. Regardless of the details of the algo, it should be possible to null out beta. For example, for a long-only algo, if the return is high enough, then it is a matter of mixing in an ETF short to bring beta down to zero. In this case, a long ETF should do the trick.
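A minimal sketch of that beta-nulling arithmetic (algo_returns and spy_returns are assumed to be aligned daily return Series; the names are illustrative):

import numpy as np

# Estimate the strategy's beta to SPY from historical daily returns
beta = np.cov(algo_returns, spy_returns, ddof=1)[0, 1] / np.var(spy_returns, ddof=1)

# An offsetting SPY weight of -beta brings the combined beta to roughly zero:
# if the algo's beta is negative, this works out to a long SPY position.
spy_hedge_weight = -beta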

Hmm? I added one line to handle_data():

order_target_percent(symbol('SPY'), 0.25)

and I'm getting:

TimeoutException: Too much time spent in handle_data call
There was a runtime error on line 39.

Somebody's gonna have to shovel more coal into the Q firebox...

@Pravin
That strategy you shared looks really nice! The contest does not require your algo to be dollar neutral, just that you have some hedged positions and that beta is between +/- 0.3. In the case of your algo, beta is a bit outside that band, but that could possibly be addressed in the manner Grant suggested: use an ETF to neutralize any excessive beta tilt in the overall portfolio at each rebalance period (not by introducing the ETF into each individual pair's regression equation).

@Robert Shanks,
You are correct, you can look for cointegrated dollar-neutral pairs. For example, if you had a $100 "Stock_1" and a $50 "Stock_2", you would test for cointegration of the time series 2*Stock_2 - 1*Stock_1, and if it were cointegrated, the mean reversion of that 2:1 spread would be what you trade on.
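A minimal sketch of testing such a spread (stock_1 and stock_2 are assumed to be price Series at roughly those $100/$50 levels; the ADF test is one common choice):

from statsmodels.tsa.stattools import adfuller

# 2 shares of the $50 stock against 1 share of the $100 stock: $100 long vs. $100 short
spread = 2 * stock_2 - 1 * stock_1

# Augmented Dickey-Fuller test: a small p-value suggests the spread is stationary
# (mean-reverting), which is what you would trade on.
adf_stat, pvalue = adfuller(spread)[:2]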


Thanks Justin. I can aim for the November contest.

Grant - the above algorithm is riddled with errors. I will post a new one.

Thanks for the replies everyone and thanks Grant for the paper. I'm going to try to relax my filters to see if I can find more pairs that way.

Turns out the reason I was only finding one cointegrated pair was due to a bug in my own code (I wasn't incrementing an index variable so each new cointegrated pair was being written to the same slot in the list). Doh!

Hi Pravin,

Have you posted a new one as you "promised" on Oct 13, 2015? :-)

Another reason you might not find many pairs over a long time horizon is that assets move in and out of cointegrated relationships. It might be more effective to look for assets that are now moving into a cointegrated relationship, or change the timescale or sampling frequency of your tests.
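One rough way to look for that is a rolling cointegration test (a sketch; the window length and series names are illustrative):

import pandas as pd
from statsmodels.tsa.stattools import coint

def rolling_coint_pvalues(a_prices, b_prices, window=252):
    """p-value of the Engle-Granger test over trailing windows of `window` days."""
    pvalues = {}
    for end in range(window, len(a_prices) + 1):
        a_win = a_prices.iloc[end - window:end]
        b_win = b_prices.iloc[end - window:end]
        pvalues[a_prices.index[end - 1]] = coint(a_win, b_win)[1]
    return pd.Series(pvalues)

# A pair whose recent p-values have dropped below, say, 0.05 while the long-run
# p-value stays high would be a candidate "moving into" cointegration.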


Hi Thomas Chang,

I gave up on cointegration and other pure quant strategies based on price data only. If you want to enter the contest with a million-dollar portfolio, it is better to focus on market-neutral long-short strategies using a mix of fundamentals and quantitative techniques with pipeline. At least that seems to be the future at Quantopian.

Hi Delaney,

Any thoughts on how to use the Q platform to screen for assets that are now moving into a cointegrated relationship? Are there any examples of how to efficiently churn over the Q database to find them (without any a priori hunches as to which pairs might be worth analyzing)?

Also, how does one know when the relationship, if it pans out, has failed? In other words, how does one know to exit a given pair?

It seems like Q would be interested in strategies that find many pairs, and can go in and out of them on a dynamic basis, so the strategy will scale to $1M-$25M with long-term, stable performance. But maybe such a thing doesn't exist?

Grant

Cointegration is good for finding pairs that have very similar factor exposures. With that information, you can quantitatively figure out which one will outperform and short the other to mitigate most of the risk.

Pairs-based strategies are trickier because automated pair selection falls prey to a lot of multiple-comparisons and overfitting bias. That's why it's generally safer to screen candidate pairs you already have intuitions for, as you mentioned. However, one way to approach things might be to take a large basket of assets, fetch the pricing data into the research environment using pipeline, and then compute the covariance matrix. From this basket of candidates you'll want to figure out a way to filter down further and then run cointegration tests. I'm attaching a notebook that does the covariance computation.

Note that you'd of course want to re-run a second, out-of-sample cointegration validation on any pairs you found from this analysis. Because you'll ideally need to wait 3-6 months each time you want to do another round of screening and out-of-sample testing, you'll probably want to have other projects to rotate to. Notice that with a 95% threshold we still find about a million pair candidates. One problem is that you'll probably get a few things that are weirdly covarying with many other things, so you might want to apply pre-filters to this process, such as market cap or liquidity screens in the pipeline.

Let me know if this is helpful. I didn't have much time to put into the notebook, so it's unpolished.
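If you don't have the notebook handy, the screening step can be sketched roughly like this (not the notebook's exact code; `returns`, the 0.95 cutoff, and the use of correlation rather than raw covariance are illustrative choices):

import numpy as np

# Correlation is scale-free and easier to threshold than raw covariance;
# `returns` is an assumed DataFrame of daily returns, one column per asset.
corr = returns.corr()

# Keep only the upper triangle so each pair is counted once and the diagonal
# (self-correlation of 1.0) is excluded.
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))

candidates = (
    upper.stack()                  # (stock_a, stock_b) -> correlation
         .loc[lambda s: s > 0.95]  # highly co-moving pairs
         .sort_values(ascending=False)
)
# These candidates still need cointegration tests and an out-of-sample check.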

As far as knowing when to get out of pairs, Justin did some work on this a while back.

https://www.quantopian.com/posts/pair-trade-with-cointegration-and-mean-reversion-tests

Thanks Delaney,

The notebook is helpful. Thanks for putting lots of comments in it.

There have been some suggestions on Q that clustering techniques might be useful for paring down the list of candidates. Any thoughts on that?

Grant

I think that's potentially a good approach. Machine learning techniques like clustering are generally good as filtering techniques, a.k.a. dimensionality reduction. The important thing is to use them for selecting which parameters you include in your model, and then do a second step of out-of-sample statistical validation. For example, use clustering to select 10 pair candidates, and then do some simpler cointegration testing on an out-of-sample time period to ensure the relationships are meaningful.
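As a rough sketch of that two-step idea (assuming `prices` and `returns` DataFrames; AffinityPropagation is just one clustering choice, and the 0.05 cutoff is illustrative):

import itertools
from sklearn.cluster import AffinityPropagation
from statsmodels.tsa.stattools import coint

# Step 1: cluster stocks by the correlation of their daily returns, used here
# as a precomputed similarity matrix.
corr = returns.corr()
labels = AffinityPropagation(affinity='precomputed').fit_predict(corr.values)

# Step 2: only run the (expensive) pairwise cointegration test within clusters;
# any hits should still be re-validated on a separate out-of-sample period.
candidates = []
for cluster_id in set(labels):
    members = corr.columns[labels == cluster_id]
    for a, b in itertools.combinations(members, 2):
        if coint(prices[a], prices[b])[1] < 0.05:
            candidates.append((a, b))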

Here's a clustering example I had on-hand (credit, I think, to James Jack for the original post, https://www.quantopian.com/posts/1st-attempt-finding-co-fluctuating-stocks - dated Dec. 1, 2012 when Q was still in diapers!).

Would it be possible with the research platform and pipeline to run this sort of thing across all approx. 8000 securities in the database? Or would we run out of memory or lose patience waiting for the computation to complete?

Based on the visualization on http://scikit-learn.org/stable/auto_examples/applications/plot_stock_market.html, the technique does a nice job of clustering companies that one would naturally think should be grouped.

Because you'll ideally need to wait 3-6 months each time you want to do another round of screening and out-of-sample testing

I don't understand this. Can't the out-of-sample period simply be a window of time prior to the current time? In other words, if the strategy is developed using data from 2002-2014, and then I backtest it over 2014-2016, wouldn't that be sufficient? If I wait 6 months from today, why would I think that the probability of the strategy falling apart with real money would be any different than if I used trailing data? Or is the idea that even if I am disciplined and do not use the out-of-sample period in the strategy development, there could be a bias?

Also, if trading is in slow motion (e.g. weekly), as would be dictated by using daily bars, then I'll only have a handful of trades in 3-6 months. So, it seems that I would still be relying quite heavily on an out-of-sample period that includes prior years.

In other words, if the strategy is developed using data from 2002-2014, and then I backtest it over 2014-2016, wouldn't that be sufficient?

Let's say you hold out 2014-16 as out of sample. Then suppose the strategy doesn't work when tested out of sample on 14-16. In that case, 14-16 is no longer out of sample, and you'll need to wait for more data. In practice, ideas almost never work on the first try, so it might be smarter to hold out 4 x 6-month periods to use as independent out-of-sample tests. Of course, issues with sample size and time periods being heterogeneous prevent a perfect solution here.

Additionally, how many strategies have you developed that use data from 01-16? What is the probability, due to multiple-comparisons bias, that one of these strategies looks good historically just by chance? In practice, waiting for new data to come in, much like doing a new experiment in a clinical trial, is the best way to validate your model. Like I said, if you keep a rotation of 6 or so models, you'll usually have one that has just passed the 6-month out-of-sample mark and is ready to test, and the time in between can be spent trying to improve the models and using techniques such as cross-validation to minimize the likelihood they're overfit to your historical window.

Also, if trading is in slow motion (e.g. weekly), as would be dictated by using daily bars, then I'll only have a handful of trades in 3-6 months. So, it seems that I would still be relying quite heavily on an out-of-sample period that includes prior years.

This is basically a small-sample-size problem. Infrequently trading strategies take far more out-of-sample data to evaluate. In practice you want a mix of frequencies in your models so that you're not always waiting forever for a low-frequency strategy to trade enough that you can be sure it's working.

Would it be possible with the research platform and pipeline to run this sort of thing across all approx. 8000 securities in the database? Or would we run out of memory or lose patience waiting for the computation to complete?

I can't speak to this right now as I'm super tied up in QuantCon. I suspect that with close attention to memory management you could do some interesting stuff in research. In my example I compute a 5000 x 5000 covariance matrix if I recall correctly. Remember that if you want rapidly oscillating stocks, you don't need as much time in your lookback window for the covariance. Long windows are only necessary if you want stocks that are cointegrated over long time horizons. Also, stocks move in and out of cointegration, so looking at long windows may not be super useful. There's a tradeoff between the amount of time you need to convince yourself of a cointegrated relationship (longer -> more sample size -> more confidence), and each relationship having a lifespan (more time to convince yourself -> less time to trade it).

You could probably use the 5000 x 5000 covariance matrix to do a lot of cool clustering techniques using graph algorithms like you mention. That should be totally feasible with the computational resources available in research.

@ Delaney,

Regarding the in-sample and out-of-sample business, I think the main problem, as you point out, is that you only get one shot to test out-of-sample. Once the algo is run out-of-sample, then any changes based on the out-of-sample result might be tweaking to boost recent performance, for example. And it's not really out-of-sample, unless one has been living in a cave, ignoring what's been going on in the market. So, I agree that paper trading or waiting for more data may be the only rigorous approach. And besides, it'd be a bit extreme to move into a cave...damp, cool, creepy-crawlies...as a race, we've moved beyond that.

Exactly, there's a ton of subconscious bias baked into any model. My understanding is that statistical clinical research as a whole right now is moving towards the idea that there is really no way to cheat the need for large sample sizes gathered in experiments after the hypothesis was formed, and that many of the fancy techniques for avoiding overfitting (cross-validation, information criterion) should be considered more as a last resort.

@ Delaney,

As a side comment, I gather you are up the learning curve on this pipeline thingy, but I find it pretty unapproachable at this point. Just to get closing prices, there is an awful lot of code required:

class ClosePrice(CustomFactor):
    # Here's the data we need for this factor
    inputs = [USEquityPricing.close]
    # Only need the most recent value
    window_length = 1
    def compute(self, today, assets, out, close_price):
        # Latest close price for each asset
        out[:] = close_price

def make_pipeline():
    """
    Create and return our pipeline.
    We break this piece of logic out into its own function to make it easier to
    test and modify in isolation.
    In particular, this function can be copy/pasted into research and run by itself.
    """
    pipe = Pipeline()

    # Add our factors to the pipeline
    close_price = ClosePrice()
    pipe.add(close_price, "close_price")
    return pipe

pipe = make_pipeline()
results = run_pipeline(pipe, start_date, end_date)

Plus a bunch of stuff needs to be imported.

Guess I should slog through it...

It's high activation energy, but once you get the hang of it, it allows you to do some very advanced computations quite elegantly. We're working on better tutorials for it; you might want to check out my lectures on factor stuff, especially arbitrage pricing theory.

Thanks. I'll have to work my way through a few examples, starting with the one I posted on https://www.quantopian.com/posts/pipeline-tutorial. I'll have to write my own code from scratch to sort it all out. I need to get beyond the Python class constructor, instantiation, syntax stuff first.

I have a fundamental question to ask. I'm relatively new to quant trading and have started to delve into pairs trading. One question I have: are cointegrated pairs necessarily profitable or not? This assumes no commissions or fees need to be paid.

I've been testing a calendar spread with copper futures from China. The prices are highly correlated as well as cointegrated. I just can't get it to profit. Any thoughts? Further, I noticed that when prices are trending up I always lose money on one side of the pair, and the winning side just isn't enough to cover the loss.

Hey James,

Not necessarily. You place bets when the price diverges from the normal value ratio, and make money when it converges back. If the frequency or magnitude of the divergences is small, then it will be hard to make money or recoup transaction costs. Some things are cointegrated, but there's too much attention on the price and there's no arbitrage left to clean up. I would experiment by running your backtest with different levels of transaction costs, and also different amounts of capital. The double-edged sword is that if the deviations in price are small, you need to commit more capital to exceed transaction costs. However, because you're putting more capital through a single instrument, you'll be more vulnerable to slippage.

https://www.quantopian.com/lectures/introduction-to-volume-slippage-and-liquidity
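To make the divergence-size vs. cost tradeoff concrete, here is a rough sketch of a z-score rule on a spread (`spread` is an assumed pandas Series; the lookback and thresholds are illustrative):

import numpy as np

# Rolling z-score of the spread
lookback = 60
mean = spread.rolling(lookback).mean()
std = spread.rolling(lookback).std()
zscore = (spread - mean) / std

# Simple divergence rule: short the spread when it is more than 2 sigma rich,
# long when more than 2 sigma cheap, otherwise flat (a real rule would also
# hold positions until the spread reverts toward the mean).
position = np.where(zscore > 2.0, -1, np.where(zscore < -2.0, 1, 0))

# Before trading, compare the typical profit per round trip (on the order of a
# couple of standard deviations of the spread, in your position size) against
# commissions and the slippage you'd expect for that size.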

You may want to focus a little more on what industry you're looking at... a lot of suppliers, which tend to have a very low stock value, are going to be bigger than their manufacturers. For example, Holcim cement at $57 per share is much larger than Jacobs Engineering, which uses their cement, at $49.