Any way to automate backtests? (Looking for bias)

Let's say I have an algorithm which simply chooses to be in or out of the market with a 1-month holding window. I run this algorithm at the start of each month. The algorithm works great: Sharpe > 1, etc.

To test the robustness of the algorithm, I'd like to know that it performs reasonably well no matter what day of the month it starts on. One way I could do this would be to run backtests that make trading decisions on different trading days of the month (say, offsets of 1-20). The robustness of my algorithm would be verified if the results are consistent no matter which day of the month it runs on.

Other examples might be varying the algorithm start date across months (e.g., run the algorithm on 1-year windows of data and then test across different start months to make sure you didn't choose the ideal start date). You could also use something like this for parameter optimization.

Is there any way to do something like this?

6 responses

For the trading day, you can use routines from http://docs.scipy.org/doc/numpy/reference/routines.random.html to generate random values. Then it is just a matter of running your code repeatedly and collecting the results (note that you can use get_backtest to pull results into the research platform). I think that if you write your settings to record, they'll be available in the research platform.
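A minimal sketch of how that could look in an algorithm, assuming the Quantopian IDE where schedule_function, date_rules, time_rules, record, order_target_percent, and symbol are built in; the 0-19 offset range and the SPY placeholder are just for illustration:

```python
import numpy as np

def initialize(context):
    # Pick a random trading-day offset for this run; running the backtest
    # repeatedly samples different offsets (0-19 here, just as an example).
    context.offset = int(np.random.randint(0, 20))
    schedule_function(rebalance,
                      date_rules.month_start(days_offset=context.offset),
                      time_rules.market_open())

def rebalance(context, data):
    # Write the offset to the recorded variables so it can later be matched
    # up with this run's performance when the results are pulled into research.
    record(day_offset=context.offset)
    # Placeholder decision logic: stay fully invested in SPY.
    order_target_percent(symbol('SPY'), 1.0)
```

From research, get_backtest should then let you pull out the recorded day_offset along with that run's performance, so the runs can be compared side by side.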

Regarding varying start dates, I think it needs to be done manually. I don't know of a programmatic way of doing it (unless you use the research platform, where it should be possible).

John, if I read your question correctly, you have a max of 31 trials you need to complete? Do it by hand.

For this particular example, Toan, you're right, I can. Running this manually 20-30 times isn't the end of the world. However, my point was that this problem would present itself in other ways.

However, in general I find that the longer the holding period a strategy employs, the more sensitive its results are to the starting point. Most directional strategies (with multi-week or longer holding periods) look great if you start in '09 because almost everything went straight up, and most things look bad if you start in '07 for similar reasons. A way to protect the analysis from this kind of bias would be to run the algorithm over some window of time (say, 5 years) repeatedly, starting in each month between '06 and '11. The distribution of performances might give you a better indication of how well your algorithm might perform over 5-year periods in the future (and likely give you a pretty good sense of best- and worst-case scenarios).

This would be 60 runs. Plus, anytime you modify the algorithm, you'd need to redo it.
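As a sketch of how the aggregation might look once you have a daily-returns series per start month (however those 60 backtests end up being launched), assuming pandas/numpy; the results dict is hypothetical and the Sharpe calculation is just a simple annualized version:

```python
import numpy as np
import pandas as pd

def sharpe(daily_returns, risk_free=0.0):
    # Simple annualized Sharpe from a series of daily returns.
    excess = daily_returns - risk_free / 252
    return np.sqrt(252) * excess.mean() / excess.std()

# 60 five-year windows, one starting at each month from Jan '06 through Dec '10.
starts = pd.date_range('2006-01-01', '2010-12-01', freq='MS')
windows = [(start, start + pd.DateOffset(years=5)) for start in starts]

# 'results' is hypothetical: a dict mapping each window start date to the
# daily-returns Series produced by the backtest over that window (e.g. pulled
# out of get_backtest for each manually launched run).
# results = {start: pd.Series(...) for start, end in windows}

# Summarize the distribution of performances across start dates:
# sharpes = pd.Series({start: sharpe(r) for start, r in results.items()})
# print(sharpes.describe())  # spread, best and worst case across windows
```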

If you have any idea on how to run a backtest from the research API, that would be very helpful. To date, I'm only aware of how to pull results.

You can do it in the research environment similar to what was done here: https://www.quantopian.com/posts/sensitivity-analysis-aka-parameter-optimization-of-pair-trade-input-parameters

It even includes heatmaps for your viewing pleasure ;)
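The heatmap part is straightforward once you have one metric per parameter combination; here is a minimal sketch with made-up data, assuming pandas and seaborn (the parameter names and Sharpe values are purely illustrative):

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Hypothetical sensitivity-analysis output: one Sharpe ratio per combination
# of two parameters (e.g. lookback window and z-score entry threshold).
results = pd.DataFrame([
    {'lookback': 20, 'entry_z': 1.0, 'sharpe': 0.8},
    {'lookback': 20, 'entry_z': 2.0, 'sharpe': 1.1},
    {'lookback': 60, 'entry_z': 1.0, 'sharpe': 0.6},
    {'lookback': 60, 'entry_z': 2.0, 'sharpe': 0.9},
])

# Pivot into a grid and plot: fairly uniform colors across the grid suggest
# the result is not overly sensitive to the exact parameter choice.
grid = results.pivot(index='lookback', columns='entry_z', values='sharpe')
sns.heatmap(grid, annot=True)
plt.show()
```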

You can use mechanize or selenium (both are Python packages). It would take you about a week to learn, but then you will know how to automate web testing.
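A rough sketch of the Selenium route, with the caveat that the URL paths and element IDs below are hypothetical placeholders (the real login form and backtest button would need to be found with the browser's inspector); it only illustrates the driver-level calls involved:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()

# Log in (the path and field IDs here are placeholders, not the real ones).
driver.get('https://www.quantopian.com/signin')
driver.find_element(By.ID, 'user_email').send_keys('you@example.com')
driver.find_element(By.ID, 'user_password').send_keys('secret')
driver.find_element(By.ID, 'login-button').click()

# Open the algorithm page and click whatever button launches a full backtest
# (again, the algorithm ID and button ID are placeholders).
driver.get('https://www.quantopian.com/algorithms/YOUR_ALGO_ID')
driver.find_element(By.ID, 'run-full-backtest').click()

driver.quit()
```

In practice you'd also need explicit waits between steps and a loop over whatever setting you're varying, but that's the general shape of scripting the web IDE.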

Mohammad, that's a good reference, thanks! I've bookmarked it to read through later.

Thanks. The notebook examples are what I was looking for.