Backtest in contest

I get a very different output from the contest backtest than from the backtest I ran myself (returns dropped to 4.7% annually from 26.3%). Why is this happening? How can I run a backtest that will behave the same way as in the contest?

Thanks in advance.

9 responses

Are you using the default commissions and slippage in your algo? If not, that could explain the discrepancy.

How do I set it?
I did not set anything about the commission or the slippage.
Which commission and slippage models should I set?

Uuum, I thought that if you don't explicitly set or declare it in initialize(), it reverts to the defaults, but I could be wrong.

Anyway, as far as I know the default settings are:

    set_commission(commission.PerShare(cost=0.001, min_trade_cost=0))  
    set_slippage(slippage.FixedBasisPointsSlippage())  
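
If you want to be explicit about it, a minimal sketch of pinning those defaults in initialize() could look like the following (set_commission, set_slippage, commission and slippage are part of the Quantopian algorithm API and are available in the IDE without imports; the values just restate the defaults quoted above):

    def initialize(context):
        # Explicitly set the cost models the contest is said to use,
        # so a standalone backtest and the contest entry line up.
        set_commission(commission.PerShare(cost=0.001, min_trade_cost=0))
        set_slippage(slippage.FixedBasisPointsSlippage())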

Ok, I'll set this in the future. Lol, that is sad; I was proud of my first realistic algo, which had a Sharpe of 2.3 :-( and is now at 0.96 ....

Then a question: when one develops an algorithm, what is better: i) trying to maximize the returns, or ii) trying to maximize the contest score? The two are very different, since if one manages to minimize the return volatility, one can get a very large score even with a very small return.

Thanks

If you're aiming for a fund allocation, then try to maximize the contest score. Basically, Q is looking for low-volatility algos for fund execution. Good luck!

Hi David,

If you don't have set_commission or set_slippage anywhere in your code, the backtest you ran should be using the same default cost models as the contest. Have you compared the code between the two versions to make sure they're exactly the same? You can find the code that was used to generate a full backtest by navigating to the 'Activity' --> 'Code' heading on the full backtest table.

To get the code for a particular contest submission, go to the contest page and click the backtest icon next to the score plot under the relevant entry.

@Jamie

So I copy-pasted the code from the contest backtest into a new algorithm, just to be sure it was exactly the same code. Then I ran a full backtest starting from the same date and got the same result, so now I see the reason, argh: it was not the same dates.... Sorry for the lost time! But at least now I know about set_commission and set_slippage :-). And I also got a glimpse of what overfitting means :-(.

Not specifying anything will, by default, lead to fixed basis points slippage and the standard per-share commission. Users aren't expected to specify these explicitly, although one can do so to be sure.

@David,

Then a question: when one develops an algorithm, what is better: i) trying to maximize the returns, or ii) trying to maximize the contest score? The two are very different, since if one manages to minimize the return volatility, one can get a very large score even with a very small return.

As James said, it depends on what your goal is, but personally I would recommend focusing on risk-adjusted returns rather than just absolute returns. This is essentially what the contest score does. Keep in mind, though, that the contest score has a floor of 2% volatility (rolling 63-day std of annualized returns, I believe (?)), so you will effectively get penalized if your strategy dips below 2% volatility.

Personally, I don't quite understand the reason behind the 2% volatility floor. I suppose it's there to limit the effect of new strategies 'spoofing' volatility during the backtest, but it also has the effect of 'flooring' position concentration risk as well. In my view anyway - I'm not going to increase the number of positions held if I effectively run the risk of getting penalized whenever my strategy dips below 2% volatility... Incentives, incentives, incentives! :)
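
To make the floor concrete, here's a rough sketch of how a volatility-adjusted return with a 2% floor could be computed (my own reading of the description above, not Quantopian's actual scoring code; the 63-day window and sqrt(252) annualization are assumptions):

    import numpy as np
    import pandas as pd

    def vol_adjusted_returns(daily_returns, window=63, vol_floor=0.02):
        # Rolling annualized volatility of daily returns over an assumed 63-day window.
        rolling_vol = daily_returns.rolling(window).std() * np.sqrt(252)
        # Apply the 2% floor: anything quieter than 2% is treated as if it were at 2%,
        # so pushing volatility below the floor no longer improves the adjusted return.
        floored_vol = rolling_vol.clip(lower=vol_floor)
        return daily_returns / floored_vol

Here daily_returns would be a pandas Series of daily strategy returns; averaging the output over the contest window would give a score-like number.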

As for slippage and commission, I very rarely specify these and just rely on whatever default Q has deployed. They know more about what's realistic than I do. If anything, I'd use something more conservative than the default.
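
For instance, something more conservative than the defaults could look like the sketch below (the exact parameter names for FixedBasisPointsSlippage, basis_points and volume_limit, are my assumption and may differ between platform versions; the numbers are purely illustrative):

    def initialize(context):
        # Deliberately harsher cost assumptions than the stated defaults:
        # higher per-share commission with a minimum ticket charge, and wider slippage.
        set_commission(commission.PerShare(cost=0.005, min_trade_cost=1))
        set_slippage(slippage.FixedBasisPointsSlippage(basis_points=10, volume_limit=0.05))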