New member with questions about contest rules

Hello,

I am a fairly new member. I have algorithms that are basically ready to test, but I am still in the process of learning the Quantopian IDE environment.

I am interested in the competition, but I'm a little confused about the judgement process. From reading various posts on past competitions, it seems like the goalposts are constantly moving, so as a newcomer I am not really sure what the contest is looking for. I understand that updating rules to accommodate changing conditions makes sense. I'm not questioning that. I'm just interested in learning what the goals currently are, so I can tailor my algorithms appropriately.

I'm especially interested in any qualitative rules that are enforced and known by the community, but not necessarily formally written down. For instance, I see that in the recent competition the "winning" algorithm was DQ-ed because of a historical out-of-sample test, but I don't see such a condition in any of the rules I've checked.

I ask about this in particular because tailoring an algorithm to perform well over a 10+ year out-of-sample backtest, while also performing well under out-of-sample "live" conditions, is kind of serving two masters at once. Certainly some approaches can do well under both scenarios. Suppose, though, that I have a particular algorithm that I believe will work well now, given something I've observed, but that I know would not have performed well in the past because of something else. Is such an algorithm just not relevant to the competition?

Another topic is the judging criteria. Are the quantified measurements of performance equally weighted? Does anyone go through the entries to check for algorithms that perform well but don't "excel" on the risk measurements? In other words, are contest entrants bound entirely to the metrics, or is there some procedure for catching good return profiles that the risk metrics in the judging criteria don't capture well?

I know I'm getting into some fine print here, but I don't want to waste anybody's time (mine in learning the IDE, judges in reviewing performance) going forward.

Thanks,
Alexander

Edited for clarity

6 responses

Well, here is the official rules page: https://www.quantopian.com/open/rules.

Edited original post for clarity.

More clarity:

  • Is there equal weighting to each quantifiable metric when comparing entries? (I.e. are returns = beta = Sharpe = drawdown, etc., or are returns weighted most heavily? A rough illustration of what I mean follows this list.)
  • What are the qualitative steps in the judgment process?
  • Should we expect out-of-sample historical tests? I.e. should we optimize over a decade of data as well as focus on making something that works going forward?
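
To make that first bullet concrete, here is a rough illustration of what I mean by "equal weighting" (the metric names and numbers are entirely made up, and this is not a claim about how the contest actually scores entries):

    import pandas as pd

    # Hypothetical entries and contest-style metrics (all values invented).
    entries = pd.DataFrame({
        'annual_return': [0.12, 0.30, 0.08],
        'sharpe':        [1.10, 0.70, 1.40],
        'max_drawdown':  [0.05, 0.20, 0.03],   # smaller is better
        'beta':          [0.10, 0.60, 0.05],   # closer to zero is better
    }, index=['algo_A', 'algo_B', 'algo_C'])

    # Rank every metric (1 = best) and average the ranks with equal weight.
    ranks = pd.DataFrame({
        'annual_return': entries['annual_return'].rank(ascending=False),
        'sharpe':        entries['sharpe'].rank(ascending=False),
        'max_drawdown':  entries['max_drawdown'].rank(ascending=True),
        'beta':          entries['beta'].abs().rank(ascending=True),
    })
    print(ranks.mean(axis=1).sort_values())   # lower combined rank = better

A weighted version would just multiply each metric's rank by a weight before averaging. Is the actual process more like the first or the second?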

Hello Alexander,

It's been confusing, for sure. However, the basic idea is that Quantopian is in the process of building a hedge fund, https://www.quantopian.com/fund. So, over the past year or so, there's been a sausage-making development process that has played out as they've learned how to get what they need from the crowd of users participating in the contest and how to manage winners' algos trading real money.

A few changes I'd anticipate:

  • The contest now has a 6-month out-of-sample paper trading period. Q will need to sort out how to weight the out-of-sample results against the 2-year backtest. Perhaps the pyfolio tool will be formally incorporated into the contest (https://www.quantopian.com/posts/new-feature-comprehensive-backtest-analysis)?
  • Some encouragement/incentive for contestants to look back further than 2 years might be implemented. I think the problem is that some algos might not be back-testable beyond 2 years (e.g. for a fixed, small stock universe), so it is not clear how this would work as a rule. However, it is now clear that if your algo stinks going back 10 years, it'll be rejected (although if you can explain recent good performance, then maybe not).
  • Note that https://www.quantopian.com/fund says "Allocations will range between $1 and $25 million per algorithm," so the $100K level for the contest is 1-2 orders of magnitude off. One might expect that Q would start looking at capacity as well (although my sense is that it gets tricky, since each algo might require a custom slippage model to be developed based on the details of the algo; a rough sketch of where slippage assumptions get set is just below).
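
For reference, here is a minimal sketch of where slippage assumptions live in an algorithm. This is illustrative only: the numbers are just the sort of defaults people tweak, not anything the contest requires.

    # In the Quantopian IDE, set_slippage and the slippage module are built
    # in; when running under zipline directly, these imports are needed:
    # from zipline.api import set_slippage
    # from zipline.finance import slippage

    def initialize(context):
        # Cap each fill at 2.5% of the bar's volume; price impact grows
        # with the fill's share of that volume. Purely illustrative values.
        set_slippage(slippage.VolumeShareSlippage(volume_limit=0.025,
                                                  price_impact=0.1))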

Reading between the lines, Q has an opportunity to line up some initial funding (not from institutions, but from private investors, most likely). To get it, they need to put together a sales pitch showing that they've done their due diligence. So, the more you can do to arm them with evidence that your algo is super-duper, the better your chances of getting money from their funders (there are no rules in capitalism). I would think of the contest as a first cut. If you submit an algo that does well in the contest, then you might have a fundable algo, but you'll need to provide more support (even sending them the code).

The other note is that although Fawce tries to claim otherwise on https://www.quantopian.com/posts/full-winners-returns-data-now-available, the contest winners are the fund at this point, and will continue to be part of the fund at a seed-capital level. Q wants the real-money trading results so that they can use them in their sales pitch to investors. So, if you do win, you'd do best to have predictable returns for 6 months that give investors a warm fuzzy about ramping up to $1M or more in capital. If your real-money returns suddenly jump and no explanation is provided, caveat emptor will start flashing in their minds.

Grant

Hello Alexander, welcome to Quantopian. I'll try to add to the previous answers.

The contest rules have evolved over the last 10 months, and I know that has caused some confusion. The good news is that every contest started and finished with the same rules. We didn't start a contest and then retroactively change the rules. (The exceptions to that are things that changed to the winners' benefit, like when we started paying winners monthly instead of after 6 months.) So, what you see is what you get, month-to-month.

Also, the pace of rule changes has slowed down. We announced significant changes in May and June, and the 6-month duration change in July. August and September didn't have significant announcements, and I don't foresee any major ones this month. If there are any changes for the next contest (as I said, only minor ones are anticipated), I will make every effort to announce them next week.

As for the disqualification this month, I'm hopeful that the 6 months of out-of-sample testing will make that a very rare occurrence in the future. The problem with the 1-month contest is that it was just too easy to get lucky and grab the win. But when one looked at the algorithm over a broader time frame, it was doubtful that the results were repeatable. That was what forced the disqualification. Going forward, it is much harder to have 6 lucky months in a row. I expect the winners of the 6-month contest aren't going to have large discrepancies between their in-sample and out-of-sample performance. There will be a lot of entries that have such differences, but they won't be the winners.

If you have a particular entry you're looking to get feedback on, please run the backtest, create a tearsheet, and then share the tearsheet in the community asking for feedback. The tearsheet doesn't include your source code, and we can review the results. Tearsheets have the option of suppressing positions, too, if you want to hide what you're trading. You can also email us privately at [email protected] if you don't want to share the tearsheet.
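
As a rough sketch of that workflow in a research notebook (the backtest ID is a placeholder, and exact method names may differ slightly depending on the research environment version):

    # Load a completed backtest into the research environment, where
    # get_backtest is available as a built-in. Use the ID from your own
    # backtest's URL; this one is a placeholder.
    bt = get_backtest('your_backtest_id_here')

    # Generate the full tearsheet from the backtest result.
    bt.create_full_tear_sheet()

    # Calling pyfolio directly on the daily returns also works; the
    # hide_positions flag keeps the holdings tables out of the output:
    # import pyfolio as pf
    # pf.create_full_tear_sheet(returns, positions=positions,
    #                           transactions=transactions,
    #                           hide_positions=True)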

Some of the other questions you asked have different answers for judging the contest versus analyzing for the fund. Within the contest we are constrained by the rules we create - we can't change them retroactively, and we hew to them closely. We have to pick a winner every month, on the calendar. With the fund, we have discretion, we learn faster, and we can pick winners when they are seasoned, regardless of the calendar. With the fund we're not using a strict calculation; we're using some broad metrics to screen algorithms, waiting for the out-of-sample data to come in, and then evaluating each algorithm on its own merits.

Disclaimer

The material on this website is provided for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation or endorsement for any security or strategy, nor does it constitute an offer to provide investment advisory services by Quantopian. In addition, the material offers no opinion with respect to the suitability of any security or specific investment. No information contained herein should be regarded as a suggestion to engage in or refrain from any investment-related course of action as none of Quantopian nor any of its affiliates is undertaking to provide investment advice, act as an adviser to any plan or entity subject to the Employee Retirement Income Security Act of 1974, as amended, individual retirement account or individual retirement annuity, or give advice in a fiduciary capacity with respect to the materials presented herein. If you are an individual retirement or other investor, contact your financial advisor or other fiduciary unrelated to Quantopian about whether any given investment idea, strategy, product or service described herein may be appropriate for your circumstances. All investments involve risk, including loss of principal. Quantopian makes no guarantees as to the accuracy or completeness of the views expressed in the website. The views are subject to change, and may have become unreliable for various reasons, including changes in market conditions or economic circumstances.

Dan,

You speak of the fund in the present tense. Is there actually a fund up and running? Why would you use different metrics to select investments for the fund than for the contest? How have you tested these different metrics? Curious and anxious to hear about the actual Quantopian crowd-sourced hedge fund.

Thanks, Dan. That's just what I was looking for.