If I understand the contest rules correctly, the winner is judged solely on his 6 months of live trading performance, and not on his backtest. I haven't kept up with the details, so is this correct? From the rules, we have "Just like our fund selection method, the Participant's algorithm will be judged on a combination of backtest performance and paper trading performance", but I don't see how that combination is actually done. In fact, the rules also say "We will calculate an overall rank by averaging the Participant's rank in each criterion in paper trading." So is the backtest considered at all?
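For what it's worth, here's a minimal sketch of how I read that rank-averaging sentence. The criterion names and scores are made up purely for illustration; the actual contest criteria and scoring are whatever Quantopian uses internally. The point is just that nothing from the backtest enters the calculation:

```python
# Minimal sketch of rank-averaging as I read the rule quoted above.
# Criterion names and numbers are hypothetical, for illustration only.
import pandas as pd

# Hypothetical paper-trading scores for a few entries (higher = better).
scores = pd.DataFrame(
    {
        "sharpe": [1.2, 0.8, 1.5],
        "returns": [0.10, 0.12, 0.07],
        "stability": [0.9, 0.7, 0.95],
    },
    index=["algo_A", "algo_B", "algo_C"],
)

# Rank each criterion separately (1 = best), then average the per-criterion ranks.
ranks = scores.rank(ascending=False)
overall = ranks.mean(axis=1).sort_values()
print(overall)  # no backtest term appears anywhere in this ranking
```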
The simple answer may be that if the 2-year backtest results aren't used to determine the rank, then the ranking is effectively a roll of the dice.
But then if the backtest results are used, one runs into the "over-fitting" problem:
https://www.quantopian.com/posts/q-paper-all-that-glitters-is-not-gold-comparing-backtest-and-out-of-sample-performance-on-a-large-cohort-of-trading-algorithms
In the end, Quantopian should now have some data indicating whether the contest, as constructed, is useful. Last I heard, they'd funded 17 algorithms (with their own seed money) and were working to scale up to take on external capital. That's a decent sample. So did any of those algos come from contest entries? How did they rank? What was the relationship between the backtest and the live trading results? How are things faring with real money? Personally, I'd be more interested in ranking my algos against the current Q fund algos than against the contest entries.
I think if Quantopian knew how to align the contest rules with the Q fund, they'd have done it by now. In fact, they tried, and it didn't work out so well. It used to be that the winner got $100K in seed money; they've since switched to a less transparent, but presumably better, process for seeding fund algos. As the contest stands now, it is a kind of parallel promotional/motivational effort with an unknown relationship to the actual Q fund algo selection process.