Can't help but notice one Charles Brown topping the leaderboard with three algorithms. Curiously, all three have exactly, and I mean exactly, the same backtest stats: 87.23% annual returns, 8.576% volatility, 9.943 Sharpe... you get the idea: they clearly had to be the exact same algorithm for the entire two-year backtest. Then they go live, and suddenly all three show wildly diverging performance, one with a 552.4% annual return and a second with an almost exactly opposite -567.6% annual return.
If I were going to game the system and, like Charles, didn't care how obvious I was about it, I'd overfit a backtest strategy to maximize every criterion being graded and enter just a few days before the contest started. That would guarantee at least 50% of my score, since the first contest will have only 30 days of paper trading and will therefore weigh in 30 days of backtest. The contest designers assumed such a strategy wouldn't work because an overfit algorithm would likely underperform dramatically in paper trading. Charles is brighter than that, though, so he codes his algorithm to change into a completely different algorithm the day the contest starts. This is easily done by hard-coding the switch, or even more subtly by using the CSV upload feature to dynamically change the algorithm into anything at any time. I'm guessing Charlie wasn't one for subtlety and just hard-coded it, because he then appears to adopt the moral-hazard strategy familiar to hedge fund managers with a down portfolio the world over: he codes two opposite strategies with maximum volatility. One of them will tank horribly; the other, by definition, will be a big winner. Since it's not his money on the line and he gets three algorithms, he doesn't care about the loser and can cherry-pick the winner.
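The hard-coded switch described above could be as simple as a date check. Here's a minimal sketch in plain Python of what such logic might look like; the names (CONTEST_START, choose_allocation, the entry labels, and the specific weights) are my own illustration, not anything from a real entry:

```python
from datetime import date

# Hypothetical contest start date; a real gamer would hard-code theirs.
CONTEST_START = date(2015, 2, 2)

def choose_allocation(today, entry):
    """Return a target portfolio weight for one contest entry.

    Before the contest starts, every entry runs the same overfit
    backtest strategy. Afterwards, entries 'A' and 'B' switch to
    opposite maximum-volatility bets, so one of the two is
    guaranteed to post a big win.
    """
    if today < CONTEST_START:
        return 1.0   # the shared, overfit backtest strategy
    if entry == "A":
        return 3.0   # leveraged long bet
    if entry == "B":
        return -3.0  # the exact opposite: leveraged short bet
    return 1.0
```

Run on any pre-contest date, all entries return the same weight (identical backtest stats); run after the contest start, entries A and B return exactly opposite weights, which is the mirror-image divergence visible on the leaderboard.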
The numbers on Mr. Brown's leaderboard make it obvious that he's doing this. The more disturbing possibility is that Charlie is simply not very bright and made it so obvious that there is no other explanation for what he is doing. A subtler gamer would have put together three different overfit scenarios for the backtest so the entries didn't all show exactly the same numbers. He wouldn't choose super-high-volatility strategies with nearly exactly opposite results for his A and B tests in paper trading. He would use approximately the same stocks in the backtest as in paper trading, maybe even taper the backtest strategy over time during paper trading so it wasn't so obvious that he had coded a strategy change. Any number of similar subtleties would make the gaming impossible to detect while still letting through an algorithm that was essentially random when it came to trading live money.
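The crudest tell, identical backtest statistics across one author's entries, could even be screened for automatically. A minimal sketch of such a screen; the field names and the leaderboard data are illustrative, not Quantopian's actual schema:

```python
from collections import defaultdict

def flag_cloned_entries(entries):
    """Group entries whose backtest stats match to reported precision.

    `entries` is a list of dicts with hypothetical keys: 'author',
    'annual_return', 'volatility', 'sharpe'. Identical stats across
    one author's entries suggest a single algorithm submitted thrice.
    Returns the groups containing more than one entry.
    """
    groups = defaultdict(list)
    for e in entries:
        key = (e["author"], e["annual_return"], e["volatility"], e["sharpe"])
        groups[key].append(e)
    return [g for g in groups.values() if len(g) > 1]

# Illustrative leaderboard mirroring the stats quoted above.
leaderboard = [
    {"author": "C. Brown", "annual_return": 87.23, "volatility": 8.576, "sharpe": 9.943},
    {"author": "C. Brown", "annual_return": 87.23, "volatility": 8.576, "sharpe": 9.943},
    {"author": "C. Brown", "annual_return": 87.23, "volatility": 8.576, "sharpe": 9.943},
    {"author": "Someone Else", "annual_return": 12.1, "volatility": 9.0, "sharpe": 1.3},
]
```

Of course, as argued above, this only catches the obvious case: a gamer who jitters his three overfit backtests slips right past an exact-match screen.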
In the end, as long as algorithms are allowed to use backtesting as part of their results and no referee is allowed to inspect the code, this contest is subject to undetectable gaming. You would think Quantopian would want the same algorithm running in paper trading as ran in the backtest, but at this point there is no way for them to verify that, and contestants have every incentive to manipulate their code so it isn't the case. At the very least, Quantopian should run a contest where algorithms with 60+ days of paper trading compete only against other purely paper-traded algorithms. We can all thank Mr. Brown for showing us how easy it is to game the contest, and for being so obvious in how he did so. It's up to Quantopian to restore faith in the contest.