Our next contest kicks off on Wednesday morning at 9:30 AM EDT, and it's time to update the rules.
People will remember that the significant rule change we made last month was to put strict limits on your algorithm's correlation to the market. This had a positive effect on the quality of the algorithms submitted to the contest, and we found a good number of promising algos for use in the hedge fund. Still, we found a couple of problems common to many of the algorithms, and we want the scoring to address them more accurately.
The first problem is excessively overfit backtests. When you see a Sharpe ratio over 4 (let alone 15 or 30!) in backtesting, you look at it with a skeptical eye; when that same algorithm performs terribly in paper trading, your skepticism is vindicated. The scoring system catches up with these overfit algorithms eventually, but it's still not a good thing. People are spending too much time trying to maximize their backtest returns, and not enough time making a good algorithm that is likely to perform consistently in the future. And the leaderboard early in the month is cluttered with algorithms that really aren't going to make it.
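For reference, the Sharpe numbers above are the standard annualized figures computed from daily returns. The sketch below shows the usual formula; it is just a reference point for the numbers we're discussing, not the contest's scoring code.

```python
import numpy as np

def annualized_sharpe(daily_returns, risk_free_rate=0.0):
    """Standard annualized Sharpe ratio from daily returns.

    Shown only as a reference for the numbers discussed above;
    this is not the contest's scoring code.
    """
    # Daily excess returns over a (daily-ized) risk-free rate.
    excess = np.asarray(daily_returns) - risk_free_rate / 252
    # Annualize by sqrt(252 trading days).
    return np.sqrt(252) * excess.mean() / excess.std()
```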
First big rule change: No more backtest component in your overall score. Previously, your overall score was a blend of the paper trading and backtest scores, multiplied by the consistency score. In July and going forward, your score will be just the paper trading score multiplied by the consistency score. That removes any scoring incentive to push backtest results to unrealistic levels. The incentive should instead shift to a) making an algorithm that performs well into the future and b) making it perform consistently over long periods of time. The downside to this change is that the leaderboard will be more volatile day-to-day, particularly early in the month, and in some cases even through several months.
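In code terms, the change looks roughly like this. The function and the old blend weight are stand-ins for illustration, not the actual scoring implementation; the July formula itself is as simple as it sounds.

```python
def overall_score(paper_score, consistency_score,
                  backtest_score=None, backtest_weight=0.0):
    """Illustrative sketch of the scoring change (not the real code).

    Previously, the paper trading and backtest scores were blended
    (the weight here is a stand-in, not the actual one) before the
    consistency multiplier. From July on, backtest_weight is zero.
    """
    blended = paper_score
    if backtest_score is not None and backtest_weight > 0:
        blended = ((1 - backtest_weight) * paper_score
                   + backtest_weight * backtest_score)
    return blended * consistency_score

# July and beyond: the backtest result no longer affects the score.
score = overall_score(paper_score=0.8, consistency_score=0.9)  # 0.72
```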
Second big rule change: Your algorithm must be hedged. When we added the beta filter last month, it had the effect of removing most long-only strategies from contention. There are still too many long-only algorithms on the leaderboard; they are market-timing or momentum strategies. These algos tend to focus on a single stock and go in and out of that stock according to some signal. From our perspective, those algorithms have too much market risk and too much concentration risk. They have low beta because of their particular buy-and-sell patterns, but they are still susceptible to market movements and to bad news about a single company. What we'd much rather see is a pairs trade or, better yet, a long and short basket strategy. If you're using some signal to decide when to go long, we'd like you to use the same signal to find something to go short, and vice versa. We want you to hedge your market risk by always being appropriately long and short. As a practical matter, the scoring system will check your positions at the end of every day to verify that you're hedged with longs and shorts (or are entirely in cash).
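Conceptually, the end-of-day check works like the sketch below. This is an illustration of the rule as stated, not the exact verification code.

```python
def passes_hedge_check(position_values):
    """Illustration of the end-of-day hedging rule (not the exact
    verification code). `position_values` are signed market values:
    positive for longs, negative for shorts.

    Pass if the portfolio holds both longs and shorts, or if it is
    entirely in cash (no open positions).
    """
    longs = any(v > 0 for v in position_values)
    shorts = any(v < 0 for v in position_values)
    all_cash = not longs and not shorts
    return all_cash or (longs and shorts)
```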
A couple of other minor changes:
- Your algorithm's backtest must make money. This is to eliminate algorithms that we'd simply never use but that managed to get lucky in paper trading.
- Your algorithm must make trades in paper trading. This has always been true in practice, but the other rule changes made it a little more plausible for a no-trade algorithm to win. Going forward, no-trade algorithms cannot win.
Looking Forward
I'd like to apologize for the lateness of this post. I would have preferred to put it up 10 or 14 days earlier, but we've been working down to the wire testing these and other possible improvements to the rules.
One of the rule changes that didn't make the cut this time is a change to the contest duration. We want to make the contest period longer: on one hand we have overfit backtests, and on the other we have volatile paper trading results, and the only way we see to improve quality within those constraints is a longer contest. We haven't decided how to implement that yet. One possibility is to create an additional contest with a 3- or 6-month contest period, move the $100,000 prize to that longer contest, and give a smaller prize for the 1-month contest.
As you write your algorithms, you should keep your eye on the long term. The algorithm you submit today is competing in a one-month sprint, but it's also going to be competing in a marathon. They share the same starting line, but the finish lines are different.
Bad Prints
There have been some questions about algorithms that execute trades because our trading system received a bad price print. Our data vendor, like all data vendors, sometimes passes us bad data. Those bad prints might cause an algorithm to place a trade that it shouldn't, or to skip a trade that it would otherwise have made. When this happens, the algorithm will be scored exactly as it traded. Even if the data is corrected later, there is no correction of the trade. We think of this as modeling the real world; you don't get to undo a trade just because you got bad information. If your algorithm is sensitive to price jumps, you might want to add some logic to protect yourself from bad prints.
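As an example of that kind of protective logic, here is a minimal sketch in the style of a handle_data(context, data) hook. It assumes a single security stored in a context.stock attribute you've set up yourself, and the 25% threshold is purely illustrative; tune any guard like this to your own strategy.

```python
MAX_ONE_BAR_MOVE = 0.25  # illustrative threshold: distrust >25% jumps

def handle_data(context, data):
    # Minimal bad-print guard for a single security (context.stock).
    # The threshold and structure are illustrative, not prescriptive.
    price = data[context.stock].price
    last = getattr(context, 'last_good_price', None)
    if last is not None and abs(price - last) / last > MAX_ONE_BAR_MOVE:
        # Suspicious jump: skip this bar instead of trading on a
        # price that may turn out to be a bad print.
        return
    context.last_good_price = price
    # ... normal trading logic using `price` goes here ...
```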
The one notable exception to this rule is that if your algorithm didn't actually make a trade, but is suffering on the leaderboard because of a pricing anomaly, please bring it to our attention so that we can correct it. For instance, if we get a bad print that says that Apple dropped to $1, and it looks like your algorithm suffered a 99% drawdown but immediately recovered, that is a correctable issue. The drawdown was fictional, the portfolio didn't change, and that type of pricing error can be fixed.
As always, feedback is welcome. The community has shaped this contest's evolution, and we appreciate the ideas and advice we receive.