All I'm saying is, if you don't meet the requirements for the Quantopian fund, you shouldn't expect to do well in the competition.
To use an example:
You wouldn't apply to Cambridge University without having met their entry requirements, no matter how well you have done in a single exam. If you did and you didn't get in, that wouldn't make Cambridge a poor university, that would make you an arrogant applicant.
If you were to make your algorithm hedged and continue to do as well out of sample, you might be in with a shot at winning. As it is, however, you didn't meet the requirements and, as a result, shouldn't expect to do well.
The requirements are quite clearly laid out here:
"""""
Beta to SPY
The first change is that your algorithm is now scored according to how correlated your algorithm's performance is with SPY. The lower your correlation to SPY, the better. We already had 6 equal-weighted factors that generate your score; beta is now a 7th factor, all of them still equal-weighted.
Why did we make this change? If you look at a chart of SPY for February and March (the Quantopian Research notebook is attached), you could almost believe that the Quantopian Open was the driving factor in the S&P performance. On February 2, the start of the February judging period, SPY got on a rocket and headed for the stars. Grant's algo rode that rocket to the top of the charts, and he started trading real money on March 2. On March 2 the rocket ran out of fuel, and Grant's algo suffered! As we build the Quantopian hedge fund, we need to find algorithms that are uncorrelated with each other. The biggest correlation we're seeing today is around the S&P 500. This change is designed to encourage algorithms that are not correlated with it.
Consistency Between Paper Trading Results and Backtesting
The second change is that algorithms are now scored on how consistent they are between their paper trading returns and backtesting returns. The more consistent you are, the better. This factor is added to the calculation at the very end. After we compute what used to be referred to as your final score, we now multiply it by the consistency number, and the result is the new final score. This is applied gradually over the first few days of trading while the paper trading record is very volatile, and is fully applied at 20 days of trading.
We put in this change for a couple of reasons. The biggest reason is that we were seeing a lot of algorithms that had really good backtests that just weren't doing well in paper trading. This isn't too surprising when you think about it - if you're trying for a good score, you invest time in the backtest, and it's pretty easy to fall into data-mining, data-snooping, curve-fitting, or whatever you want to call that mistake. If you're prone to that mistake, you're not going to make an algorithm that lasts in the long run. We want to strongly encourage people to use good practices with out-of-sample data testing. If we make it very clear that a good backtest, on its own, can't win the contest, we hope to get more careful thought about how to write an algorithm that will perform well in paper trading.
The second reason is to discourage cheaters. We've seen a few instances where contest submissions are being deliberately gamed by submitting a "perfect backtest" and then a coin-flip over several entries for the paper trading. We've disqualified them, and we will continue to disqualify them in the future. The scoring change is a bit of a safety net, and a clear signal that it's not a strategy that will succeed.
For the detail-oriented: we're computing the consistency score using a kernel-density estimate with Gaussian kernels, as implemented in the Python scipy package. The backtest daily returns and the paper trade daily returns are each fitted to a distribution separately. The difference between the areas of the two distribution curves is used for the consistency score.
Future Changes
When we kicked off the Quantopian Open we promised that we would iterate and improve the contest. We don't think today's changes are the last word. There will be future scoring and rule changes as we find necessary. We hope that you've found the previous discussions about scoring to be helpful; we certainly have. As always, we welcome and value your feedback about how we can make the contest better.
"""
Although I wouldn't invest in an algorithm which only made $37 on a $10,000,000 investment, I also wouldn't invest in an algorithm with 28% volatility. It is up to Quantopian to protect their own backs, and I completely understand the choice to only accept algorithms within a very small band. If I were taking investors' money and putting it into an algorithm made by an untested developer, I would put it into the algorithm with the lowest possible volatility that was also hedged. I definitely wouldn't put it into an algorithm with massive 28% fluctuations...
These are my entries; they meet the requirements and have been doing well so far. They both score higher than yours, and they do so by doing what Quantopian has asked for.
https://www.quantopian.com/leaderboard/37/5a252290fd539e0010413317
https://www.quantopian.com/leaderboard/37/5a15936df8d24800112fe37d
My advice to you is: if you want an allocation, add a line to your code along the lines of order_target_percent(sid(8554), -0.01) to meet the hedging requirement.
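For anyone unsure what that line does: sid(8554) is SPY on Quantopian, and a negative target percent holds a short position in it. The mechanics are simple: shorting a fraction w of SPY lowers your beta to SPY by exactly w, since beta is linear in the portfolio weights. A stdlib-only sketch (the daily return series below are made up purely for illustration):

```python
def beta(strategy_returns, benchmark_returns):
    """OLS beta: covariance of strategy vs benchmark over benchmark variance."""
    n = len(strategy_returns)
    ms = sum(strategy_returns) / n
    mb = sum(benchmark_returns) / n
    cov = sum((s - ms) * (b - mb)
              for s, b in zip(strategy_returns, benchmark_returns)) / n
    var = sum((b - mb) ** 2 for b in benchmark_returns) / n
    return cov / var

# Made-up daily returns for illustration only.
spy = [0.010, -0.005, 0.007, -0.012, 0.004]
algo = [0.012, -0.004, 0.009, -0.010, 0.005]  # clearly rides the market

# Add a 20% short-SPY leg to the strategy's returns.
hedged = [a - 0.20 * b for a, b in zip(algo, spy)]

print(beta(algo, spy), beta(hedged, spy))  # hedged beta is lower by exactly 0.20
```

A -0.01 weight, as in the suggested order, only shaves 0.01 off your beta; a strategy with a large beta to start with needs a correspondingly larger short leg to look genuinely hedged.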