Scoring Changes for the April Quantopian Open

We have two changes to the scoring for the April contest. We just posted the first leaderboard with the updated scoring system. The March contest, currently in progress, is unaffected. The changes were driven by things we observed and learned in the almost two months we've been running the Quantopian Open. I discussed the changes in a webinar yesterday, and am putting them down in writing below.

Beta to SPY
The first change is that your algorithm is now scored according to how closely its performance is tied to SPY: the lower your beta to SPY, the better. We already had 6 equal-weighted factors that generate your score; beta is now a 7th factor, with all 7 still equal-weighted.

Why did we make this change? If you look at a chart of SPY for February and March (the Quantopian Research notebook is attached), you could almost believe that the Quantopian Open was the driving factor in the S&P's performance. On February 2, the start of the February judging period, SPY got on a rocket and headed for the stars. Grant's algo rode that rocket to the top of the charts, and he started trading real money on March 2. On March 2 the rocket ran out of fuel, and Grant's algo suffered! As we build the Quantopian hedge fund, we need to find algorithms that are uncorrelated with each other. The biggest correlation we're seeing today is with the S&P 500. This change is designed to encourage algorithms that are not correlated with the market.
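
For concreteness, beta here is the slope of a regression of an algorithm's daily returns on SPY's daily returns. A minimal sketch of that calculation in Python (illustrative only; this is not the exact scoring code):

    import numpy as np

    def beta_to_spy(algo_returns, spy_returns):
        # Slope of the regression of algo daily returns on SPY daily returns.
        cov = np.cov(algo_returns, spy_returns)   # 2x2 covariance matrix
        return cov[0, 1] / cov[1, 1]

    # Synthetic example data:
    rng = np.random.default_rng(0)
    spy = rng.normal(0.0005, 0.01, 252)            # a year of SPY daily returns
    algo = 0.3 * spy + rng.normal(0, 0.005, 252)   # an algo that partly tracks SPY
    print(beta_to_spy(algo, spy))                  # prints roughly 0.3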

Consistency Between Paper Trading Results and Backtesting
The second change is that algorithms are now scored on how consistent they are between their paper trading returns and backtesting returns. The more consistent you are, the better. This factor is added to the calculation at the very end. After we compute what used to be referred to as your final score, we now multiply it by the consistency number, and the result is the new final score. This is applied gradually over the first few days of trading while the paper trading record is very volatile, and is fully applied at 20 days of trading.

We put in this change for a couple of reasons. The biggest reason is that we were seeing a lot of algorithms that had really good backtests that just weren't doing well in paper trading. This isn't too surprising when you think about it - if you're trying for a good score, you invest time in the backtest, and it's pretty easy to fall into data-mining, data-snooping, curve-fitting, or whatever you want to call that mistake. If you're prone to that mistake, you're not going to make an algorithm that lasts in the long run. We want to strongly encourage people to use good practices with out-of-sample data testing. If we make it very clear that a good backtest, on its own, can't win the contest, we hope to get more careful thought about how to write an algorithm that will perform well in paper trading.

The second reason is to discourage cheaters. We've seen a few instances where contest submissions are being deliberately gamed by submitting a "perfect backtest" and then a coin-flip over several entries for the paper trading. We've disqualified them, and we will continue to disqualify them in the future. The scoring change is a bit of a safety net, and a clear signal that it's not a strategy that will succeed.

For the detail-oriented: we compute the consistency score using a Gaussian kernel-density estimate from the Python scipy package. The backtest daily returns and the paper trading daily returns are each fit to a distribution separately, and the difference between the areas of the two distribution curves is used for the consistency score.
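
One plausible reading of that description, as an illustrative sketch (the exact scoring code isn't shown here, so treat this as an approximation):

    import numpy as np
    from scipy.stats import gaussian_kde

    def consistency(backtest_returns, paper_returns):
        # Fit a Gaussian KDE to each daily-return series separately.
        kde_bt = gaussian_kde(backtest_returns)
        kde_pt = gaussian_kde(paper_returns)
        lo = min(backtest_returns.min(), paper_returns.min())
        hi = max(backtest_returns.max(), paper_returns.max())
        pad = 0.25 * (hi - lo)
        grid = np.linspace(lo - pad, hi + pad, 512)
        # Area between the two density curves: near 0 for identical
        # distributions, approaching 2 for completely disjoint ones.
        diff_area = np.trapz(np.abs(kde_bt(grid) - kde_pt(grid)), grid)
        return 1.0 - diff_area / 2.0   # map to [0, 1]; 1 = most consistent

    rng = np.random.default_rng(1)
    bt = rng.normal(0.001, 0.010, 504)   # ~2 years of backtest daily returns
    pt = rng.normal(0.001, 0.012, 30)    # ~1 month of paper trading returns
    print(consistency(bt, pt))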

Future Changes
When we kicked off the Quantopian Open we promised that we would iterate and improve the contest. We don't think today's changes are the last word. There will be future scoring and rules changes as we find them necessary. We hope that you've found the previous discussions about scoring to be helpful; we certainly have. As always, we welcome and value your feedback about how we can make the contest better.

Disclaimer

The material on this website is provided for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation or endorsement for any security or strategy, nor does it constitute an offer to provide investment advisory services by Quantopian. In addition, the material offers no opinion with respect to the suitability of any security or specific investment. No information contained herein should be regarded as a suggestion to engage in or refrain from any investment-related course of action as none of Quantopian nor any of its affiliates is undertaking to provide investment advice, act as an adviser to any plan or entity subject to the Employee Retirement Income Security Act of 1974, as amended, individual retirement account or individual retirement annuity, or give advice in a fiduciary capacity with respect to the materials presented herein. If you are an individual retirement or other investor, contact your financial advisor or other fiduciary unrelated to Quantopian about whether any given investment idea, strategy, product or service described herein may be appropriate for your circumstances. All investments involve risk, including loss of principal. Quantopian makes no guarantees as to the accuracy or completeness of the views expressed in the website. The views are subject to change, and may have become unreliable for various reasons, including changes in market conditions or economic circumstances.

31 responses

Solid changes, the lot. Publishing that Gaussian kernel code would be useful too, so that one could test one's own strategy in and out of sample (manually) to see how it performs.

Unfortunately, the one aspect that has caused me to dismiss the whole project as an exercise in futility remains the same: the selection process. Of the 270-odd strategies in the Open at this point, easily 260 of them will never make the grade. Realistically, only the top 10 at any one point will go on to vie for the top slot; a single top slot. The other 260 are just fodder to make the top 10 look good. It would be a more accurate judgement if the obviously low-grade strategies removed themselves from the contest.

Add to this the fact that only 5 algos, of the hundreds offered, will be selected over the coming months. And those already on the list (most likely within the top 10) will be the ones selected in May, June, and so on. 5? Out of hundreds? Why bother competing? You'll state that any new algo can easily jump in and rise to the top within a month (or less), but the likelihood of this happening is slight. Those posted back in February have a clear lead. Note that I'm not complaining; just pointing out an additional, potential flaw. I'd only like to see a broader, more egalitarian system in place, for experimental, philosophical purposes.

Two potential fixes to the above are:

1) Pick the top 4 each month and give each $50k. Watching more ponies in the real race would be better for your bottom line as well as the contestants'. Perhaps narrow each winner set down by half every month: a contest within a contest.

2) Clear the field at the start of every new contest. Do not carry over the pool from month to month. If you don't clear the pool, the last 2 or 3 contests are likely to lack any significant new contestants. Why try? The probable winners have already been cooking for half a year or so.

More winners and a fresh pool would go a long way toward enticing new quants to submit better strategies.

This of course grinds against the Q's new objective of being a hedge fund that uses the contest to harvest likely profitable algos. One would want the longest track records to pull strategies from. But as it stands, a level contest the Open is not.

My read is that a few things are at play here:

  1. Quantopian is trying to understand if the whole idea of a crowd-sourced hedge fund is gonna have legs, and how they might pull it off. They have upwards of 35,000 registered users, and have been broadcasting their platform for a number of years now to the masses, but going on its third month, contest participation has been relatively low. At $1M-$5M per manager, they need thousands of managers to get to the gigadollars in capital under management. So, if the contest runs for 6 months and only a few hundred individuals enter, then, "Hmm? Is this thing gonna work?" Of course, they could decide to allocate $100M or more to each of a hundred managers (still a big number), but then it wouldn't exactly be crowd-sourced, since managing $100M sounds like a full-time job to me.
  2. It is a means to provide quantitative feedback to users, to give them a feel for the requirements of a hedge-fund-worthy strategy, as they develop their algos. My winning algo, as Dan alludes to, is probably a complete failure in this regard, due to its high beta. Had they had the beta metric in place earlier, I would have figured that out on my own, without their $100,000 of real money.
  3. It is an engineering prototyping effort to see how Q might manage, say, 10,000 algo submissions to their crowd-sourced hedge fund on a rolling basis, in an automated fashion. The contest gives the Q team something definite to iterate, until they are ready for prime time.
  4. Presently, it appears to be the only means to select algos/individuals as managers (unless folks are being contacted individually). So, although the number of winners is limited, I am surmising that all contest participants have a shot at getting $1M-$5M as a manager. If this is the case, Q really hasn't made it clear. I can't imagine that if someone pulls off a stellar second place, Q would just ignore the results.

Quantopian Hedgefund Outperforms Other Funds By 37%.
Wall Street Journal October 4, 2016

There may be two competing concepts here: the Open vs. the Fund.

For the Open one would want fresh, new, hot, buzz, excitement. (Like the horserace metaphor one of the Q people tried to engender.)
For the Fund one would want established, stable, consistent, methodical. Nothing like one would want to hype a contest.

If the Fund is (and it appears to be so) the primary driving force, then the Open's mechanics are secondary. Get a couple hundred freely developed algos grinding away for months and years, and cherry-pick over time to slowly build a portfolio of strategies. But don't expect the hype of the contest to carry much further than, well, about now. "The horses are settled in for the long haul and are rounding the 12th corner. Andddd, it's the same 'ol pack in the front. I need a beer break. I'll be back in a month with an update..."

But who knows. Except Quantopian of course. They probably have metrics as to how the pace of the submissions has changed and whether or not the Open, as a true competition, is actually still viable. Or whether the current entries, or at least the top 10 of them, are "it" for the foreseeable future.

My ignorance about beta leads me to ask:
Suppose the goal of my algorithm is to consistently do better than SPY during both bull and bear markets.
And, say, I happen to achieve that with an algorithm that very consistently beats SPY by 20% over any 6-month window, but in other ways generally follows the ups and downs of SPY.

What would my beta look like?
Would such an algorithm be penalized under this new rule?

The new beta metric only counts for 1/7th of the overall score, but the idea is that a combination of high alpha and low beta would be best. Ideally, for the hedge fund, I gather they want something that grows like C = C0*(1+r)^n, with C0 being the initial capital, r the rate of return, and n the number of compounding periods, so that their investors are basically putting money into a zero-risk, high-yield bank CD. If they get enough uncorrelated strategies, the hope is that they can construct such an investment, with a high return. So, if your algo tracks the market, along with everyone else's, then Q will have a challenge getting C = C0*(1+r)^n behavior. At least this is my sketchy assessment at this point.
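
To illustrate with a toy sketch (synthetic numbers, purely for illustration): averaging N uncorrelated return streams shrinks daily volatility roughly as 1/sqrt(N), which pushes the equity curve toward that smooth C0*(1+r)^n compounding:

    import numpy as np

    rng = np.random.default_rng(2)
    n_days, mu, sigma = 252, 0.0004, 0.01   # daily mean return and volatility

    for n_strats in (1, 10, 100):
        # Equal-weight portfolio of n_strats uncorrelated return streams.
        daily = rng.normal(mu, sigma, (n_strats, n_days)).mean(axis=0)
        equity = np.cumprod(1 + daily)
        print(f"{n_strats:3d} strategies: daily vol {daily.std():.5f}, "
              f"final equity {equity[-1]:.3f}")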

Per http://www.forbes.com/2007/11/05/risk-alpha-beta-pf-education-in_rl_11050investopedia_inl.html:

A beta of 1.0 indicates that the investment’s price will move in lock-step with the market. A beta of less than 1 indicates the investment will be less volatile than the market, and correspondingly, a beta of more than 1 indicates the investment’s price will be more volatile than the market. For example, if a fund portfolio’s beta is 1.2, it’s theoretically 20% more volatile than the market.

So, indeed, it seems that you could better the market by 20% over any six-month window, but have a beta of 1.0, for which you would be penalized in the contest. Your excess return would be captured in alpha.

Incidentally, in the Open scoring changes yesterday for April, looking at the top 20 only: 11 of the 18 ranked there were bumped out, 7 remained, and 7 new folks appeared, while overall scores went down, ballpark ~3% to ~10%. I'm a fan of the honing/evolution of the process (and not just because my rank went up by 7; that's just one day's result. The Contest/Open is making me roll up my sleeves on certain aspects I might have overlooked).

Changes seem good to me. I'd just like to point out that many of the (original) stats are very collinear/correlated, and importantly, the Calmar and Sharpe ratios have volatility or max drawdown in the denominator. This benefits algorithms with ultra-low volatility and drawdown, since you earn points from quantile rankings of features that are calculated with a risk stat in the denominator.
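
To make the denominator point concrete, here is a sketch using textbook definitions of the two ratios (not Quantopian's scoring code): an ultra-low-volatility algo will typically post a higher Sharpe and Calmar than a higher-returning but more volatile one:

    import numpy as np

    def sharpe(daily_returns, periods=252):
        # Annualized Sharpe ratio (risk-free rate assumed zero).
        return np.sqrt(periods) * daily_returns.mean() / daily_returns.std()

    def calmar(daily_returns, periods=252):
        # Annualized return divided by maximum drawdown.
        equity = np.cumprod(1 + daily_returns)
        drawdown = 1 - equity / np.maximum.accumulate(equity)
        annual = equity[-1] ** (periods / len(daily_returns)) - 1
        return annual / drawdown.max()

    rng = np.random.default_rng(3)
    calm = rng.normal(0.0003, 0.002, 504)   # low-return, ultra-low-vol algo
    wild = rng.normal(0.0010, 0.020, 504)   # higher-return, higher-vol algo
    print(sharpe(calm), calmar(calm))       # roughly: high Sharpe, high Calmar
    print(sharpe(wild), calmar(wild))       # roughly: lower Sharpe, lower Calmar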

I just watched your webinar; you wondered why there are few algorithms using fundamental data. I considered it, but it seemed unlikely that any algorithm which relied on fundamental data could win the contest, given how points appear to be calculated, for much the same reasons as above.

It might be worthwhile to think up some novel performance measures which are not inter-related: equity-curve slope consistency, conditional beta, skew, that sort of thing?

Thanks, all, for the feedback.

The contest and the fund: It's very important that the contest and the fund be aligned. We can't on the one hand say that we want algos of type X, and on the other give financial rewards to people who create algos of type Y. That's a misalignment of incentives that I'd very much like to avoid.

I'm interested in the perception that someone might think "I can't win" and decide not to enter. I don't think that's a wise conclusion.

  • Looking at the top 10 for March, 4 of them are new entries for March and 6 were entries from January. That fraction runs all the way through the top 50 at least. In other words: a new entry is quite competitive with a well-aged entry.
  • We're going to need dozens of algorithms in the hedge fund, far more than there are winners of the Quantopian Open. The hedge fund will, presumably, be significantly more lucrative than the Quantopian Open (algo writers won't get 100% of the profit, but they'll be managing far larger sums of money).

Still, perception is reality, so if people perceive the contest as unwinnable then entries will indeed be deterred. We're thinking about other prizes that make sense for people entering. I'm interested in other ideas that might help. I hear the suggestion of "clearing the boards" but that would be contrary to the fund goals, so we're unlikely to choose that option.

As for adding other performance measures: we very well might! For now, the biggest change we want to see in the pool of entries is less beta correlation. When we look at how the pool evolves we may find a new correlation that needs to be corrected. We'll see what the future holds.

Thought I'd chime in on the topic of potential entries/perception of inevitable defeat, and at least give my own reason for not entering the contest (I hope this isn't too off-topic) despite fully supporting this idea of a crowd-sourced hedge fund and understanding Qs need to test things out. As a disclaimer, I've only been using your platform for a couple of months and just plugged in my first algo to paper-trade (IB) yesterday (fingers crossed), so I'm a Q-newbie (Qewbie? Qbie?).

I initially intended to enter the Open. I think it's a great way for Q to get a feel for how things are operating, and it's a great way for me to see how my algorithm performs relative to others' (I'm especially curious to see the correlation statistic). But my algo trades rather frequently, and the $0.03-per-share transaction costs simply aren't reasonable for it. This may seem like a small issue, but that's 4x IB's fixed pricing of $0.0075/share. Incorporating at least this standard rate, if not the volume-based pricing structure they have, may give algo writers more freedom, and thus give you the diverse pool of algos you're looking for.
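
(For reference, outside the contest an algorithm can override the default cost model in initialize(); a sketch assuming the standard Quantopian commission API, with IB's advertised fixed pricing plugged in. Contest entries, of course, are still scored with the contest's default costs.)

    def initialize(context):
        # set_commission and commission.PerShare are part of the Quantopian
        # algorithm API (available inside an algorithm without imports).
        # IB's fixed pricing, used here purely as an illustration.
        set_commission(commission.PerShare(cost=0.0075, min_trade_cost=1.00))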

An interesting thing to see would be if you have any sort of bias in the Open algos because very-short holding period algos are so heavily penalized. Who knows, I might still submit an algo, if just to get my hands on that correlation score, but I would likely have to force it to trade less to make it a contender.

Picture months from now, July 4 say, with just one more winner to be picked on August 1st. For the sake of argument, we'll assume that the pool of strategies in the Open has grown 5-10% per month; there are now 450 strategies running. The Q has peeled off the 5 best strats over the last 6 months, and fate has dealt their cards, such as they are. Then along comes Byron, a hot quant from Georgia Tech with a serpentine sense of python, who manages to slither out a damn decent strategy. But...

There are now 30 algos crowded around the 90s mark (85-95), and the market is convulsing. The VIX is over 30, and the leaderboard, at least the top 5% or so of it, is writhing back and forth as the algos strive for the top. These top strategies, however, have been in the running for months and months. How can our boy Byron ever compete? Would he even want to? As it stands, I cannot see that he would.

So what would make him want to?

A chance. Isn't that what every quant here is yearning for? Every strategy submitted so far, at least those not born of connivance or chicanery, was done with a touch of hope. So how to foment such hope in Byron? He has to know his efforts at quintessential quantification have at least a chance of being selected to participate in the financial fantasy that Quantopian is offering. How can his strategy rank? As I see it, the primary way would be to gate the fresh horses at the same starting post, and then cream the crop by skimming the top X% into their own pool. That's right: take the top fraction of algorithms from every fresh race and create their own fund, portioned by position at the finish.

Our player Byron would see that he does indeed have a chance at earning a top slot in a considerably more equitable contest than what he will witness come August.

As Quantopian's vetting and judgement mechanism matures, wouldn't it be prudent to expand the selected number, and therefore variety, of strategies? Listen to Fawce and his quotes of Markowitz: diversification is your goal.

Good luck Byron.

Well, it's just freakin' a lot of work! And now that Q wants algos that are orthogonal to the market and economic winds, it's even more work! Aside from knowing a little Python and getting up the learning curve on the platform (frankly, I still stumble), there's identifying a strategy, developing it, getting it to work well over the past two years in a backtest (ideally without bias), and then deploying it to simulated live trading without it crashing, all while conforming to the various contest rules. And even if a contestant wins, there's no immediate reward, but more of a coin toss in six months to determine if there will be a payout. So, I can see how the Open might be a bit daunting and not worth the effort for a guy who stumbles across it on the web.

Regarding needing dozens of algos for the hedge fund: it'll take more than that if we're talking $1M-$5M per algo. Even 25 algos at $2.5M each is only $62.5M in capital deployed, which doesn't sound like enough to make it a business for Q.

Hey Grant,

I'm one of those guys off the web you speak of above that has stumbled across this contest. I sent you a message on Facebook, would love a response.

Antoine: Thanks, that's good feedback. Yes, we need to revise the slippage and commissions.

Market: Yes, that chance to win is definitely one of the motivators. And yes, we are definitely going to be skimming off the top X% and putting them into the fund! It's going to be more than one per month, for sure.

Grant: For sure. That work hurdle is a hard one. We're going to keep improving the platform and try to make it easier, but there's some level of effort that is going to be a requirement.

Generally: We're really pleased with how the contest is going, and we're not going to stop in July. We're going to run the contest indefinitely. I can't commit to forever, of course, but we are going to keep running monthly contests for as long as I can see.

Hi Dan,

One thing that may not be clear to folks trying to sort out if the Open is worth the effort is that they'd also be competing for a shot at a spot in the hedge fund. When I go to https://www.quantopian.com/open and search on 'hedge' it doesn't say anything to the effect of "we are definitely going to be skimming off the top X% and putting them into the fund" as you say above. If all it takes is a two-year backtest and one month of simulated live trading to be considered, then it's a great opportunity that you need to highlight, especially if they'd have a shot at $1M-$5M in capital as a manager (and you split a traditional 2/20 fee structure 50/50).

Grant

I absolutely agree with Grant on this. I figured that Open winners would also be considered/looked at closely for the fund, but not necessarily the runners-up or top X%. Definitely something to highlight if that's what Q is planning. Even without binding yourself to a fee structure, capital amount, or X% value, I'd likely reconsider the Open knowing that you're at least looking into the top tranche of algos and not just the winners.

Cheers,
Antoine

And in a separate forum thread, Jess Stauth had alluded to potentially another avenue to getting into the fund, with paper trading results only. Was she referring to the Open? Or will there be something else announced? It seems that the real opportunity is that there will be N fund slots this year, 10*N to 100*N next year, and so forth, with $1M-$5M of capital per slot, so when I land on quantopian.com, it should say "You've hit the jackpot! Submit your algo, and have a shot at $1M-$5M in capital!" rather than "Your..." blah, blah, blah. Oh, just another trading API. No big deal. Move on. Frankly, I think you have something to sell, and you're just not selling it.

Grant, I think you are spot on that our messaging on our homepage and in other places doesn't effectively convey the developments in our business over the past year. That is something we are working on. We need to communicate the value of the opportunity we're presenting to anyone, anywhere. It's like you were sitting in our product planning meeting yesterday :)

In terms of Jess's allusion to using paper trading: we as a company had previously said that to be eligible for the fund, you'd need a real-money track record. We no longer think that's the case, and the Open is the most obvious way for a user and algo to develop an out-of-sample track record through paper trading. But really, you can pursue other methods as well: paper (or real-money) trading on Quantopian outside the contest. We'll be able to evaluate those algorithms as well (unlike quants who approach us wanting to use track records generated off the platform).

Hope that helps,
Josh

Are there timelines in place for establishing formal incentives for algo owners who get accepted as managers? I've been tracking this thread, and it seems like a lot of the future financial gain for the algo owner is up in the air. I've read the manager page and have observed a lot of "up to x%" verbiage that does not resonate well with me, and most likely not with other potential algo owners either.

In my case specifically, I have an algorithm that does well in Forex (written in Java). Porting it to Python and stock trading would take some time, so I was wondering what kind of financial opportunity would exist in the future if the hedge fund is successful.

Resonating with Michael's reply above, one way to get more people to spend the time to write algos for the contest/fund is to have clear requirements and rewards. If I do X, I'll get Y, with probability P.

@ Josh,

But really, you can pursue other methods as well, paper (or real money) trading on Quantopian outside the contest. We'll be able to evaluate those algorithms as well.

What does this mean, in practice? Do you mean at some future date, users will be able to submit algorithms through a channel other than the Open? Or are you automatically evaluating all algos, live trading on Quantopian, and will contact folks with algos that have fund-worthy attributes and performance? Any advantage to a real-money track record? If so, how much? And what will be the feedback? A simple thumbs up/thumbs down, or something more like the ranking against metrics that you apply to the Open contestants?

In practice, I don't think we have it worked out. But the baseline observation is that an algo that has been paper trading generates much the same data we'd need for evaluation as the contest algos do.

So in the future, if and when we have a process for evaluating non-contest algorithms for the fund, we would have the ability to evaluate your work.

For example, the 3-entry limit doesn't mean we expect someone to have absolutely only 3 algos worthy of consideration.
Nor should a winner like yourself feel that only 1 algo will be eligible now.

The Quantopian vision clouds as it ages. There appear to be numerous and varied intents stated in the last six months with no clear story to follow. Now, of course, the Q is not beholden to anyone but their VCs, but some clarity of offering might be due here soon. No doubt you're working on just that for release soon: a revised and precise Fund qualification structure, with the Open's involvement clearly defined, along with statements addressing the lowering of barriers to the monetarily challenged via brokerage offerings and tool expansion.

"Nice to be the boys with the best toys." The Q has an enviable position.

Our vision has evolved as we have evolved as a company.

We are, and have always been, laser-focused on what we want to achieve. The fact that we don't know all the details of how we're going to get there doesn't mean that our vision is "cloudy," it means that we are doing something new, something that no one else has done before, and we are not so arrogant as to believe that we know in advance the exact path we will take to get there.

We have, over our entire history, been as open and transparent as possible about what we know and what we don't. We've answered everyone's questions to the best of our ability. When we don't know an answer, we say so, as illustrated by Josh's comment above just an hour ago.

Our story is clear.

I must apologize; every other Saturday (well, without the markets on, it felt like Saturday) I get up and think to myself, "How can I ruffle Jonathan's feathers?" It's just one of my habits at this point in time, like sweet rich coffee and a banana.

I would be interested in how the 7 factors were chosen and why they have equal weights. It seems to me that there might be a market for various different types of algorithms, and therefore for different categories in the competition. For example, some investors have a higher risk profile and would therefore be happy to accept more volatility if it ultimately meant higher returns. Am I completely misunderstanding the ultimate business model?

The crowd-funded hedge fund seems like a good idea on paper, but I'm not sure how it will be executed. Why not turn Quantopian into "THE" marketplace for advanced algos, where anyone with an IB or E*Trade account can sign up, look at a list of algos (approved by Quantopian) developed by the community and classified by risk tolerance, objective, etc., and link their account to it.

For example, @Market Tech has an algo that I would like to trade with my own account; he could charge each user $50 a month for access to the algo's signal. Quantopian in turn could take a % of that fee for themselves. The owner of the algo makes money, Quantopian makes money, and I believe total AUM would be greater. This model is being used by Collective2. Just my 2 cents....

Cheers

If the intention of the new consistency score is to discourage curve-fitted, "perfect", gamed backtest returns, as explained in the original post, why should the consistency score also punish algos whose real-world paper-trading performance is better than their backtest performance? Why not just reduce the weight the backtest score carries in the final score? What end does it serve to penalize algos whose actual trading performance exceeds their backtest performance?

Mike T
If your trading performance over a short 30-day period is radically different from your 2-year backtest, it's a strong indication that your algorithm is showing anomalous returns that aren't representative of its long-term potential. Since long-term potential is what's important, it doesn't make sense to take one month's out-of-the-norm performance and multiply it by 12 to get the algorithm's long-term performance.

Kevin: I understand and agree with what you're saying about using 30 days of trading to judge long-term potential, but along those same lines, backtesting results (especially just two-year ones) are not representative of the future long-term potential of an algorithm either. The 2013-2015 market is completely different from the 2010-2012 market, and I'm sure the 2015-2017 market will be completely different as well. The current month's "out of the norm" performance might actually be the new norm going forward.

In my own personal experience, I've never seen backtesting results translate that effectively to live trading. So many strategies and systems have backtests that look amazing, yet when they're actually turned on and traded live in the current market, it's very rare that any of them match their backtesting performance. The fact that none of the algos in the competition currently has both a top-20 paper trading score and a top-20 backtesting score somewhat speaks to that. Most of the top algos have either great backtests with mediocre live trading results or mediocre backtests with great live trading results. Just my 2 cents.

There are many ways to game this competition. These manipulations would not create successful long-term automated algorithms. The biggest problem, in my opinion, is the short-time frames being used. Consider this method:

1) Use a stock scanner to find stocks with a low beta
2) Write a simple algorithm to buy/sell based on technical indicators (MACD/RSI/etc)
3) Select stocks with the best backtest results, tweaking the algorithm to achieve the best entry and exits
4) Manually review each stock and look for potential technical analysis formations (descending triangle, head-and-shoulders, etc)
5) Submit the algorithm when you expect a break-out to occur
6) Re-submit the algorithm each day a break-out doesn't occur, using new stocks with potential formations
7) When a breakout happens, you will have a high-scoring algorithm with amazing backtest and paper trading results.

The problem is that steps 4-6 are using information that is NOT part of the algorithm. This bias will reward users who can guess a short-term move based on a manual review of the chart.

I think the most important fix to judging algorithms is to extend the time that they paper trade. This could be either a hard requirement (i.e., "algos must paper trade for 6 months before entering") or perhaps some method to reward longer paper trading.

Quantopian's investors may not have this much patience, but I think it is necessary to identify high-quality algos that would be successful long-term. Quantopian users may not like this change, but I believe the reward of potentially being a hedge fund manager is worth waiting 6 months for.