Is the contest being gamed?

Can't help but notice one Charles Brown topping the leaderboard with three algorithms. Curiously, all three have exactly, and I mean exactly, the same backtest stats: 87.23% annual returns, 8.576% volatility, 9.943 Sharpe... you get the idea: they clearly had to be the exact same algorithm for the entire two-year backtest. Then they go live and suddenly all three start to show wildly diverging performance, one with a 552.4% annual return, the second with an almost exactly opposite -567.6% annual return.
If I were going to game the system and, like Charles, didn't care how obvious it was that I was doing so, I'd overfit a backtest strategy to maximize all the criteria being graded and enter just a few days before the contest started. That would ensure I maximized at least 50% of my score, since the first contest will have only 30 days of paper trading and will therefore use 30 days of backtest. The contest developers assumed that such a strategy wouldn't work because an overfit algorithm would likely underperform dramatically in paper trading. Charles is brighter than that, though, so he codes his algorithm to change into a completely different algorithm the day the contest starts. This can easily be done with hard-coded dates, or even more subtly by using the CSV upload feature to dynamically change the algorithm into anything at any time. I'm guessing Charlie wasn't one for subtlety and just hard-coded it, because it then appears that he adopts the strategy of moral hazard familiar to hedge fund managers with a down portfolio the world over. He codes in two opposite strategies with maximum volatility. One of them will tank horribly; the other will be a big winner, by definition. Since it's not his money on the line and he gets three algorithms, he doesn't care about the loser and can cherry-pick the winner.
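To make the mechanism concrete, here is a minimal sketch of the kind of hard-coded switch being described, in Quantopian-style Python. The date, the placeholder strategies, and the leverage figure are all illustrative assumptions; only symbol(), get_datetime(), and order_target_percent() are actual platform built-ins.

    import datetime

    def initialize(context):
        # Hypothetical contest start date: before it, run the curve-fit
        # strategy that produced the perfect backtest; after it, run
        # something completely different.
        context.switch_date = datetime.date(2015, 2, 2)
        context.spy = symbol('SPY')

    def handle_data(context, data):
        if get_datetime().date() < context.switch_date:
            overfit_backtest_strategy(context, data)
        else:
            live_contest_strategy(context, data)

    def overfit_backtest_strategy(context, data):
        # ...whatever over-optimized logic maximized the graded metrics...
        pass

    def live_contest_strategy(context, data):
        # ...a completely different, maximum-volatility strategy...
        order_target_percent(context.spy, 3.0)  # e.g. 3x leveraged long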
The numbers on Mr. Brown's leaderboard entries make it obvious that he's doing this. The more disturbing idea is that Charlie is simply not very bright and made this so obvious that there is no other explanation for what he is doing. A more subtle gamer would have put together three different overfit scenarios so the backtests didn't all show exactly the same numbers. He wouldn't choose super-high-volatility strategies with almost exactly opposite results for his A and B tests in paper trading. He would use approximately the same stocks in the backtest as in the paper trading, maybe even taper the backtest strategy over time in paper trading so it wasn't so obvious that he coded a strategy change. Any number of similar subtleties would make the gaming impossible to detect while letting through an algorithm that was essentially random when it came to trading live money.
In the end, as long as algorithms are allowed to use backtesting as part of their results and no referee is allowed to see the algorithms, this contest is subject to undetectable gaming. We would think Quantopian would want the same algorithm to run in paper trading as ran in the backtest, but at this point there is no way for them to verify that, and every incentive for contestants to manipulate their code so this isn't the case. At the very least, Quantopian should have a contest where algorithms with 60+ days of paper trading compete only against other paper-traded algorithms. We can all thank Mr. Brown for showing us how easy it is to game the contest and for being so obvious in how he did so. It's up to Quantopian to restore faith in the contest.


I may have a dog in this fight, but I think there are a number of solutions for this:

Make the backtest a threshold instead of a weighted component: i.e., take the highest 40 Sharpes (or some other risk ratio), let those compete in paper trading, and limit entries to one per entrant. I do not think you would want to allow one person to run opposite strategies and win. I am not sure why this should be in #5: https://www.quantopian.com/leaderboard/54cd7a79db7a073288000459

Alternatively, Quantopian could always ask for a buy-in that serves as first loss on the algorithm and actually live-trade with $10k.

Thanks for your posting, glad you are interested in the contest. I think some patience is called for while we see how the contest plays out. We included the paper trading component in the judging, and that paper trading component has only begun to affect the scores. I'm not one to jump to conclusions, especially conclusions based on a single day of trading data.

Remember that high volatility is punished by the scoring system. Also remember that the weight of the live trading score increases every day, and there are 19 more trading days before we declare a winner. It's pretty easy for a strategy like the one you describe to be in first place on the first day, but it's another thing entirely to hold onto that. I wrote about this in some more detail last week.
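(Illustration only: assuming the live weight ramped linearly to an even split by the end of the 30-day paper period, which is an assumed ramp and not the exact production formula, the blend would look something like this:)

    def blended_score(backtest_score, live_score, live_days, total_days=30):
        # Live weight grows linearly from 0 to 0.5 over the paper
        # trading month; the backtest keeps the remainder.
        w_live = 0.5 * min(live_days, total_days) / float(total_days)
        return (1.0 - w_live) * backtest_score + w_live * live_score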

What we hope will happen is that the cream will rise to the top. If, as time progresses, we find that the contest structure isn't working well, we'll change it. As we've said before, we will adjust and improve the contest iteratively. We're committed to doing at least 6 of them. We will learn and iterate as we go along.

One of the ideas that we considered but didn't include in the scoring was a consistency score. If the paper-test and backtest scores are particularly divergent, then the total score gets penalized. If we're unhappy with the current scoring system, that will be one of the ideas that we revisit for the next version.

All that said, thanks for your interest. Candidly, I'm delighted that people are so excited about the contest! We're working to make it the best experience that we can - and that includes it being a fair contest.


OK, you seem unimpressed with that clear demonstration of gaming, how about this one.

If you download the leaderboard data you will see that the following users all have identical stats, down to six decimal places, on one of their backtest results:
Rolando schneiderman
Kohei Ozaki
Dmitri Slepov
Alain Hanover
Emil Tarazi
Tong Wu
Sahil Sundri
Sahil Sundri
long chen
Jingjing Guo
minglu xu
Anoop Hallur
stanley zhuang

They all also have paper trading results that diverge greatly from one another. So here we have someone even more brazen than Mr. Brown, who not only overfits a completely different algorithm for the backtest than they intend to use in the paper trading, but also signs up under 13 different user names so they can try out a whole bunch of strategies. They are a little less greedy, and perhaps understand the algorithm better, as their backtest returns are only 19.96% while taking advantage of low volatility and drawdown. This, combined with 29 different bites at the apple, means they can try a bunch of similar low-volatility strategies, and chances are at least one will work out over 20 trading days. Your wait-and-see solution won't work in this case. I believe there is a rule that states: "4. Each Participant may have only one Quantopian account. If the Participant submits entries from more than one account, all entries will be disqualified." So, are you going to disqualify them?

It's easier for Quantopian to ignore this and hope that none of these gamers ends up winning. However, for participants to put in the effort of entering, they have to have faith that you'll take gaming seriously and do your best to stop it. Both of these examples are as clear-cut as possible, and while you may be "hoping the cream rises to the top," in general hope is not a strategy. If you don't care enough to take action on these, then you clearly won't even come close to stopping the slightly more clever gamers who at least bothered to tweak their various backtest algorithms and be a little more subtle. By ignoring these clear-cut examples you've telegraphed that you'll tolerate gaming, which means that the only way to win is to game, which means that almost everyone will now game in order to compete. Probably not what you want, especially combined with your better algorithm writers going away because they know they can't compete with gamed results.

Of course, I could be wrong and there is a reasonable explanation for everything I've posted; I just haven't seen it yet.

While I don't doubt there is some gaming going on, and if I were going to do so, I'd probably do it like you described, the more plausible explanation is that a bunch of people cloned the best-looking algos on the boards, tweaked some parameters that don't matter (or didn't), and submitted those to the contest.

At the end of the day, the contest is just a way for Quantopian to draw out lurkers, gather more data on their live execution back-end scalability, generate PR and buzz, assist in seeding live track records for fund inclusion.... If they limit their losses to 10k/month worst case, that's less than the cost of a full-time developer hire, and so pretty good on a cost-benefit basis. Long story short, I think it's a mistake to take it all too seriously. If anyone is seriously losing out on the profit of their algo solely for lack of capital, they'd do well to actually try to raise the capital themselves and make a go of it.

For the most part, I think the contest winner is going to be a random selection from among the top decile of actually viable algorithms, depending upon the risk factors to which they are exposed and the behavior of the market during the month of each contest. I think the odds of a dominant strategy surfacing are pretty slim. In fact, I think the odds of a strategy that does well enough in the contest month continuing to do well during its harvest months are also quite slim; I'd wager that the algos that win the contest month are ones highly leveraged and exposed to something quite infrequent which happened to occur that month, and therefore particularly likely to underperform the following months.

I'm looking forward to seeing how it all unfolds!

I'm going to let Dan respond in more detail, but there is one comment I wanted to make quickly in response to Simon's -- note that algorithms that win the contest will get to trade with our capital for six "harvest months," not for one, which Simon's posting above seems to imply. That is, unless their net assets drop below $90k during that six months, at which point they stop trading.


Ah thanks, I had missed that part. So, potentially, Quantopian might have 600k of capital committed in a few months?

well if there's a down and out at 90k, they can cross margin

Do you have a reference to how to set that up at IB? I might want that for my own sub-accounts.

First off - I completely agree that it's important, even necessary, that participants have faith in the contest. We're taking this discussion very seriously, as we have with the many others that have happened both publicly here and in private email conversations. I hope that my responses are helpful in convincing you (and other readers) that we are building a good, fair contest.

There's a very reasonable explanation for what you're seeing that isn't a nefarious one. Imagine that you're relatively new to algorithmic investing. You come to Quantopian, you read some of the site, and you click on your "My Algorithms" page. You find there are a couple of sample algorithms all ready for you. You click into one, and there is this big blue "Enter Contest" button. You press it, and you agree to the terms of the competition.

What's the result of that? A dozen entries with identical code and identical backtests. So why are the paper trading scores different? Because they were all entered on different days! One came in on Jan 17, the most recent on the morning of Feb 2. Different start dates mean different market conditions and different paper trading scores. With several hundred new registrants per week, this is a pretty common flow.

Furthermore, I can look at the list of people who did those entries. One of them was in the office last week for our hackathon; a few others pop up in LinkedIn searches. I can look at the referring URLs where they registered from. I can look at their email addresses. I can't prove to you that each person on that list is a real person, but I can give you assurances that I've looked and found them to be credible.

I'll go another step further with you down the leaderboard list. You've probably noticed five different people, one entry each, with identical backtest scores. I've got access to a bit more info than you, and I can see that one of the entries was shared publicly here in the forums. The creator of the algorithm entered it into the contest and so did four people who cloned it. They all have different paper trading results, too - again because they started on different days.

As I'm writing, I had another thought. Part of the reason that we're here having this conversation is that Quantopian has given you a CSV download with 30-odd metrics about each contest entry. That's a demonstration of our commitment to openness. Without that shared data, you (and others) wouldn't have any insight into the mechanics of the contest. I look around at other financial contests, and most of them have their leaderboard behind a login or even a paywall. We not only are sharing the leaderboard, but we're showing you the raw quantitative metrics that drive it. I hope that helps persuade you that we're trying to do this as well as we can.

Simon, not trying to get off topic but look into portfolio margin https://www.interactivebrokers.com/en/?f=margin&p=pmar

Dan, Good points. Thank you for the update. It would be interesting to see trends from the csv files say at weekly intervals. Of course, I could just put the data in pandas myself :)

Dan, so there IS a burn-in period? The first month's strategies are in and sealed, and NO more may be submitted for that first contest. And then above you said

"and there are 19 more trading days before we declare a winner. "

So, you won't pick a winner until the 2nd of March for this first contest? Or does your statement apply to the 2nd tranche?

There definitely is a burn-in period, I thought the only objection was that it wasn't long enough to completely eradicate potential back-test contamination if some people submit their algorithm at the last possible moment.

I see that now. I wonder if others missed that memo too? Twenty days of feet-to-the-fire execution should prove out the balance of strategies. Enough for these first few go-rounds at least.

What a gas eh? This must be a first of its kind; this open, this fluid, this enticing. There have been fin-battles before, but not open to just any ol' body.

I have to agree with Linus that the Open is game-able, because of the way the backtest figures into the final ranking, and the ability to switch from a phoney algo to a real one. Frankly, it is discouraging, and hopefully can be fixed. Let's say I have a legit algo that runs well for three months in the contest, and I end up competing with someone who submits at the last minute a phoney backtest, and only runs live for a month. If our legit algos are equally good, he's gonna win, right? My two months of live performance become irrelevant, because he's conjured up a backtest that games the two months I was running live. Or am I missing something?

Thanks for the clarifications, Dan. I guess Linus was just zealously responding to your request in the FAQ:
What should I do if I find a way to "game" the contest?
We'd appreciate it if you tell us so we can fix the problem!

I am glad that this community has a bunch of passionate quants and that we are having an open discussion. I would wager that there is more "gaming" happening in the real world stock exchanges than here :)
The contest seems to have everything, drama, passion, mystery, suspense, leaderboard swings, and a pretty volatile market. I am hoping for a nail-biting finish to top it all :)
Cheers
Ajay

20 days of real time, gamed? I'm not sure how. Switching out code after 20 days? DateTime constraints embedded? One's backtest won't hold up after zero real-time trading, not for 20 days. And then what? One's strategy prolly sucks. You lose, get kicked out. All for naught? If you're lucky and you pick a winner for a few weeks, that can't last, can it? I can see this burn-in being a pretty good vetting.

I think there is still time for Quantopian to adjust the scoring. I have a few suggestions:

  1. Start with everyone ranked at 0 and use only live trading for the score until the last day, then apply the backtesting scores. (Perform eliminating checks as normal.)
  2. Use backtesting to earn an unweighted entry score: say the maximum attainable from backtesting is 100 points (measured relative to a benchmark such as SPY), with points deducted for metrics like high drawdown or volatility. That could then be combined with a weighted live trading score, normalized against SPY to make the split 50/50: a backtest that meets all checks gets 100 points, and an algo that just bought SPY would also get 100, reaching 200 points. (A rough sketch follows below.)
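A rough sketch of that scheme; all thresholds and penalty slopes below are made-up illustrations, not proposals of exact values:

    def entry_points(bt):
        # Backtest earns a capped, unweighted entry score of at most
        # 100 points, with deductions for risk metrics.
        points = 100.0
        points -= max(0.0, bt['max_drawdown'] - 0.10) * 200.0  # drawdown past 10%
        points -= max(0.0, bt['volatility'] - 0.15) * 100.0    # volatility past 15%
        return max(points, 0.0)

    def total_score(bt, live_return, spy_return):
        # Live score normalized against SPY: just buying SPY scores 100.
        live_points = 100.0 * (1.0 + live_return) / (1.0 + spy_return)
        return entry_points(bt) + live_points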

I think all the worries here might turn out to be an over-reaction, but the skewed ranking we had could have been avoided.
I am also certain the ranking will change significantly as the days pass.

What should I do if I find a way to "game" the contest?
We'd appreciate it if you tell us so we can fix the problem!

What we need to do is start publishing ways of gaming the contest, by posting algos. Then, everyone could just use those for their backtests, which would make the current scoring method pointless, and a new approach would have to be worked out. If we all become Charlie Browns (assuming that he's cheating the backtest portion...no way to really tell), then it would force a re-thinking. I'm being a bit facetious here, but I'm concerned that this is the path we're on. Once a few standard backtest gaming techniques are released into the wild, it'll be awfully tempting for lots of folks to apply them. And since the competition is open to everyone on the planet, it's bound to go in the wrong direction in a big way.

Would it be an issue if the competition were judged on live trading results only?

If the competition requires a minimum amount of time to differentiate between algos, could the competition length be extended?

Could there be more than just monthly competitions? Could a competition run over a quarter or half a year be set up to incentivise algorithms with longer time horizons?

If Quantopian is trying to build a fund, is it advantageous to have different sorts of strategies within the mix?

Chris,

I totally agree with you. There should be ongoing monthly, quarterly, semi-annual, and annual competitions.

@Grant K. Are you proposing publishing zero-day algo hacks! I am aghast! [grin]

Isn't that what the net's all about -- free and easy information?

I have a quick suggestion on how to make this contest fair - the winner's algorithm gets checked for cheating. If everything checks out, then everything checks out and the contest has a fair and honest winner. I really do think this is necessary, because in some circumstances, results seem almost too good to be true. People have been hauled in front of the SEC on suspicion of fraud for results not half as good as the ones we're seeing here.

Michael,

I totally agree with you as the results are too good to be true. I have made similar suggestions on another post:

https://www.quantopian.com/posts/quantopian-open

Below is the comment:

I totally agree with you regarding the members' privacy as many members are not willing to share their algos. This is fair.

It being a competition, there must be some transparency. I think that this is a lesson learned, and it should be revised for future competitions. As a community, we are expected to contribute and learn from each other, and the competition is a great way to do it. As a community, we can't grow and improve if results are not shared. If members are not willing to share, then they can opt out of the competition. If there is no openness in the competition, many members of the community will question whether the participants and their results are for real.

I would like to offer an alternative solution. For every even month, i.e. February, April, June, and so on, all competition trading results are hidden. For every odd month, January, March, May, and so on, all trading results are open to all members. This way for members who are extremely private, they can enter the competition during the even months. It is a reasonable request.

Here is a sample gaming algo. It first defines a cheating date range, during which a phony algo will be used to hack the backtest for an astronomical backtest score. After the cheating date range, a real algo comes into play.

In this sample, the phony algo works like this:
1. Fetch Yahoo data for SPY.
2. At each day's market open, if SPY's close price (obtained from the Yahoo data as a look-ahead cheat) is higher than the open price, long SPY. If SPY's close price is lower than the open price, short SPY. If open and close are equal, do nothing.
3. At each day's market close, close the SPY position, whether it's long or short.

The real algo in this sample is simply long SPY.

In the attached backtest, I specify 1/15/2015 as the end of the cheating date range, so you can see that before this date the algo performs exceptionally well, but after this date it just tracks SPY.
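The attached backtest itself isn't preserved here, but the described algo would look roughly like this in Quantopian-style Python. The Yahoo URL, the column names, and the exact fetch_csv() field-access pattern are assumptions from memory, and end-of-day flattening is omitted for brevity:

    import datetime

    def initialize(context):
        context.spy = symbol('SPY')
        context.cheat_end = datetime.date(2015, 1, 15)
        # Daily OHLC rows for SPY; during a backtest each row already
        # contains that day's close, which is the look-ahead.
        fetch_csv('http://ichart.finance.yahoo.com/table.csv?s=SPY',
                  date_column='Date', symbol='spy_daily')

    def handle_data(context, data):
        if get_datetime().date() <= context.cheat_end:
            # Phony algo: peek at today's close via the fetched data.
            row = data['spy_daily']
            if row.Close > row.Open:
                order_target_percent(context.spy, 1.0)   # will close up: long
            elif row.Close < row.Open:
                order_target_percent(context.spy, -1.0)  # will close down: short
            # (position is flattened at the close; scheduling omitted)
        else:
            # Real algo: simply long SPY.
            order_target_percent(context.spy, 1.0)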

If I were to use this algo to game the contest, first of all I would wait until right before the submission deadline, change the end date of the cheating range to fall just before the deadline, and then submit; my backtest part is guaranteed an ultra-high score. Second, I would submit two of them. The phony parts would be identical, so the astronomically high backtest score is kept. The difference is in the real-algo part: in one I would long SPY, and in the other I would short SPY. As a result, during the paper trading month, SPY may go up or down, and as long as it doesn't go flat, one of my two algos will stand out with a modestly good score. (Sounds like I need to come up with a third real algo to win in case SPY goes flat.) In the end, I have an astronomically high backtest score and a modestly good paper trading score, so my total score has a good chance of winning the contest (if other people are not cheating like me).

Since I've posted this, I'm sure you can improve this sample gaming algo to cheat even more, so now my sample algo has no chance to win. I need to come up with another super-cheating algo and come back to beat you guys :p

Have fun and let's see how Quantopian deals with this!

Bravo WJ D.

I can say with strong confidence that Sharpe-10 strategies are more or less unattainable with Quantopian's infrastructure. Sharpes like that are rare even in high-frequency trading, where the time scale is reduced and convergence happens much more quickly. Furthermore, even if somehow you were able to achieve a Sharpe of 10 on 1-minute-bar last-trade data, I have no idea why you'd enter a contest rather than just run it for yourself.

There's an easy way to avoid cheating. Only take into account algos trading with real money and a minimum account balance.

I realize the look-ahead cheating idea might extend to the paper trading part of the contest, depending on how the paper trading system is implemented. According to Quantopian's documentation, paper trading uses 15-minute-delayed data. Some people can get real-time data from their broker or other sources (unfortunately I'm not one of them, so I can't provide test code here), so they could use fetch_csv() to get the real-time data, compare it with Quantopian's 15-minute-delayed "current" price, position accordingly, and close the position 15 minutes later. This loophole doesn't exist if Quantopian fetches external data 15 minutes before executing handle_data() in paper trading; and if the loophole does exist, it can be easily fixed by pre-fetching external data 15 minutes before handle_data().

The best solution is the one Michael S mentioned, IMO - the algo owner puts up first-loss capital and Quantopian acts as a provider of leverage.

WJ D, it would actually be hilarious if someone went that far to game it. But it's not that complicated, depending on what libs/functions are available on the platform. You could simply scrape Yahoo's RT numbers or access one of Arca's public servers and get an "RT" book snapshot.

Hmm, I thought Yahoo's "real-time" data was also 15- to 20-minute delayed. I have no idea about Arca's public servers, though.

http://datasvr.tradearca.com/arcadataserver/ArcaBookData.php?Symbol=SPY

Yahoo displays two feeds: one is delayed and one is "RT", though probably a snapshot.

This is a bit off-topic, but Google Finance is real-time.

Thanks, ax tx. This gets more interesting. If Quantopian rules this method legit, I would actually go that far, code it up, and submit it to the contest. Guess who will laugh then :p

For the record, look-ahead gaming won't work for the live trading portion of the contest. The fetch_csv() function can only be used in initialize(), and initialize() is called before the market opens.

(As you might imagine, we at Quantopian are following this discussion closely, and we will have more to say about it, but in the meantime I felt it wise to clarify this particular technical point.)

Ahh, no magic with the fetch_csv()! So sad :p

At least now the paper trading part is trustable.

It looks like it's not too hard for the user to change the algorithm at any time, including the day after the contest ends and the winning system goes live. Quantopian is really offering the winner in-the-money call options on $100,000 worth of stock for the 6 months from now, with a knock-out 10% below the strike price. The rub is that the options aren't priced based on Black-Scholes or implied volatility; you simply divide $100,000 by the price of the stock to determine how many options you get. For example, if I chose GPRO, with an implied volatility of 55% for the Jul 17th options and a current price of $52, I would get about 1,900 shares. To buy call options on 1,900 shares of GPRO through Jul 17th, given the current at-the-money call ask of $8.80, would cost me $16,720, so Quantopian is giving me something that is worth roughly $16,000. If I decide to be safe and choose IBM instead, with around 20% implied volatility and a price of $156, I can only "purchase" 640 shares, which at a current call ask of $8.65 is worth only $5,536. Since something the market values at $16,720 is worth more than something the market values at only $5,536, the most rational financial choice for anyone who wins is to implement the most volatile strategy they can from the day they start trading real money, since that gives the highest expected value. To do anything else would be suboptimal for the user. Obviously the knock-out impacts the pricing somewhat, but it still leads the risk-neutral investor/player to the highest possible volatility if they wish to maximize the value of trading $100,000 of Quantopian's money for 6 months.
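The arithmetic behind those two examples, using the quoted prices and call asks:

    # Share counts and option values implied by the quoted figures.
    capital = 100000.0

    # GPRO: $52/share, Jul 17 at-the-money call ask $8.80
    gpro_shares = capital / 52.0     # ~1923; rounded to 1900 above
    gpro_value = 1900 * 8.80         # = $16,720

    # IBM: $156/share, call ask $8.65
    ibm_shares = capital / 156.0     # ~641; rounded to 640 above
    ibm_value = 640 * 8.65           # = $5,536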
I want this to succeed, as most of us do, which is why I'm posting this critique. It's going to be very hard for that to happen if the profit-maximizing strategy is so opposed to what is good for the contest and for Quantopian. I'm hoping this gets figured out, because we're all better off if Quantopian succeeds. I think Michael S has the best solution so far: requiring a first-loss buy-in for the winning algorithm. I don't know if it needs to be $10K; I think even $500 would be enough to dissuade 95% of those who are just trying to game the contest. I'd add that Quantopian can simply move down the leaderboard until they hit an algorithm whose writer is willing to put up the money, so they don't have to worry about there being no winner one month. The other option Quantopian can explore is hiring a third party to check the algorithms. There are a number of accounting firms that routinely audit hedge fund results so that the LPs know they're not dealing with another Madoff. Their continued business depends on their ability to convince the funds that they won't release any trade secrets, so I know I would be fine with someone like that seeing my algorithms, especially compared to the alternative of a contest that doesn't work.

The prize could instead be "sufficient funding/leverage such that the winning algorithm's daily Conditional VaR is $5,000" or something.
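A sketch of how that CVaR-based sizing could work, computed from an algo's paper-traded daily returns (the 95% level and the use of plain historical CVaR are my assumptions):

    import numpy as np

    def cvar_funding(daily_returns, target_cvar=5000.0, alpha=0.95):
        # Historical CVaR: mean loss over the worst (1 - alpha) of days.
        r = np.asarray(daily_returns)
        cutoff = np.percentile(r, 100.0 * (1.0 - alpha))
        tail_loss = -r[r <= cutoff].mean()   # loss as a fraction of capital
        # Capital such that the daily CVaR equals the target dollar amount.
        return target_cvar / tail_loss

    # e.g. if the worst 5% of days lose 2% on average,
    # funding = 5000 / 0.02 = $250,000.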

EDIT: I like the prize winner ante idea though, much simpler!

So this is sounding like either:

1) No use of fetch_csv, whatsoever. or
2) No incorporation of backtest results for judgement.

Or am I missing something?

No use of fetch_csv still leaves open massive optimization and perhaps other shenanigans.
No backtest results looks like the only way.
If one uses this new criterion, what does the leaderboard look like then?

Um, a clarification question: didn't you guys weight the various metrics differently? Or did you weight, say, the Calmar ratio evenly with the Sharpe or annualized return? Because they look like they're all evenly weighted.

I believe that we are asking for a bit more transparency. It is a win-win for Q and the community.

Woo hoo, #7. Anyway, after the contest ends the admins can weed out the cheaters, but that would involve having to see the algorithms.

Excluding fetch_csv won't help, because the signals can just be pasted into the algo (i.e. long on this day, short on the next, etc.).

Yes, Grant, that will work for gaming backtest results, but we're talking about gaming live trading results.

No use of fetch_csv still leaves open massive optimization and perhaps other shenanigans.

Hacker's gonna hack.

Quantopian, the wild wild west of the trading world.

Jonathan,

Sorry, maybe I didn't follow the thread quite correctly, but yes, I understand that gaming the live trading would require a loophole in your engine (e.g., some way to apply a custom slippage model without being detected). As I see it, there would be no way to have look-ahead bias in live trading of the kind that, presumably, some contestants have applied liberally to juice up their backtests.

Grant

First, mea culpa on my second post. The explanation of why they all match makes sense.
That doesn't change my disappointment that Charlie Brown is still in the contest, even though no explanation has been offered for what appears to be clear gaming of the system. Actually, the evidence is building that he is pulling a "game the backtest, then do opposite high volatility" as outlined in post 1, since two of his strategies still have almost exactly opposite performance numbers even after 3 days, after posting exactly the same backtest numbers.
And now let's look at the next vulnerability of the system to gaming. It turns out even paper trading isn't safe the way it's currently set up, and Dan revealed how, probably without meaning to. Why? Because you can start paper trading, then enter and remove as many algorithms as you want from the contest before it goes live. You simply fire up two high-volatility but opposite strategies today, let them trade for a few hours or a few days, and see how it turns out. One will kill it; the other will do very poorly. You terminate the poor performer, then immediately start it up again as a new algorithm. Rinse and repeat until it too has a good pop. Better yet, start up a set of half-and-half opposite high-volatility strategies every hour of every day and enter only the two highest-performing ones. Eventually you'll have two opposite high-vol strategies with a nearly perfect 30-day paper trading record that is worthless in predicting real results. Now there's no need to even manipulate the backtest, which we now see is trivial to game anyway.
How to prevent it? First, make it clear that gaming isn't tolerated, which means taking decisive action against clear gaming like Charles's. Second, you probably can't allow more than one algorithm per person. You also probably need to monitor the number of paper trading restarts and shut down anyone with an abnormal number. The bad part is that everyone can use this strategy to a small extent to game the system. It's just natural, if you start your paper trade a few days before the contest starts and your strategy has a bad day, to terminate it and try again the next day. A small edge, maybe, but in the end the winning algorithm is probably only going to have a small edge over second place, and here second place doesn't even get a set of steak knives.

Hi everyone,

I just wanted to let you know that we are working on this. My goal is for us to respond in two steps: first, updating our judging process to make the types of gaming exploits discussed here more difficult; and second, defining more closely the criteria for disqualification for gaming. I think it is important that we be deliberate and clear, so I'm willing to take a few days to do this work. The discussion here has been extremely helpful. The feedback here is the latest proof that transparency propels us forward. Every time we disclose data, the community responds constructively. I'm very grateful for your advice and the very constructive approach taken here. More soon.

thanks,
fawce


I understand that it's less fun, but maybe allowing only ONE algo per month will avoid gaming.

What we are missing is that a gamed algorithm won't work in live trading, and even if a gamed algo wins (which I doubt it will), the winner won't get anything, since it won't make any money. Of course everyone wants to win, but I have the feeling that the goal of the competition is to get more engagement and find good algorithms.

Not just to find one; like they said, they want a robust, diversified fund. $100k with $10k of risk is not that much given the potential of this competition. I am sure that the metrics will tell gaming algos from real ones.

So another suggestion: instead of one winner, take the top 10 to the next stage, OR pick the best uncorrelated 10, or something like that. Increase the $100k to $500k and distribute it to 10 algos with less leverage: 10 algos with a $100k limit at max 1.5x leverage should be about the same as $500k at 3x leverage. That way Quantopian should be able to build a portfolio much faster than by picking one algo each month.

+1 for only allowing ONE entry per month. With 6 contests currently planned, that would, I think, be plenty of entries to try out a few (legitimate) ideas.

+1: only lodge your best algo, not 3 versions of the same one that perform differently depending on market conditions

+1: top 10 idea of Lucas Silva

Disclaimer : I have been in top 10 some days (or hours) but I'm not anymore.

Given the actual rules, what I would do to game the contest would be a look-ahead backtest like the one given by WJ D.

Then I would look at the most volatile asset in the universe and enter two algos in the contest: a hard rule for one algo to buy the asset at leverage 3 on the first day, and for the other algo to sell the same asset, each with a hard rule to close the position at the end of the day and do nothing else until the end of the contest.

After the first day I keep only the algo that made money, and if the asset was not so volatile that day, I can relaunch both algos the day after, given that I leave a week before the end of the contest's submission window to get the best possible result. I can even keep the best result so far and use another two slots to try to improve the result for free. If the asset is volatile enough, I can hope to make around 10% in one day with the leverage, which reports as a 100% annual return at the end of the month. Not bad, with almost no volatility, great stability and Sharpe ratio, and an infinite Calmar ratio. With the gamed backtest, which still counts for half of the final result, I have a great probability of winning the contest.

The final part is that from the first day of real trading I switch, with hard-coded dates, to a real strategy, the more volatile the better. It could be any strategy, but let's say for the example that I just buy SPY at leverage 3 on the first day and do nothing else. My goal here would be to exercise the knock-out call given for free by Quantopian. If I hit the $90k barrier, I hope it happens during the first month so that I can enter the contest again as soon as possible.

Otherwise, if I am lucky, my leverage-3 high-volatility strategy can make 60% in 6 months, so $60,000 for a basic, no-brain strategy. And I can enter the contest again the month after with the same approach. Sure, it won't be the best strategy for Quantopian's fund, but the outcome is pretty good for the gamer.

Another way someone could cheat would be to make multiple accounts here and just register hundreds of algos.

Thanks Fawce,

For the Managers Program (https://www.quantopian.com/posts/quantopian-managers-program-algorithm-selection-and-compensation), I see that an "Interview and/or due-diligence questionnaire" would be required, which I figure would stop short of sharing code, but would require revealing significant strategy details (with the option of sharing code, I suppose). Candidate Managers would need to make the case that their trading approach is legitimate and warrants funding. I'm imagining that at a minimum, you'd fly folks into Boston for a day or more, for a face-to-face along with asking them to give a slide presentation pitch, before giving them suitcases of cash to invest. You might have lunch/dinner, get to know them a bit. The standard stuff prior to entering into a serious business relationship.

I'm wondering if, in comparison, the Open is just a bit too open and easy, and whether there needs to be a similar day of reckoning as an incentive for competitors not to waste energy on gaming. Maybe you could require a white paper submission for a set of the top algos on the leaderboard, followed by a phone interview for the selected winner? It seems like something along these lines would make it more difficult for gamers to win (although the leaderboard would still be riddled with spurious entries if the rules aren't changed).

Grant

Someone enlighten me: if you submitted an algo and it's paper trading, do you get the chance to modify and restart the algo during the paper trading period?

After the submission deadline, you can stop an algo, but you cannot restart it in that contest. For example, if you stopped an algo now, and resubmit it, you would lose the entry in the February contest, but the algo would still be in for March.

RE: private code.

I have built strategies for a dozen banks/hedge funds (in my prior job). They signed NDAs with us and we signed NDAs with them. They gave us proprietary knowledge of trading mechanisms so that we could build their strategies. This was never an issue and is a common practice. Information does get shared, but in a protected fashion.

As part of your due diligence I would think Quantopian would REQUIRE access to the algorithm's code. Either the author grants access under an NDA signed by the Q, or not. If you want to play you have to show your hand after you win. Eliminate all of this contention and just make that part of the stipulations for selection. This whole contest is outside Q's normal business practices. Nobody would expect you to maintain those for this usage pattern.

Market Tech is correct: with a black box you'll never know what's going on, which is fine so long as the algo writer's money is also at stake (insanity excepted), but for the competition, with no downside for the winner, I cannot think of any implementation for detecting a decisive 'strategy' change that could not also be gamed with foreknowledge.

I think you're on the right track, but remember that an NDA has to be enforceable to be worthwhile. If I'm a bright but poor college student, I have no way of proving Q is misappropriating my code and wouldn't have the resources to sue for damages if I did find out. This is a much different story from the hedge fund that can and will recover from a developer who leaks info covered by an NDA. That's why I like the idea of a third-party accounting firm that does audits for a living; the reputational risk for a big publicly traded firm, and the lack of any reasonable motivation to violate confidentiality, make me trust them a whole lot more than an NDA with Q, much as I like and trust the Q guys!

@Kevin Q., Trust is a terrible thing to waste...

You're right. Profitable trading logic is probably the most tenuous asset in existence. But if your algorithm is so profitable, why share it at all? If you put your heart to it your algo can build a reputation and you can then acquire funding. It can be done. This whole experience demonstrates some of the most basic of human traits -- it's fascinating. Altruism, greed, curiosity, anticipation, trust...

I agree that, in theory, you should be able to get funding with a good algo that you trade yourself. I have almost every education/degree/networking/wealth benefit possible to help me do this, and I still think it would be a tough road if I tried to pursue it. If I were an absolute genius 22-year-old from a poor family in a small village in Mongolia, there is pretty much zero chance I could pull it off. In my experience there aren't many hedge or PE funds who will even consider hiring, let alone putting money behind, someone who didn't go to one of a handful of schools AND pay dues working 80-hour analyst/associate weeks as a 20-25 year old. All the while still somehow sincerely believing that finance is completely egalitarian and the best ideas/people will always rise to the top, but I digress. Q is doing a great job exploiting this blind spot, which I think is awesome and a great strategy. It's also why I lean toward an outside expert audit rather than skin in the game, which would eliminate the really poor but brilliant, who may be the biggest untapped talent market.

I can see several elephants in this room when I think about this whole "cheating" issue.
It just makes no sense: you'd need quite some skill to code an unsupervised system that is able to dynamically change while it cheats its way through 3 different challenges, all while pulling in perfect results.
If you have such skills, why would you choose to spend a lot more time cheating and risk losing the possible reward?

Since overfitting is a pretty common mistake almost everyone makes when getting started with algo trading, it sounds a lot more reasonable to me that our #1 isn't a pro but instead spent a lot of time backtesting and got lucky.

The ultra-high paper-trade return is a little strange, but I think the key is to look at the count of trades it made.
Since it basically got thrown into the cold water, it probably just put all its money into one trade, which happened to generate some nice returns (Amazon, I bet). This would explain the extreme Sharpe, stability, and 0 drawdown. And since we calculate an ANNUALIZED return from just one trade with a big return, we get some crazy value.
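That last point is easy to check; annualizing one short lucky streak produces absurd figures. With made-up numbers:

    # A single 6% gain over 3 trading days, compounded to a year
    # (252 trading days) the way annualized return is computed:
    r, days = 0.06, 3
    annualized = (1.0 + r) ** (252.0 / days) - 1.0
    print(annualized)   # ~132.6, i.e. a "13,000%+" annual return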

Sounds a lot better to me than some pretty good programmer and trader who doesn't care about the potential profit because he switched to the dark side and just wants to see the world burn.

I think some people also underestimate the kind of traders/coders such a contest and its reward attract.
Usually you won't see the guys who know how to code profitable algos; the less that is known, the better. But since this contest is free and offers a nice reward and platform, it's worth entering. So suddenly there are people with huge returns and everyone is going crazy.

Just wait for the backtest score to drop; the monster is still hiding :)

Anyway, I actually have no problem showing some code if it's needed, as long as I can be sure it won't somehow leak (NDA or something).

I have access to a Bloomberg terminal; hence, it will be interesting to compare the execution times and prices of the trading results.

I understand why Q wants a backtest period to factor into the results: the live trading time is way too short to draw any meaningful conclusion. Live trading is valuable because the results are unbiased, though the sample size isn't big enough. It is highly likely that using paper trading only would lead to a random trading system winning, which adds no value to Q. This is true for any short-running contest.

But the current way of incorporating backtested results simply ensures that the over-optimized systems start the contest with the highest scores. This is counterproductive. What would be better is to insist that the backtested results be compensated for data-mining bias. It seems Q is in a good position to enforce this, given the contest is on their platform.
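One published way to do that compensation (my suggestion, not anything Q has announced) is the probabilistic Sharpe ratio of Bailey and López de Prado, which discounts an observed Sharpe for short samples and non-normal returns:

    import numpy as np
    from scipy import stats

    def probabilistic_sharpe(returns, sr_benchmark=0.0):
        # Probability that the true Sharpe exceeds sr_benchmark, given
        # the observed per-period Sharpe, sample size, skew and kurtosis.
        r = np.asarray(returns)
        n = len(r)
        sr = r.mean() / r.std(ddof=1)
        skew = stats.skew(r)
        kurt = stats.kurtosis(r, fisher=False)  # non-excess kurtosis
        denom = np.sqrt(1.0 - skew * sr + (kurt - 1.0) / 4.0 * sr ** 2)
        z = (sr - sr_benchmark) * np.sqrt(n - 1.0) / denom
        return stats.norm.cdf(z)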

Is Quantopian allowed to participate in its own competition? Maybe they would enter their actual algorithms to check how they rank against the others?

I think they should be allowed to participate but not to get any reward ;-)

Regarding reviewing code, the problem is that Quantopian can't just grab somebody off the street, pay them minimum wage, and sit them down at a workstation. It'd need to be an expert. Even if this expert signs an NDA and is as honest as Abe Lincoln, he will learn from the code. Down the road, he will benefit from the learning, even if he technically adheres to the terms of the NDA. In fact, he may not even be aware that he is applying what he learned; it may just be an intuitive hunch that resulted from the prior review of code. Or he may just rationalize any misuse of information gleaned, "I'm not really applying the same technique, since I added my own improvements that make it unique." Once the information gets into the head of a subject matter expert, NDA or not, it's gonna get used.

Another consideration is that it is very easy for look-ahead bias to creep into a backtest, even if there is no intentional gaming. To compete, you get a two year backtest followed by a month of simulated live trading, and then 6 months of real-money trading for the winner. The timeframe is short, so it makes sense to pick stocks that performed well recently (e.g. over the past 6 months to a year). But of course this choice is biased for a backtest, so would it be considered gaming? Or rather good judgement in light of the contest rules and incentive?

On https://www.quantopian.com/posts/quantopian-open-leaderboard, Dan Dunn comments:

The good part about the paper trading component is that it's entirely
out of sample and can't be over-fitted. The bad part is that we don't
have the patience to wait years to determine a contest winner.

It would be interesting to hear more from Quantopian about the timeframe constraints, which seem to have driven the contest rules. Aside from a lack of patience, was there a business case for such a short paper trading period? Does the contest timing somehow relate to the availability of funds? Your anticipated launch of the Managers Program? In other words, how do you see the contest fitting into your business plans? What if you'd required a 1-year paper trading period, with a complete phase-out of backtest results from the scoring by the end of the year? Why would that have been a non-starter?

The rules state:

All former and current employees, interns and contractors of Quantopian; and their immediate family members; and their household members are prohibited from participating in the Competition.

If you work for nearly any company in the securities industry, your personal trading is severely restricted. I don't know for sure, but I'm pretty certain that the people who audit hedge fund results are prohibited by their compliance departments from trading in anything but mutual funds or an account controlled by a third party that they can't direct. I do know for sure that this is the case for investment management firms, and they are pretty strict about enforcing it; audit firms would be motivated to be even more careful. Obviously an auditor can eventually leave and do what they want. I'm willing to take that risk, knowing the difficulty that person would have getting enough funding to exploit my strategies from anyone in the industry, having just been entrusted with all those secrets; he'd be a lawsuit lightning rod even if he wasn't using others' algorithms.

-1 on allowing only one algorithm per person. Three is good, and it has no real relation to the gaming problems.

Lots of benefits and risks to balance here.

Benefits:
Limited paper trading duration gets algos up and running quickly, agile iterative software applied to trading.
Extended backtest duration, gives a general idea of future performance.
NDAs for code examination of winner(s), proper due diligence by any investor would require knowledge of trading mechanism.
Three strategies per user name, better chances of winning.
The fact that this experiment even exists -- gives the incentive for young quants to have a go, and it's great advertisement.
Hundreds of concurrent real time strategies tests the stability of the system as a whole.

Risks:
Limited paper trading duration contains little statistical significance for handling the coming volatility.
Extended backtest duration, massive gaming hole.
NDAs for code examination of winners(s), potential proprietary knowledge leak.
Three strategies per user name, more gaming holes. (Plus the concept that nothing stops users from having many logins.)
The experiment is high risk, potential loss of 10% of capital in 6 months (20% possible annual drawdown).
Hundreds of concurrent real time strategies is forcing additional infrastructure to be added/tested without the potential for income.

@Grant, You've won! The Q wants to use your algo to trade. One condition, they have to vet your code to make sure you're not violating Open policies. Are you gonna let them have a look (part of the agreement) or are you gonna keep it a secret and skip participating? Strategy viability wanes and waxes over time. A strat that works today probably won't work next year, but may again in two. Information leaks regardless of DefCon5 security measures. Again, if one is so convinced that one's algorithm is "The Holy Grail" of strategies -- why risk it here at all? Hacks occur everyday in the world. Who knows, maybe Q gets hacked this weekend and everyone's code is published by the North Korean Army. Or more likely, an insider. Life's a risk, roll the dice.

Hi,

I have an update on the contest judging process.

The contest is a preliminary round for selecting algorithms for the Quantopian hedge fund. For the hedge fund to invest in an algorithm, it requires a degree of confidence that the algorithm will be profitable. That confidence is developed by testing - backtesting, forward testing, and stress testing. That testing pattern only works if the same algorithm is being used in all of the testing areas.

Unfortunately, we had an entrant in the contest who didn't follow that paradigm. There was, effectively, one algorithm submitted for the backtest, and a second algorithm for the forward test. We were able to determine this without looking at the code. The data exhaust from the backtests and paper tests doesn't leave any room for doubt. We have disqualified that contestant.

We're also evaluating other methods to detect this kind of chameleon-algo entry in the future. We've been working on some testing and statistical methods to make the evaluation more reliable and quantitative. We will have more to share on that in the future.

We've read every suggestion made in the forums with great interest and appreciation. We are exploring and testing many of your ideas, and we will update you again as we conclude what further changes will be made.

thanks,
fawce

Hmm, that is kind of a brave claim: detecting what an algorithm is doing by looking at how it performs. Now, how can we be sure that others doing the same will also be detected?

Bear in mind they also have access to all the trade history, so presumably they are doing some of Thomas Wiecki's Bayesian modelling on the prior backtest trades vs the paper trades, or at least that's how I'd try to approach it!
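A simpler frequentist stand-in for that idea (my sketch, not Quantopian's actual method): test whether the live daily returns plausibly come from the same distribution as the backtest's.

    from scipy import stats

    def chameleon_flag(backtest_returns, live_returns, p_threshold=0.01):
        # Two-sample Kolmogorov-Smirnov test between backtest and live
        # daily returns; a tiny p-value is evidence that a different
        # algorithm is trading live than was backtested. Caveat: a
        # market regime change can also trip this, so it's evidence,
        # not proof.
        statistic, p_value = stats.ks_2samp(backtest_returns, live_returns)
        return p_value < p_threshold, p_value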

Hmm, looks like Charles Brown disappeared from the leaderboard...
Did anyone notice any other people disappeared?

@Market Tech,

Supposing I win (probably just jinxed myself), and Quantopian requires that I reveal the code, I'd ask for an additional $100K of capital, as compensation for their changing the rules mid-contest. It would be un-American otherwise, and it would make the whole thing even more sensational--a win-win!

Grant

@Grant K. Hear hear! And no doubt they would award it too (given your rep. and everything).

Yeah, and then the market would immediately tank, and I'd have nothing for my efforts!

Here's rooting for ya Grant!

@MarketTech -- DefCon "5"? Well then, no problem, hardly any security measures needed! (DefCon 5 is the lowest alert level, as Sheldon Cooper reminds us, in case you haven't seen the WarGames movie in a while... :)

Anyone find it humorous that Linus Van Pelt was knocking on Charlie Brown?

Great discussion, I'm sure the Q is appreciative of the pros/cons.

After all this contention dies down, I wonder if picking 2 winners each month, having them trade $50k apiece for a month, and then giving whichever one leads at the end all $100k, might not be an option to eliminate some of the gaming risk.
And taking that idea further, IB's minimum PDT account is $25k, so if the Q picked 4 winners, spreading the risk even thinner, then after 21 trading days the best of the bunch accumulates all $100k.
The runners-up could perhaps be awarded some stipend, or even some portion of their algo's ROI.

Spreading the risk across multiple algos seems prudent, don't you think? And with four winners, there's more love to spread to all those starving quants. Yeah, I get that there is a new selectee every month, but still, the more algos trading real money, the better off the Q will be come fund-building time.

@Ken H. Your name sounds really familiar; TWS list maybe?

Glad to see that some folks are critiquing these aspects of the "Open". But it is all rather... confusing.

I find myself wondering - in what universe does a real hedge fund / prop shop fund or pay a portfolio manager or trader without

a) having full access to their research and code and/or
b) simply OWNING all that research and code

??? (h/t Market Tech for saying this, I think, in another way)

Seems like Q is saying "we want to try being a 'quant' hedge fund, but we don't really have the right people or capital to hire the appropriate portfolio managers, traders, programmers, etc., so we're 'crowdsourcing' it; offering to pay commission on winning strats, but no salary, benefits, etc." Grant's point about asking for more $$$ is very much to the point here: you want IP? Then actually pay me.

Let's say your algo is chosen, and makes money (20%+ p.a. for the sake of argument). What are the contractual responsibilities between the parties ongoing, and how does it play with Q's board, backers, etc.? If the "rules" get changed down the road, why should the "quant" want to stick with it, outside of the potential (as Matthieu puts it above) of some chance of "free money"?

Now, the crowdsourcing idea is not a bad one per se, but the rest of it... strikes me as a startup thrashing around (as so many do, for good and for bad) looking for a revenue model that works. But it doesn't look much (to me) like a proper hedge fund or prop desk. OK, if you say "Q is trying to be different and for the better", ... well, sure, maybe, but it can also lead them down such rarely trodden paths that all parties can get into nasty trouble (legal, etc.) with no normative ways to deal with such problems. From a lawyer's perspective, why go there? Why create the risk?

Personally, I find the platform too clunky and frustrating to push much more, but I do enjoy browsing this forum for ideas and seeing how people come up with pythonic approaches to different tricks I usually find in R, Matlab, etc. out there on the web. To that end, I hope Q finds a way to stay in business. But I do wonder if this contest and "fund" concept are thought through sufficiently to be viable as a business, and not indeed a potential basket of liabilities.

@Michael F. I think your post contains good points.

I believe the biggest source of angst here is time. All this just takes so much time to resolve. We're all impatient to see the machine start cranking out dollar bills, ka-ching, ka-ching, into somebody's wallet. Anybody's wallet. Arrgh! but the financial world moves so slowly.

Just a correction: many prop shops and multi-manager funds allow people to trade with the trader retaining full IP ownership.

@Michael S - re: prop shops, mea culpa, I should have recalled that this is indeed the case for them.

Surprised you would say that about funds, but I will defer to your (presumably) greater knowledge and experience. I do wonder how they structure the full relationship then, if it is not simply a split of tradeable funds/costs, but that is probably too off-topic for this forum (or perhaps not if it is going directly to Q's business model... but then I'm not on its board).

@Market Tech - maybe for some folks it is about time... to me, it looks like uncertainty about rules, structure, and expectations about relationships and responsibilities.

Experienced prop traders with a long and stellar trading track record may be fully funded by a prop-shop's own capital. And such traders may show up with their own techniques and strategies and never be questioned (until they lose serious capital). Inexperienced traders that are funded to run money would normally have to put up some portion of the capital they will trade. No skin, no win. These traders also may be allowed to trade in the dark. But the third case, unknown traders, with minimally tested strategies who put no capital up for risk are not going to be allowed to trade the fund's bank without full disclosure of their trading mechanisms. The Q-Open is very much in the realm of this third case.

As confirmed by Alisa on https://www.quantopian.com/posts/15-minute-delayed-live-trading-orders-cancelled-at-end-of-day, there is a significant difference between backtests and Quantopian simulated live trading on the one hand, and real-money execution at Interactive Brokers (IB) on the other. With IB, at the end of the trading day, Quantopian cancels all open orders automatically, but under backtesting and simulated live trading, the orders remain. It's perhaps a bit of a stretch, but this fact could be exploited by a clever algo writer as a way of gaming the competition. For example, say he has a strategy that works well, but requires orders to remain overnight. He could ignore the fact that his algo is completely invalid for real-money trading, and use it for the backtest and paper trading portions of the contest. Then, assuming he wins, he could switch to a viable algo for real-money trading.

There is a weaker form of this gaming approach, whereby a contestant runs his algo with and without code to cancel orders at close (see example code posted by Paul Perry on https://www.quantopian.com/posts/15-minute-delayed-live-trading-orders-cancelled-at-end-of-day). Say the code without the cancellation performs a bit better. Then he's faced with a dilemma: give himself an edge in the competition but run afoul of the no-gaming rule, or fully align his code with real-money trading and not cheat.

Aside from the remote possibility of this being a significant gaming exploit, I would be concerned that it'll taint the statistics that Quantopian is building up across all competitors. So, why not eliminate the discrepancy by adding automatic cancellation of orders at day's end to the backtesting and paper trading engines?
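For reference, here's a minimal sketch of the kind of cancel-at-close logic involved (my own version, not Paul Perry's actual code), using the documented schedule_function, get_open_orders, and cancel_order APIs:

def initialize(context):
    # Run the cancellation routine one minute before every close, so
    # backtests and paper trading match IB's real-money behavior.
    schedule_function(cancel_all_open_orders,
                      date_rules.every_day(),
                      time_rules.market_close(minutes=1))

def cancel_all_open_orders(context, data):
    # get_open_orders() returns a dict mapping each security to its
    # list of still-open orders.
    for security, open_orders in get_open_orders().items():
        for open_order in open_orders:
            cancel_order(open_order)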

@ Michael Fischer,

A bit off topic for this thread, but here's some feedback on the Quantopian Manager's program:

Note in https://www.quantopian.com/posts/quantopian-managers-program-algorithm-selection-and-compensation under "Qualitative Methods" that, as a Quantopian Fund Manager candidate, they basically want to know all the nitty-gritty details of your strategy. One would think that a Quantopian lawyer will draft an NDA for all parties, if this is gonna follow typical business practices. Also, there is a kind of coercion at play, because say I'm competing (most likely poorly) against some hot-shot traders to be selected as one of the Managers. Why wouldn't I just reveal my code, to give myself an edge, reducing the risk to Quantopian considerably? Also, I might want someone at Quantopian to go over it with a fine-toothed comb and make recommendations for compatibility with their platform, present and future. And if they could make suggestions for improving performance, it would be in the interest of all parties.

Also, I wonder what Quantopian is expecting with regard to negotiations? Or are they thinking they'd lay out terms--take 'em or leave 'em! Along the lines of my comment above, code revelation could be a very powerful negotiating tool, "If I show you my code, I'll need twice as much capital, an equity stake in Quantopian, and a six pack of my favorite beer...well, make it a year's supply!"

Grant

Charles Brown has disappeared from the leaderboard?
EDIT: whoops, missed the earlier post, sorry!

Hello Fawce & Co.,

Any more thoughts on how to make the contest less game-able or un-game-able?

Above, you state "The contest is a preliminary round for selecting algorithms for the Quantopian hedge fund." However, there's nothing in the contest rules to this effect that I see. But if this is what you're after, it seems that they should be amended. The problem is that given the possibility for gaming backtests, you'll eventually want to look at algos with longer live trading track records (e.g. 6 months or more). My read is that there is a strong possibility that without examining code and other details, you won't be able to suss out cheating. So, there should be an incentive for contestants to keep their best algos running indefinitely. As it stands now, there is actually a disincentive to longevity, since stopping an algo for the next month's contest and re-optimizing its backtest would give a competitive edge. However, if someone is actually interested in making a case that they have an algo worthy of investment by the Fund, they'd be prudent to just let it run, despite the present contest rules, since you'll eventually only be interested in long-running algos (since backtests are game-able).

Also, in case someone missed it, Kevin Quilliam posted another potential gaming scenario on https://www.quantopian.com/posts/new-rules-and-modifications-for-the-march-contest.

Grant

As I recall, it was the real-money track record of the winners that was going to be eligible for the fund.

That said, it's probably a good idea to leave algos running, since I suspect they'll change the scoring to somehow shade in favor of longevity. And besides, it's good fun. I am hoping they add more stats to the leaderboard, including time running live and little sparkline equity curves.

Had a long commute today and used it to come up with the following set of straw-man rules to prevent gaming; hopefully it will stimulate even better ideas. Ordered in rough precedence, and some rely on others to work, i.e. you need to do 1 if you're going to do 3 for 3 to really work. Obviously it won't please everyone, but it tries to minimize the pain to ensure a fair contest.
1. Disable all date functions in code. Not scheduling functions, just functions that allow you to know today's date. Alternately, if you use any date function, you agree to allow Q to look at your code if you win, before they commit $ to your strategy. Prevents date-based lookup tables from gaming the backtest, and prevents changes in strategy between backtest/live/real-$ trading.
2. If you win, you agree to allow Q to examine all csv uploads, and you agree to explain them if they aren't obvious. Note you could obfuscate your algorithm by uploading a bunch of data you don't actually use, but you have to be able to explain all of it and how it could conceivably be used.
3. Backtests are done as a series of random draws of twenty-four 30-day periods, x from 2014, x from 2013, x from 2012..., to synthetically create a 2-year period (see the sketch after this list). The random draws are with replacement, i.e. it's conceivable that you could end up with both June 15th-July 15th, 2014 and June 1st-July 1st, 2014. All algorithms are scored against the same set of random draws, but the draws and scoring aren't done until the day the contest goes live, so algorithms can't be optimized against them. This prevents a bunch of backtest gaming/overfitting strategies.
4. You only get one entry. This prevents multiple entries with nearly opposite strategies.
5. The backtest doesn't go away until an algorithm has run for at least 6 months in the contest. Algorithms get more and more weight the longer they've run, i.e. an algorithm that has run in the contest for a year would end up beating an algo with identical results that ran only a month. Would need to think about how that weighting happened.
6. Your run time only counts if it's contest runtime, i.e. you had your one and only one algorithm entered in a contest that entire time. Otherwise someone could just start 100 different algorithms running now, and in a year they'd have at least one doing really well that they'd then enter.
7. All special-dividend and spin-off stocks are excluded.
8. All algorithms are run for x days after contest close, and all dividends are applied/deducted back to the in-contest longs/shorts as of the ex date, before determining the winner.
9. Leveraged ETFs allowed if you meet appropriate reduced margin standards. OK, this isn't a gaming issue, just a pet issue of mine because I have some great 2/3X strategies!
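To make rule 3 concrete, here's a rough sketch of the draw procedure (my own illustration; the eight-draws-per-year split over 2012-2014 is an assumption, chosen to reach twenty-four windows):

import random
from datetime import date, timedelta

def draw_windows(years=(2012, 2013, 2014), draws_per_year=8,
                 window_days=30, seed=None):
    rng = random.Random(seed)
    windows = []
    for year in years:
        first_start = date(year, 1, 1)
        # Latest start date that keeps the window inside the year.
        last_start = date(year, 12, 31) - timedelta(days=window_days)
        span = (last_start - first_start).days
        for _ in range(draws_per_year):
            # Drawing with replacement, so overlapping windows can occur.
            start = first_start + timedelta(days=rng.randint(0, span))
            windows.append((start, start + timedelta(days=window_days)))
    return windows

# The draw would be generated only on the day the contest goes live,
# then every entry scored against the same set of windows.
print(draw_windows(seed=20150301)[:3])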

+1 for Kevin's idea

+2 for Kevin as well. Also (and maybe this is already mentioned somewhere above), why don't all algos in the competition go live at the same time? As mentioned in this thread, your start date can have a big impact on your performance... I was surprised that we all didn't go live at the same time. 2 cents.

Simon,

Sounds right that it would be the real-money track record for eligibility to compete for a Fund manager spot. But to make it to that point without your own capital, you'd have to win a contest round.

Time running live would be handy, I agree. Also, I was thinking: what if Quantopian, as a rule, just grabbed every algo submitted and ran it indefinitely live, publishing the stats as they do for the contest? In a year or two, we'd have a nice set of benchmark stats for their simulated live trading platform. The contest could still run in parallel with the stats-gathering exercise; the algos would just be copied over to the stats accumulator, and users could not stop them.

Grant

-1 to six of the nine on KQ's list, neutral on three.

Intellectual property to remain sacred as part of the brand.
Q can already see csv links accessed and their content.
Opposite strategy pairs easy to spot/disqualify.
Backtests mere appetizer, zero percent in the final score already.
Length of time paper trading is rewarded already.
Separate contest for leveraged ETFs.
Disable date functions, perhaps.
Dividend/spin-offs excluded, no opinion.
Dividends applied/deducted, x days after, don't understand.

A lot hinges on the investor timeline and free enterprise.

Key question for Q: What exactly determines the winner?
Solely the highest score?
Or will human minds look at the top x and decide what looks best?
(matching order date/time and order types with charts to see who nailed the ideal trades, etc.)

Edit: Since posting this I had a typed chat with Alisa, who provided a quote, and I gather the backtest maybe can sometimes count after all in the final score, perhaps depending on when the entry was made; it's not so clear. If I don't currently understand that right, then I would have to change my mind on some of those.

Seems Charles took one for the team.
I just wanted to check what the ranks would be if we took the paper-trading score only.

user_name            Current Rank    P-Trading only rank
M. Schäfer           1               1
Peter Willemsen      28              2
Jamie Lunn           6               3
Matthieu Lestel      4               4
Desmond Ng           19              5
Simon Thornington    11              6
Kyu P                2               7
Antonius teja        5               8
Gary Hawkins         17              9
Grant Kiehne         3               10
long chen            15              11
Mark Stillings       7               12
Jordi Villar         9               13
Ajay H               18              14
Jesse Pardue         21              15
Grant Kiehne         8               16
Gary Hawkins         12              17
long chen            32              18
Emil Tarazi          32              19

It seems Schäfer might have a good strategy: even without his backtest he would be first in the ranking, whereas others would drop considerably without the support of their good backtest numbers. Some algorithms might have more paper-trading weight than others, since it starts counting on the day they are published.

At these timescales I think luck plays a large role. Not to discount anyone else's algo, but certainly mine hit a lucky tailwind right after live simulation began.

Luck plays a giant role at these time scales; there is no way to get around it. The current #1 has a backtest Sharpe of 2.238 and a paper-trading Sharpe of 16.
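A quick Monte Carlo illustration of that point (my own sketch): simulate a strategy with zero true edge and look at the spread of annualized Sharpe ratios you get from only 30 trading days.

import numpy as np

rng = np.random.default_rng(0)
trials, days = 10000, 30
# Daily returns with zero mean and ~1%/day volatility (an assumption).
rets = rng.normal(0.0, 0.01, size=(trials, days))
sharpes = rets.mean(axis=1) / rets.std(axis=1) * np.sqrt(252)
# Even with no edge at all, the 5th-95th percentile range spans
# roughly -5 to +5 annualized Sharpe over a single month.
print(np.percentile(sharpes, [5, 50, 95]))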

One metric that I've used in the past is what I call P&L linearity. It's rather like the stability measurement, but it uses the P&L after every trade rather than the log returns of the equity curve. One of the reasons this works well is that it takes into account the number of trades. One must consider the trading frequency in addition to returns to understand the true profitability of an algorithm. One could simply buy and hold the perfect stock, and the contest would never know that you'd only traded once.

Many years ago I formalized this concept with a bit of R code. You can view it here: P&L Gauge for judging linearity of trade returns

As you can see, there is an optimum pace of trading that is important, and a total number of trades too. The bunching of trades is bad, as is sparsity.
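Without claiming to reproduce my R code, a bare-bones version of this kind of metric might regress the cumulative per-trade P&L against the trade number and report the R-squared (the full version also weighs trade count and pacing, which this sketch ignores):

import numpy as np

def pnl_linearity(trade_pnls):
    # R^2 of cumulative P&L vs. trade count; 1.0 means perfectly
    # steady, linear accumulation of profits.
    cum = np.cumsum(trade_pnls)
    idx = np.arange(1, len(cum) + 1)
    slope, intercept = np.polyfit(idx, cum, 1)
    fitted = slope * idx + intercept
    ss_res = np.sum((cum - fitted) ** 2)
    ss_tot = np.sum((cum - cum.mean()) ** 2)
    return 1.0 - ss_res / ss_tot if ss_tot > 0 else 0.0

print(pnl_linearity([10, 12, 9, 11, 10, 13]))  # steady gains: near 1.0
print(pnl_linearity([0, 0, 0, 500, 0, 0]))     # one lucky trade: lower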

Linearity is an attractive metric, but it does throw out a lot of good ideas with the bathwater. For example, look at an algorithm that takes advantage of fat tails. Instead of picking up nickels in front of a steamroller, it drives the steamroller, losing a few nickels every day but picking up a big gain on a regular but random basis. This can be a very sound strategy, but a linearity test would judge it as awful and throw it out. On the flip side, a strategy of picking up nickels would be rewarded by a linearity metric right up to the day you get run over by the steamroller.
IMHO the overriding weakness of algo trading is its susceptibility to fat tails, since the very nature of the beast is backtesting against a limited set of data that doesn't include all possibilities. Incentivizing linearity would further drive toward nickel-pickup strategies, which seems like the last thing you'd want if running an algorithm-based fund.

Fair points. More characteristics to add to the judgement criteria. This particular aspect of linearity does bring into focus the question of whether this is trading or investing. Nimble traders will avoid the squish. Lazy investors will suffer the deep drawdowns. The steamroller presses on but does not get every trader. The steamroller, when driven by a drunk, drives in circles, or off a cliff. So generalities are just as misleading for both sides of the argument.

I am a fan of GeometricReturn / MaxDrawdown.
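That figure is easy to compute; a minimal sketch, assuming a daily equity curve as input:

import numpy as np

def geo_return_over_max_dd(equity, periods_per_year=252):
    equity = np.asarray(equity, dtype=float)
    years = len(equity) / periods_per_year
    # Annualized geometric return from first to last equity value.
    geo_return = (equity[-1] / equity[0]) ** (1.0 / years) - 1.0
    # Max drawdown: worst peak-to-trough decline along the curve.
    running_max = np.maximum.accumulate(equity)
    max_dd = np.max(1.0 - equity / running_max)
    return geo_return / max_dd if max_dd > 0 else float("inf")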

Market Tech,

My two cents is that you may have a valid point regarding frequency of trading. The problem is that if the contest were scaled to a gazillion participants, and a bunch of them just put together random buy-and-hold portfolios, some of those portfolios would be superb (by the law of monkeys on typewriters), even if they didn't apply look-ahead or any other bias. I don't think this is what Quantopian is looking for, but it is not clear how the contest metrics and administration would preclude it. Maybe there could be an "activity" score, so that if you just buy and hold, there is a penalty? This would also help penalize cheaters, since somebody could just set up zipline and run it over a slew of securities until a winning buy-and-hold portfolio pops out.
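For illustration, such an activity penalty might look like the following (purely a hypothetical sketch; the min_rate threshold is made up):

def activity_adjusted_score(raw_score, n_trades, trading_days, min_rate=0.1):
    # Trades per day, capped at 1.0 so hyperactive churn isn't rewarded.
    rate = min(n_trades / max(trading_days, 1), 1.0)
    # Below min_rate trades/day, the score decays linearly toward zero.
    penalty = min(rate / min_rate, 1.0)
    return raw_score * penalty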

Grant

Hi all,

I do not understand how you get a long-term Sharpe ratio over 3, 8, or 10 with execution costs and without backtest bias. How is it possible?

Anyone who could actually achieve the Sharpe ratios at the top of the leaderboard without overfitting would probably not upload their algorithm, since it would be worth a lot.
I think the only way to avoid "cheating" is to say that the algorithm should take N symbols and trade them over X periods, so it could be tested with gold, SPY, and Apple on a random set of time periods.
It would probably kill some ideas, but it would make it much easier to be certain the algorithm is really good, compared to the current ranking.

Rather simple/silly way to game the contest:

Keep buying at $0.52-ish and keep selling at $8-ish... it's currently only been running for under a day, but over time it'll probably be up quite a bit...

https://www.quantopian.com/posts/backtest-issue-appears-as-though-buying-one-position-but-selling-another


# Put any initialization logic here. The context object will be passed to
# the other methods in your algorithm.
def initialize(context):
    pass

# Will be called on every trade event for the securities you specify.
def handle_data(context, data):
    # Implement your algorithm logic here.
    # data[sid(X)] holds the trade event data for that security.
    # context.portfolio holds the current portfolio state.
    # Place orders with the order(SID, amount) method.
    # TODO: implement your own logic here.
    order(sid(47961), 50)
    order(sid(47961), -context.portfolio.positions[sid(47961)].amount,
          style=LimitOrder(5))

p.s. this is running in the contest just to see where it lands on the leaderboard

OK - the test mentioned above is being stopped now, since I need the third slot for another algo for the contest.

Results since Sept 16 are posted below - Q is aware of the issue and is working on fixing it (if not already fixed):

rank 1 - annual return
rank 1 - Sortino
rank 2 - Sharpe
rank 16 - stability

overall rank, though: 199
