What can I conclude from the Winners' live performance?

Hey,

I am just wondering why the Winners' portfolios are not performing well. Should I take it as noise?

16 responses

A big component of my under-performance is due to short loan fees, which are not simulated by Quantopian during back-testing or live trading. Can't speak for the others.

@Simon
Thanks I really appreciate your response.

Also want to thank Simon for that revelation.
I hope someone/anyone will post some real loan fee details and then maybe folks can come up with some sort of function we can use to model shorting fees.

The SLB tool in Interactive Brokers seems to be a pretty accurate gauge of the actual borrow costs. If you are short $100k of something with a borrow rate of 4%, you can expect ~$340 cash to be withdrawn every month. If you leverage that to $300k, for instance, that's about $1,000 withdrawn every month, which doesn't show up in backtests. In other words, on $100k of capital you need to deduct 12-13% annually from that strategy's returns to account for the leveraged borrow costs.
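A quick sanity check of that arithmetic, as a minimal sketch assuming a flat annualized borrow rate charged on the short notional (in reality borrow rates vary daily and fees are typically accrued per day):

```python
def monthly_borrow_cost(short_notional, annual_rate):
    """Approximate monthly stock-loan fee for a short position,
    assuming a flat annualized rate charged on the borrowed notional."""
    return short_notional * annual_rate / 12.0

# $100k short at a 4% borrow rate: ~$333/month, close to the ~$340 observed
print(monthly_borrow_cost(100_000, 0.04))

# Levered 3x to $300k short: ~$1,000/month
print(monthly_borrow_cost(300_000, 0.04))

# Annual drag relative to the $100k of capital backing the $300k short
annual_drag = monthly_borrow_cost(300_000, 0.04) * 12 / 100_000
print(annual_drag)  # 0.12, i.e. roughly 12% per year
```

This is why a leveraged short strategy that looks fine in a fee-free backtest can bleed double digits per year in a live account.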

For an example of the damage, here's some live paper trading I've been doing on an un-leveraged strategy which shorts triple-leveraged ETFs and rebalances, similar to what some folks have been talking about lately.

http://imgur.com/2XxwOdm

The first is Q paper trading, the second is IB paper trading, which has the simulated borrow costs. I'm not saying these strategies are worthless, but you definitely can't ignore the costs.

My winning algo has a beta of 1.05 (current value from Quantopian dashboard). It is long-only, with a fixed universe of 18 securities. I haven't done much to analyze its behavior (although now that get_backtest() is available in the research platform, I could get a better feel for things). My sense is that initially I got whacked by having a big position in one stock, and having it tank. Lately, I seem to be closing the gap. Three more months!

Once the contest is over, I may just publish the whole thing (and prior to that, I may publish some screen shots of the performance and stats). In the end, it could be a nice case study for Quantopian to look at all of the data they and IB are presumably storing up. With the new low-beta rule, the algo is basically worthless for the hedge fund, so we might as well get something out of it.

I'm hoping to make a little money, but if not, I still want my picture taken with Fawce and a giant check for negative dollars made out to Quantopian!

As far as what to conclude, I think that Quantopian has more to learn about how to pick winners that will make money consistently over a relatively short period of time.

For me, it is long-only as well, with a fixed universe. I expect the beta to be in the 0.5 range, so it wouldn't pass the test now. I am still learning. :)

My conclusion, which isn't really new: 1 month out of sample plus 2 years in sample is not enough to predict an algo's short-term future returns, with the emphasis on the out-of-sample period. I suggest gradually increasing this as time goes by, maybe raising the minimum out-of-sample period by 1 month for every 2 months until it reaches 3 months. New entries will have to wait longer, but I think it would be worth the wait.

@Simon: how do you paper trade with IB linked to Q?

If you have an Interactive Brokers account, they'll give you a free paper trading account; you sign up online, if I recall. It's got a separate account number, and you can just connect Quantopian to it seamlessly.

Thanks, I did not know that the paper account gets a separate account number. I will try it out.

I think backtesting should always be an underestimate of live trading performance; Q's is probably an overestimate.
I think this is really important for the platform. The simulated algo can underperform the same algo run with real money, but it should not outperform it. I hope Quantopian has already done some tests with one instance of an algorithm running live and another in backtest, checking where the differences are.
It could also be simple overfitting. Honestly, the results put me off a little bit. I think the current ranking system favors those algorithms, and I will be surprised if the winners make good money, especially with a correction coming soon.

It is hard to tell if we are just seeing a work-in-progress, or something else. From the get-go, the contest and the Q fund were meant to be aligned. Get algos for the fund by running a contest. The fund was announced late last year. And presumably, the market for the conceived Q fund product was understood--pure alpha would be the way to go to capture large institutional investors. It seems things have been migrating in that direction, but it certainly wasn't the focus when the contest got kicked off. I'm wondering if this was intentional, not to make it too hard, cutting off too many strategies at the outset? Or maybe a bet that an algo like mine would really take off, and be a great marketing tool? Or maybe it was just a big 'oops' and someone at Q is in the hot seat? Kind of an odd evolution. Normally it would be "What does our market want? How can we supply it?" Maybe it wasn't firm in the minds of the Q that pure alpha was required when they conceived of the hedge fund?

The Q team and their advisors are not stupid, so my hunch is that we may be seeing an intentional ratcheting up of requirements. Start with an open jar of honey. Marketing 101. Gotta get 'em in the door first.

Nearly every startup iterates toward a solution, usually in an almost completely random zig-zag with a lot of dead ends, so that's probably a more realistic explanation than a desire to get people in the door and then move the goalposts. Especially since the initial Q team had almost zero finance industry experience and lots of technical folks, something they've recently been balancing out. In my observation, smart people tend to underestimate what they don't know about fields outside their expertise, which explains a lot about the non-linear path of any startup that's trying to disrupt an established industry.
Also, I'd have to respectfully disagree about a real algo never outperforming a simulated one. The slippage model is never going to be anything more than a very approximate model of the real world, and it will almost always overestimate slippage or stop you from trading in the minute you want to trade. As I've mentioned before, only the S&P 100 stocks trade with the kind of volume necessary to make the current or proposed slippage model even approximate the truth. The other 4,900 trade with little volume and big bid/ask spreads, but in reality have a deep enough book that they can take a relatively big order just a couple cents off the mid at any time without impacting the price. Any algo that trades non-S&P 100 stocks with any kind of frequency isn't even going to be entered in the Quantopian contest, and algo writers who work with those algos won't stay with Quantopian after they see the results of a couple of test runs, leaving a pretty big survivorship bias that would make one think that infrequently traded or S&P 100-restricted strategies must be best.
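To make the slippage point concrete, here is a sketch of a volume-share slippage model along the lines being discussed. The constants and the fill cap are assumptions modeled loosely on Quantopian's documented defaults (a 2.5% per-bar volume limit and quadratic price impact), not a verbatim copy of their implementation:

```python
def volume_share_slippage(price, bar_volume, order_shares,
                          volume_limit=0.025, price_impact=0.1):
    """Sketch of a volume-share slippage model (constants are assumptions).
    Fills are capped at volume_limit of the bar's traded volume, and the
    fill price moves against you quadratically in your share of volume."""
    side = 1 if order_shares >= 0 else -1          # buys pay up, sells receive less
    fillable = min(abs(order_shares), volume_limit * bar_volume)
    volume_share = fillable / bar_volume
    impact = price_impact * volume_share ** 2
    fill_price = price * (1 + side * impact)
    return side * fillable, fill_price

# A 500-share buy in a thin name trading 1,000 shares/bar:
# only 25 shares fill this bar, no matter how deep the real book is.
filled, px = volume_share_slippage(10.0, 1000, 500)
print(filled, px)
```

Under this kind of model a thinly traded stock can only ever fill 2.5% of its bar volume, which is exactly the over-conservatism described above: the real book may absorb the whole order a couple cents off the mid, but the simulation drags the fill out over many bars.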

Well, it may be true that "smart people tend to underestimate what they don't know about fields outside their expertise," but I gather that the need for a 'pure alpha' product for the institutional market was understood all along, no? Justin's fairly recent post describing the institutional hedge fund market was news to me (see https://www.quantopian.com/posts/june-contest-rules-update-its-all-about-that-beta-star). Maybe it was a surprise that hedged/low-beta strategies didn't bubble up in the first rounds of the contest and start printing money? At QuantCon, a couple of Q employees expressed some surprise/concern that my algo appeared to be long-only (which it is). I guess it was starting to sink in that some adjustments needed to be made. Zig-zag, as you say.

Have to hand it to Quantopian's radically open policy. I downloaded the live trading performance of the contest winners here. I had to do it in Excel, but I modeled a portfolio in which $100,000 is placed in each winner's bucket, with the returns tracked forward from 3/2/15. As of 3/16/16, the model shows a decline of 3%. Only Michael Van Kleek and Pravin Bezwada are profitable. Over the same stretch of time the SPY is down just 0.7%, and paid dividends. So as a 'prop trading firm,' Quantopian is not doing well. Good thing they are not talking about taking outside capital.
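The spreadsheet exercise above is easy to reproduce in a few lines of Python. The return series below are made-up placeholders, not the actual winners' data; the point is just the mechanics of compounding each bucket independently:

```python
import numpy as np

# Hypothetical periodic returns per winning algo (placeholders only).
winner_returns = {
    "algo_a": [0.01, -0.02, 0.005],
    "algo_b": [-0.01, 0.00, 0.01],
}

def equal_weight_portfolio(returns_by_algo, capital_per_algo=100_000):
    """Place equal capital in each winner's bucket, compound each
    bucket's returns independently (no rebalancing between buckets),
    and return the portfolio's total return."""
    total_start = capital_per_algo * len(returns_by_algo)
    total_end = sum(
        capital_per_algo * np.prod(1 + np.asarray(rets))
        for rets in returns_by_algo.values()
    )
    return total_end / total_start - 1

print(equal_weight_portfolio(winner_returns))
```

Swapping in the downloaded daily return series for each winner, with the series aligned on dates starting 3/2/15, gives the -3% figure quoted above.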

I keep thinking that Quantopian is falling victim to the Coin Flippers problem. If you run a contest with thousands of entries, by chance someone's algorithm (read: coin flipping strategy) will be just the right strategy for the market for that month. The person will be anointed, but going forward they are just facing the random walk like everybody else. Leaving myself a note to check again in one year.
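The coin-flippers effect is easy to demonstrate with a small simulation using purely random returns (a hypothetical sketch, no real contest data): out of a big enough field, the best in-sample entrant always looks impressive, but their next month is just another independent draw:

```python
import random

random.seed(42)
n_entrants = 1000

# One "in-sample" month of random returns per entrant: pure coin flips.
in_sample = [random.gauss(0.0, 0.05) for _ in range(n_entrants)]
winner = max(range(n_entrants), key=lambda i: in_sample[i])

# The anointed winner's next month is an independent draw from the same
# distribution, so in expectation they do no better than anyone else.
out_of_sample = [random.gauss(0.0, 0.05) for _ in range(n_entrants)]

print(f"winner's in-sample month: {in_sample[winner]:+.1%}")
print(f"winner's next month:      {out_of_sample[winner]:+.1%}")
```

With 1,000 entrants the in-sample "winner" is typically a ~3-sigma outlier purely by chance, which is the selection effect being described: the contest ranks the best past month, not the best expected future month.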

I wouldn't read too much into the contest results, and keep in mind that for the Q fund, they can select algos that don't win or that haven't even been entered into the contest. They just announced funding some strategies for the Q fund, so some elements of this crowd-sourcing scheme must be working. Also, note that they'll just be offering cash prizes for the contest, instead of putting $1M toward the algos (the capital will go toward their fund, I gather).