long/short OLMAR hack

Here's a hack, based on the OLMAR algorithm (see http://arxiv.org/ftp/arxiv/papers/1206/1206.4626.pdf). It goes long/short large-cap NASDAQ stocks, with a position in QQQ to neutralize beta when necessary.
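For readers who haven't seen the paper, here's a minimal sketch of the OLMAR-1 rebalance step written from the paper's equations, not from the attached algo; the function names (`simplex_projection`, `olmar_weights`) and the default `window`/`eps` values are my own choices:

```python
import numpy as np

def simplex_projection(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def olmar_weights(prices, b, window=5, eps=1.25):
    """One OLMAR-1 step.

    prices: trailing price matrix, shape (lookback, n_assets), most recent row last.
    b: current portfolio weights (nonnegative, summing to 1).
    """
    # Predicted next-period price relatives: moving average / latest price
    x_tilde = prices[-window:].mean(axis=0) / prices[-1]
    x_bar = x_tilde.mean()
    # Step size from the OLMAR margin condition (eps is the reversion threshold)
    denom = np.linalg.norm(x_tilde - x_bar) ** 2
    lam = 0.0 if denom == 0 else max(0.0, (eps - b @ x_tilde) / denom)
    # Update toward the predicted relatives, then project back onto the simplex
    return simplex_projection(b + lam * (x_tilde - x_bar))
```

The paper's version is long-only (weights on the simplex); the long/short variant in the attached backtest centers the weights, which this sketch doesn't show.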


Here's one with QQQ replaced by SPY. Looks a bit better, but it could just be a fluke.

Looks great, Grant. Still enjoy the OLMAR algorithm. I computed a tear-sheet for the second one which looks quite good indeed.


Returns show ~40%, twice the benchmark. Profit vs. Risk is higher, around 50%, since the algo didn't put the entire starting capital at risk.

This algo has low beta, low volatility, and high stability, and may be the winner of the 6-month contest, but after running a backtest over a full market cycle I am not as optimistic as Tomas and garyha about the performance of this algo, especially in the period 2009-2013.

Thanks Vladimir,

I've seen the same thing. It could be a matter of parameters being tuned to optimize recent performance.

Grant

Vladimir, I'm neither optimistic nor pessimistic about this algo; I didn't mean to imply that. Although, having thought about it, I'm pessimistic about OLMAR in general, because the winning code in the first contest, for example (it was OLMAR), had a drastic -39% drawdown in 2008 (when I tested it using a start date of early '07 or so).

No, my point was that there's another metric one can use, and it can be advantageous to everyone. It's called (for now) PvR_Ret in the custom chart below, standing for Profit versus Risk (the maximum amount actually laid on the table in exchange for stocks). It would also be an advantage to you in making your point, because, as you can see below, when returns are negative, PvR_Ret is at times ~60% more negative, and when positive, it is more positive. Both are because risk was not as high as starting capital. It can help one see more clearly during development.
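As I understand the metric, a bare-bones version might look like this. This is a sketch under my own assumption that "risk" means peak absolute dollar exposure; the actual PvR code behind the custom chart likely differs:

```python
import numpy as np

def pvr(profit_series, exposure_series):
    """Profit vs. Risk: cumulative profit divided by the maximum capital
    actually committed (peak absolute dollar exposure), rather than by
    starting capital."""
    max_risked = np.max(np.abs(exposure_series))
    if max_risked == 0:
        return 0.0
    return profit_series[-1] / max_risked
```

For example, a $100 profit on a strategy that never had more than $500 in the market is a 20% PvR, even if starting capital was $1,000.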

Tim Berners-Lee talked about how frustrating it was in the early days trying to promote the World Wide Web he invented. People are automatically resistant to the new. This is like that.

@ Thomas,

I'm not quite sure how to pose the question, but is there some way to sort out whether the performance is "real" and not due to chance, overfitting, or whatever? It seems that unless one can pull together a story that near-term performance is understood and will persist, attracting capital will be difficult. Or, even if an explanation can't be provided for recent performance, might there be a way to minimize risk by detecting a deviation from the trend?

@Grant,

That's a very important and timely question which I'm currently working on. Our current way of thinking is to place most emphasis on the paper or real-money track record of a strategy and evaluate whether it matches the backtest. So if you have something that you think would be fund-worthy, definitely set it to paper trading (we can also see when algorithm code was most recently edited, so we have a pretty good handle on which data was available at the time the algorithm was written). Unfortunately, as that data only accumulates at the speed of time, there are opposing goals: wanting to deploy capital sooner, but also wanting sufficient out-of-sample (OOS) data so as not to be fooled by randomness.

That's where statistics comes into play, and where a lot of the work on pyfolio ties in. Currently there are three methods that can be used for this:
* the linear cone (this is the first plot of the returns tear-sheet): you can look at whether the OOS period leaves the 1SD or 2SD cone and deviates from expectations.
* the Bayesian cone (see http://blog.quantopian.com/bayesian-cone/ and http://quantopian.github.io/pyfolio/bayesian/)
* the BEST model (a Bayesian T-Test essentially): This just compares the in-sample Sharpe ratio to the OOS Sharpe ratio in a Bayesian way. See here for the original paper: http://www.indiana.edu/~kruschke/BEST/
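To illustrate the first method, a rough sketch of a linear cone follows. This is my own simplified construction, not pyfolio's implementation; the name `linear_cone` and the random-walk band scaling are assumptions:

```python
import numpy as np

def linear_cone(is_returns, oos_len, num_sd=(1, 2)):
    """Project in-sample daily returns forward as a cone.

    The center line grows at the in-sample mean return per bar; the band
    half-width at horizon t is k * std * sqrt(t) (random-walk scaling),
    for each k in num_sd.
    """
    mu, sd = is_returns.mean(), is_returns.std()
    t = np.arange(1, oos_len + 1)
    center = mu * t
    bands = {k: (center - k * sd * np.sqrt(t),
                 center + k * sd * np.sqrt(t)) for k in num_sd}
    return center, bands
```

Plotting OOS cumulative returns against the 1SD and 2SD bands, and flagging an exit from the 2SD band, is the deviation check described above.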

So what I would do in this instance is paper-trade this algorithm (the longer the better, at least one month) and look at the cone plot in the tear sheet. Writing this, I realize that it's not yet easy in research to link a backtest to a paper-trading algorithm in order to create the cone. get_live_results() is a first step, and you could link them manually, but we will make this more user-friendly.

See also slides from a recent talk that ties this together: https://docs.google.com/presentation/d/1rHFHla_I6teK5A-c8jglRiRR9atnGycdrxUYUqjKBq8/pub?start=false&loop=false&delayms=3000&slide=id.gcc36f9863_0_5

What are your thoughts on this?

Thanks Thomas,

No deep thoughts yet. It seems from an investor standpoint, it is a matter of "How much money should I put toward this, and how will I know when to add more or to pull out?" It's also a matter of expectations. For a money market fund and certainly for a bank CD, if you hear that you've lost $5 of capital, it is not a good sign. On the other hand, one sorta expects to lose when buying a lottery ticket.

One potential problem is, will investors care if a dramatic deviation is to their benefit? So do your stats capture this? Or are you just looking at deviations from expectations relative to the prior trend, regardless of direction?

I've attached another version. The leverage varies between 1 and 2 (it can be tweaked by adjusting context.leverage = 1.0). Basically, if the algo is neutral without SPY, the leverage is 1. If it is all long or short stocks, then the leverage is 2, since SPY will be short or long, to neutralize beta. This may make no sense, but I'd coded it this way originally and just let it fly.
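The hedging logic described above can be sketched roughly like this. It's a simplified illustration, not the attached code; `target_weights` and the ticker-keyed dict are my own invention:

```python
def target_weights(stock_weights, hedge_sym='SPY'):
    """Hedge net exposure with an index ETF.

    If the stock book is already dollar-neutral, the hedge is zero and
    gross leverage is 1; if the book is fully long (or fully short), the
    hedge is an equal-and-opposite index position and gross leverage is 2.
    """
    net = sum(stock_weights.values())
    weights = dict(stock_weights)
    weights[hedge_sym] = -net  # offset the net stock exposure
    return weights
```

So an all-long stock book of weight 1.0 gets a -1.0 SPY hedge (gross leverage 2), while a neutral book gets no hedge (gross leverage 1), matching the 1-to-2 leverage range described.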

Note set_commission(commission.PerTrade(cost=0.0)) in the attached algo.

Thomas - is the Bayesian cone with beta/APT/FF factors part of pyfolio? That looks pretty neat.

Hi Thomas,

Do you have access to any of the individuals looking to fund Quantopian algos (presumably VCs at this point)? Presumably, they have a pretty good idea of what they'd like to see before opening their wallets. In the end, that's all that really matters, otherwise we are just doing an academic exercise. Have you gotten any feedback from them on any of this?

Grant

Simon: I'm glad you like it. I think it's a much more useful model than the Bayesian cone, which does not take market correlations into account. It's possible. I will try to come up with an example.

Grant: Sure, we have great advisors and have a pretty good handle on what they want, but how to get there is the question and something we will have to solve :).

Hi Thomas,

I guess I'm still trying to understand how I go from what I've presented above to capital from one or more of your "advisors" (presumably people with money or access to money). You said "So what I would do in this instance is to paper-trade this algorithm (the longer the better, at least 1 Month) and look at the cone plot in the tear-sheet." So, then what? You say "we have great advisors and have a pretty good handle on what they want" so I need to know how I get to the point that I am able to give them what they want.

I have no problem continuing to iterate on this, but there needs to be a reasonable shot at getting some money. Have your advisors given you anything definite like "Show us A, B, C, & D, and we'll put up some money" or are you still trying to work that out?

On a more technical note, it seems that with the cone business, you are trying to determine if the algo is generating returns in a consistent fashion. However, isn't there a difference in upside versus downside? If paper trading or real-money performance is better than backtesting would suggest, then shouldn't it be treated differently than if the algo tanks (unless the game plan is to pull real money out of algos that do exceptionally well relative to their past performance, to lock in gains)?

Thomas,

Also, I would think a little harder about the greatness of your advisors. The whole Q fund/contest effort has been pretty rough, it seems. The fact that they let you launch with long-only algos, correlated to the market, unhedged, etc. would suggest that either they weren't tuned in, or they didn't know what they wanted. And it has been a year since you announced the fund concept, and you've yet to get any real capital deployed, which doesn't seem like a resounding success. And they've sunk a lot of capital into Q, without any revenue yet. At the risk of sounding like a pessimist, the data would suggest that your advisors might not be VC rock stars.

Of course, they are paying you, so you are supposed to say that they are great.

Grant

Grant: What we're looking for is actually pretty simple:
* A strategy with a reasonable backtest: no crazy drawdowns, beta-hedged, no crazy concentration risk, and long/short (more specifics can be found in Jess' webinar: https://www.youtube.com/watch?v=-VmZAlBWUko).
* A good OOS period over a few months.

I really recommend the webinar, and looking at your algorithms using pyfolio, as that makes obvious certain issues that are easy to miss otherwise.

Does it have to be long/short? I've been looking into different ways to lower beta and volatility without going short; there are many ways to do that. It seems so limiting to eliminate entire groups of strategies without individual consideration given to each strategy.

Thanks Thomas,

Ha! Just started listening. Guess that's what I get for posting! If only I could get paid for writing bad algos...

Grant

hahaha grant!
I would be a millionaire if that was the case :)!
best
Andrew

Spencer: A valid question. My take is that long/short is a quick way for us to trim down the uninteresting strategies, but in my opinion there can certainly be beta-neutral strategies that are long-only and still interesting for the fund.

Grant: :)

What we're looking for is actually pretty simple

Hi Thomas,

As the saying goes, the proof is in the pudding. Has anyone at Q managed to write an algo that passes muster? Or a Q user? Without 5 or so unique, viable, published examples, to take us out of the realm of dreams, with real money allocated, it is hard to tell what you are looking for. You can say anything, but until you actually put up the money with full disclosure, it is just words. At one point, I think Jess said that even 7% annual returns at 0.7 Sharpe would be o.k. So, I'd hope that you could whip something together that meets your minimum criteria, put $1M or so toward it, and get it rolling as an example that you mean business. Or are you hoping for something more spectacular?

Grant

Hi Grant,

I agree with everything you wrote and that's indeed what we're doing. We'll share more info as it becomes available.

Thomas

Here's another variant, in case somebody wants to play around with it. Looks sorta decent. --Grant

Longer-term, it doesn't look like much. I haven't ever been able to find anything useful signal-wise from moving averages....

Hello all. I'm back for a shot of nostalgia.

The tear sheet's introspection into a strategy's efficacy looks to be truly useful. There is one aspect, however, which looks like it's missing. The Ulcer Index, or underwater measurement, doesn't capture what I consider a more important measurement: namely, the Surrender, or Give-back, timespan. This duration measures how long until one finds that one's portfolio has fallen to some historic level, indicating that from that point to this you've earned exactly nothing on your money.

The underwater metric seems to measure from peak through a valley then up through the highwater mark. And is useful I suppose. But this other measurement examines the timespans between portfolio highs and the return to those highs -- no matter what has occurred between those points. This timespan, to me, represents the strategy's actual profit retention capability. The duration for this measurement, over the life of a back test, is the true test of whether a human would put up with a strategy and keep it in the market. If you knew that your strategy's portfolio value returned to prior absolute high after a year or so you'd say to yourself, "what the hell! Criminy, I'm back where I started. Forget this thing, I'm outta here." If the back test did that over and over would any trader have the stomach to keep their money in that strategy? Of course not.

What is the psychological maximum for this number? For investors who have a generational time horizon, that number might be years. For traders looking to make a buck every month or three, a six month surrender maximum would be too much.
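A simple way to compute the longest such timespan from a series of portfolio values would be the following sketch (the name `longest_surrender_span` is my own; it counts bars spent below the prior running high):

```python
import numpy as np

def longest_surrender_span(portfolio_values):
    """Longest stretch (in bars) between a portfolio high and the first
    later bar that regains it -- the 'dead money' duration described above."""
    running_max = np.maximum.accumulate(portfolio_values)
    underwater = portfolio_values < running_max
    longest = current = 0
    for flag in underwater:
        current = current + 1 if flag else 0  # reset the count at each new high
        longest = max(longest, current)
    return longest
```

With daily bars, a result of ~125 would mean roughly six months of dead money, the trader's pain threshold suggested above.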

[Red boxes are the obvious surrender timespans.]

Red boxes == dead money

Hi Simon,

Yeah, I haven't been able to tweak up this strategy for a 2-year timeframe, and then get a decent long-term backtest. Doesn't really matter to me at this point. It would seem that if I have a decent 2-year backtest and then 6 months or more of out-of-sample paper trading that looks good, it might be enough, but who knows. I've submitted some algos to the contest and they are doing well so far, but they could turn out to be turds or maybe I'll win again and get rejected for whatever reason, or get funded and fall flat. All fun.

Grant

This is probably more of a rant than I'm used to participating in...buuut....

Is a ten-year backtest even feasible?...it doesn't pass my sniff test.
Is there any strategy that will EVER pass that filter...a single timeseries measurement taking into account all
market forces and just winning...sounds too Isaac-Asimov-ian to me!

I can see modeling events that happen in that ten year frame, and accounting for them with a strategy
that takes care of those events. To me that's learned risk management...pyfolio lists some of those events...

So a two-year backtest that just wins...along with a boatload of learned-risk management add-ons,
which the contest doesn't take into account...unless you happen to hit one of those events...
sounds like the way I'll go...guess I'm more on Grant's side on this than Quantopian's...
alan

Alan,

I think that's the challenge for Q. Let's say I give them a 2-year backtest that looks decent, and then it is followed by N months of out-of-sample consistent returns. It ought to get some capital, but how much? And when to back off and reduce the allocation? That's the whole crowd-source concept in my mind. At this point, my sense is that they are looking for a handful of institutional-grade uber-algos, but in the long run they need hundreds/thousands of algos from the crowd to have something unique.

Grant