Hi everyone,
I've recently developed a new strategy; here is its tearsheet from 2018-2019.
The algorithm is based on simple mean reversion, but with a twist.
I'd appreciate any comments/suggestions for improvement!
Looks quite good to me. With such a short backtest though, how do you know that you haven't just massively overfitted to this time period? That would be my main concern. What was that Feynman quote again? "The first principle is that you must not fool yourself – and you are the easiest person to fool." :)
If this is your OOS test, then I'd say it's quite good, and I'd be interested to know how it compares to your IS trained model.
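For anyone wanting to put a number on that IS-vs-OOS comparison, here's a minimal sketch using synthetic return series (all figures made up; `annualized_sharpe` is just an illustrative helper, not anything from Q's API):

```python
import numpy as np

def annualized_sharpe(daily_returns):
    """Annualized Sharpe ratio from daily returns (risk-free rate assumed zero)."""
    r = np.asarray(daily_returns)
    return np.sqrt(252) * r.mean() / r.std(ddof=1)

rng = np.random.default_rng(42)
is_rets = rng.normal(0.0010, 0.01, 504)   # toy stand-in for the in-sample period
oos_rets = rng.normal(0.0008, 0.01, 504)  # toy stand-in for the out-of-sample period

# A large drop from IS to OOS Sharpe is the classic overfitting warning sign
degradation = annualized_sharpe(oos_rets) / annualized_sharpe(is_rets)
```

A ratio near 1 suggests the edge generalizes; a big drop suggests the IS fit was flattering itself.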
Hi Jamie,
I'm impressed with your win probability of >58.5% and these stats:
Profit factor: 1.72 / 1.94 / 1.52
Avg. trade net profit: $1.35 / $1.62 / $1.06
Avg. winning trade: $5.50 / $5.82 / $5.19
I'd be keen to see if you have plans to scale it up from a concentrated portfolio of ~9 stocks at >88% daily turnover :o)
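For anyone unfamiliar with those stats, here's a minimal stdlib sketch of how profit factor, average trade net profit, and win probability are conventionally computed from per-trade P&L (toy numbers, not Jamie's actual trades):

```python
# Toy per-trade net P&L in dollars (made-up numbers for illustration)
pnls = [5.5, -2.9, 6.1, -3.3, 4.8, -2.5, 5.2]

gross_profit = sum(p for p in pnls if p > 0)
gross_loss = -sum(p for p in pnls if p < 0)

profit_factor = gross_profit / gross_loss      # gross profit per dollar of gross loss
avg_trade_net = sum(pnls) / len(pnls)          # avg. trade net profit
avg_winning_trade = gross_profit / sum(p > 0 for p in pnls)
win_probability = sum(p > 0 for p in pnls) / len(pnls)
```

Note that profit factor is a ratio, not a dollar amount, which is why a value like 1.72 means $1.72 of gross profit per $1 of gross loss.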
Cheers
Hi @Joakim,
This is my OOS backtest; the original IS backtest was 2005-2009 and performed slightly worse. I'm still a little confused about why 2018-2019 proved to be such a good period for the strategy, especially considering how poorly most quant strategies performed in 2018.
Hi @Karl,
I'm going to give it my best shot to scale it up. The problem I'm facing at the moment is that when I expand it to ~40 names, the alpha drops substantially (to around 12%). I think the signal is really strong for the top names, but quickly loses its power to overcome slippage and commissions the further out we go.
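That decay pattern can be illustrated with a fully synthetic cross-section (all numbers below are made up; this is not Jamie's signal, just a toy in which predictive power is concentrated in the extreme ranks):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 500
signal = pd.Series(rng.normal(size=n), name='signal')
# Forward returns respond to the cube of the signal, so only extreme
# values carry enough edge to beat costs; the rest is mostly noise.
fwd_ret = 0.002 * signal**3 + rng.normal(0.0, 0.005, n)
round_trip_cost = 0.001  # 10 bps, a stand-in for slippage + commission

df = pd.DataFrame({'signal': signal, 'fwd_ret': fwd_ret})

# Expand the long book from the top 10 names outward and watch net edge shrink
net_edge = {
    top_n: df.nlargest(top_n, 'signal')['fwd_ret'].mean() - round_trip_cost
    for top_n in (10, 40, 150)
}
```

Under these assumptions the average net edge per name falls monotonically as the book widens, which is the same shape Jamie describes going from ~9 to ~40 names.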
Is this with default slippage and commissions? (If so, wowowow.)
What I find interesting is that Q's short-term reversion returns attribution hovers around zero, though it's safe to assume all your returns come from short-term mean reversion. Seems Q's reversion model is very specific and doesn't catch your implementation at all.
One issue might be the 9:30am rebalance. Real spreads at that hour are likely wider than modeled by Q. Same with the last rebalance of the day -- liquidity is supposedly better near close, due to the close auction being the most liquid time of the day. Lunchtime is supposedly really bad in terms of liquidity, due to all the traders being out to lunch. So I'd anticipate overall better real-world liquidity if you pushed all your rebalances back by an hour.
Hi @Viridian Hawk
This is with default slippage and commission.
That's an interesting concept regarding the time of day I should be rebalancing. I'll give that a shot soon; obviously I don't think that's factored into Q's model. I've never really thought about rebalancing times, but it does make logical sense!
(Apologies to everyone on the thread btw, just realised I used my alt account to reply higher up)
Yeah, I don't think changing the rebalance times as I suggested will improve backtest performance. (More likely to hurt it, since the market is more efficient when it is more liquid.) But it is more likely to help real-life performance. So it's something Quantopian looks out for when evaluating strategies.
@Viridian Hawk,
Isn’t volume highest at/near the open, (followed by the close)? Also, volatility might be higher at this time, but I’m not sure spreads would be wider, they should be narrower (with lots of volume on each side) if anything, no? So, the fill price modelled by the Q backtester at/near the open and close may be less realistic during these times due to higher volatility, and because Q only has minute bars, not because of wider spreads, no?
@Jamie,
I didn’t realize that you’re holding fewer than 10 stocks and trading 3 times per day. To me, that makes it a lot less impressive. The Q Risk model is not designed for intraday trading. Also, is it a static or dynamic universe of 10 names you’re trading? Either way, if your universe is mostly small, illiquid stuff, most of your profits may be coming from unrealistic fills. If it doesn’t scale, I’d be quite suspicious. If it’s a static universe, then most likely there’s some selection bias, I’d say.
Hi @Joakim,
Yeah, that's why I'm coming to the Q community: I'm struggling to expand the number of stocks it can hold without diluting the alpha source. If anyone has any neat tricks for doing that, that would be great.
The universe fortunately isn't fixed; it's built from a dollar volume filter and a few other things. The algorithm does scale, kind of... it copes with capital up to $1,000,000, then performance falls off quickly.
@Joakim, I believe the opening auction carries substantial volume. (Q doesn't allow us to participate in the opening auction.) This is followed by significant volatility and wide spreads that die down and tighten up by an hour after open. Stocks on the move typically have wider spreads because there is less confidence about what their "fair value" might be. As soon as both sides reach an equilibrium, spreads tighten and volatility dies down.
On your other point, even though he spreads his rebalance times across three points in the day, his daily turnover is still only around 88%, meaning he holds his positions for an average of 3 days. Seeing as he trades 367 different positions, it's unlikely to be a static universe.
@VH,
Well, I disagree on the ‘wide spread’ part. Higher volume = narrower spread in my experience. And vice versa, low volume = wider spread. This is why you have market makers and liquidity providers.
@Jamie,
Are you filtering for high dollar volume, or low dollar volume? If you’re filtering for low dollar volume, again I’d say your profits are more likely coming from unrealistic fills rather than actual alpha. Especially if it doesn’t scale well both in terms of number of stocks traded and capital deployed.
@Joakim,
I've got a dollar volume floor, below which I filter out. I probably agree that the profit is coming from unrealistic fills though.
@Joakim, Quantopian discourages trading near the open because they say the spreads are wider. That's their position. They encourage algorithms to rebalance one hour after open, since this provides the best liquidity and smallest spreads. Again, I'm just repeating what I've read many times on these forums written by Quantopian staff. It does align with my observations of spreads, that they typically increase with volatility/movement.
Ok, fair enough, I'll take your word for it. I should probably change my algos to rebalance 30-60 min after the open instead then.
If anyone knows what's causing spreads to be wide at/soon after the open, even when volume is high, I'd be all ears (since obviously I don't understand it). If you're a 'taker' why would you cross a wide spread rather than be the best bid or offer instead? With all the HFTs and algo trading these days, I would have thought price discovery would be quicker and more efficient.
Maybe it has something to do with the high amount of VWAP orders needing to get filled around the open, and HFTs, knowing this, wanting the spread to remain as wide as possible? I don't know, I'm just guessing.
He must have a starting capital of $1,000 to make a 140.3% annual return, or $1,406.57 in total dollar profits, trading 3 times a day with an average holding of 9 low-liquidity stocks. What more can I say but scale, scale and scale! Shock and awe!
A few facts about the strategy:
To prove my point that the algorithm scales, I've attached a screenshot of a backtest with $1,000,000. Couldn't be bothered to wait the several hours for the backtest to run fully :P
Thanks for sharing, Jamie!
I have similar ones that return really well for 10~30 stocks trading daily, every other day, or weekly - keep those gems as they are very "boutique" - as you should too!
Perhaps try this for fun:
import pandas as pd

f1 = my_alpha1_zscore.to_frame('factor')  # Say.. your "mean reversion" signal for ~10 stocks
f2 = my_alpha2_zscore.to_frame('factor')  # Say.. "complementary" signal that works when f1 alpha drops off

# Long/short counts in your ~10-stock core...
f1L, f1S = len(f1[f1.factor > 0]), len(f1[f1.factor < 0])
# ...and how many extra longs/shorts the complementary signal should contribute
f2L, f2S = len(f2[f2.factor > 0]) - f1L, len(f2[f2.factor < 0]) - f1S

# Complementary names not already covered by the core signal
fx = f2[~f2.index.isin(f1.index)].dropna()
fxLS = pd.concat([fx.nlargest(f2L, 'factor'), fx.nsmallest(f2S, 'factor')])

# Extended set that fills in when your mean-reversion alpha peters off:
alphas = pd.concat([f1, fxLS])
Have fun :o)
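To see the combination logic in action, here's a self-contained toy run (made-up tickers and z-scores; `my_alpha1_zscore`/`my_alpha2_zscore` are hypothetical stand-ins for your real signals):

```python
import pandas as pd

# Made-up z-scores: a small core signal and a broader complementary signal
my_alpha1_zscore = pd.Series({'AAA': 1.0, 'BBB': -0.5, 'CCC': 2.0})
my_alpha2_zscore = pd.Series({'AAA': 0.5, 'BBB': 1.2, 'CCC': -0.3,
                              'DDD': 2.1, 'EEE': -1.0, 'FFF': 0.8})

f1 = my_alpha1_zscore.to_frame('factor')
f2 = my_alpha2_zscore.to_frame('factor')

# Long/short counts in the core, then how many extras the second signal adds
f1L, f1S = len(f1[f1.factor > 0]), len(f1[f1.factor < 0])
f2L, f2S = len(f2[f2.factor > 0]) - f1L, len(f2[f2.factor < 0]) - f1S

# Complementary names not already covered by the core signal
fx = f2[~f2.index.isin(f1.index)].dropna()
fxLS = pd.concat([fx.nlargest(f2L, 'factor'), fx.nsmallest(f2S, 'factor')])

# Core AAA/BBB/CCC plus the strongest non-overlapping extras: DDD, FFF long, EEE short
alphas = pd.concat([f1, fxLS])
```

The extended book keeps the whole core and only adds names the core didn't already hold, which is what lets the complementary signal fill in as the mean-reversion alpha thins out.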
@Jamie,
"Couldn't be bothered to wait the several hours for the backtest to run fully :P"
No need to stick out your tongue... patience is a virtue, especially in research. We are just trying to help and give feedback and suggestions because you asked for it. I know this is a variation of an algo you shared in another thread that you deleted, and where I commented that your trading logic is sound except for what I believe was a disconnect between beta, which is calculated on daily frequency, and your trade triggers on minute frequency. Hopefully this algo was adjusted for that. Your top algo shown here with $1,000 capital would probably do well in a Robinhood trading account. But if you are looking for a fund allocation, I suggest you follow the guidance from Q, using the new backtest and passing its constraints and thresholds. As you've said above, your alpha drops to 12% when you increase trading to ~40 stocks; nothing wrong with that, as long as you have a good Sharpe ratio.
Dude, I was making a joke... it's hard to join a conversation on the forum if you have to wait for something to run for hours. Chill. I was going to share the full notebook once it finished.
I appreciate everyone who contributed meaningfully :D
This actually isn't a variation on the algorithm I shared in the forums. However, I still don't agree with you regarding daily vs intraday beta; perhaps you can elaborate? I'm still convinced that on average, the numbers will be roughly the same.