Backtest and associated algo to share.
@Andreas,
Good return, leverage under control, thanks for sharing.
Can you describe in simple words how your strategy chooses those 3 stocks for context.sec_trade_pool?
setupSP100() consists of the current S&P 100 constituents plus another 65 symbols; how did you choose them?
Most likely there is a bug in the setupSP100() list: both square brackets show up red.
@Vladimir
It's a very simple mean reversion approach. One chooses the stocks that have fallen the strongest; what matters is the time frame. Only the last hour of trading on the previous day is considered, so the shares with the worst performance in the last trading hour are bought the next day within the first 5 minutes after the open. The stocks in the universe are simply chosen by liquidity.
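The selection step described above can be sketched in plain pandas. Everything here is illustrative: the helper name, the toy prices, and the symbol list are assumptions, not Andreas's actual code.

```python
import numpy as np
import pandas as pd

def pick_losers(last_hour_prices: pd.DataFrame, n: int = 3) -> list:
    """Return the n symbols with the worst return over the
    previous day's last trading hour (hypothetical helper)."""
    # Per-symbol return from the first to the last minute of the hour.
    returns = last_hour_prices.iloc[-1] / last_hour_prices.iloc[0] - 1.0
    # nsmallest keeps the most negative (worst) performers.
    return list(returns.nsmallest(n).index)

# Toy minute data: 60 rows standing in for the last trading hour.
prices = pd.DataFrame({
    "AAPL": np.linspace(100, 100.5, 60),   # up ~0.5% over the hour
    "MSFT": np.linspace(50, 49.9, 60),     # down ~0.2%
    "XOM":  np.linspace(80, 78.0, 60),     # down ~2.5%, the clear loser
    "JPM":  np.linspace(120, 119.5, 60),   # down ~0.4%
})

print(pick_losers(prices, n=1))  # -> ['XOM']
```

The trade would then be to buy the returned symbols within the first 5 minutes of the next session.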
Andreas,
Some more questions:
One chooses the stocks that have fallen the strongest.
Why are profit2, z, z1, z2, z3, mean = new_sigma(context, profit[si]) needed?
Is it possible to simplify the process of choosing the most fallen stocks this way:
hist = data.history(context.sec_pool, "price", context.hist_length, "1m")
profit = (hist.iloc[-1] / hist.iloc[0] - 1.0).dropna()  # return over the window
profit.sort_values(ascending=False, inplace=True)       # best performers first
sec_trade_pool = profit.tail(context.max_trade_secs)    # keep the worst performers
context.sec_trade_pool = sec_trade_pool.index
Oh sorry, the code still contains some leftovers from earlier iterations, including long/short pairs and the new_sigma function. new_sigma in this case just calculates the sum of log minute returns over the last trading hour and exponentiates it. So yes, you can simply use your suggestion.
Here is a cleaner version.
Andreas,
One observation that might be worth looking at: if you remove the 5 stocks that provided the worst performance (negative PnL contribution) from 2010 on, the algo returns double over the same period and the drawdown reduces slightly (see the stocks commented out at the bottom).
This clearly introduces a forward-looking bias, so it is not a valid test, but it suggests that sometimes, for whatever reason, some stocks consistently don't respond in the manner anticipated by your logic. Plausibly some sort of tracking system that removes stocks from the universe if they consistently return a negative contribution could improve the outcomes. I will think about how to do this and post it if I sort anything out.
Adding a simple 50/200 trend filter on SPY reduced the returns but also cut the drawdown to something more reasonable, irrespective of the above; there may be a smarter way to achieve a similar outcome.
Attached backtest includes both concepts.
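A 50/200 trend filter of the sort mentioned can be sketched outside the Quantopian API like this. The function name, window lengths applied to daily closes, and the toy series are assumptions for illustration, not the attached backtest's code.

```python
import numpy as np
import pandas as pd

def trend_ok(spy_close: pd.Series) -> bool:
    """True when the 50-day SMA of SPY's close is above its
    200-day SMA, i.e. the market is in an uptrend and new
    long entries are allowed."""
    fast = spy_close.rolling(50).mean().iloc[-1]
    slow = spy_close.rolling(200).mean().iloc[-1]
    return bool(fast > slow)

# Toy data: 250 steadily rising daily closes keep the fast SMA on top.
up = pd.Series(np.linspace(100, 150, 250))
print(trend_ok(up))    # -> True
# A steadily falling series flips the filter off.
down = pd.Series(np.linspace(150, 100, 250))
print(trend_ok(down))  # -> False
```

In the strategy, the filter would simply gate the daily buys: skip new positions whenever it returns False.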
Great concept though.
Andreas,
Thank you very much for sharing. Do you live trade this algorithm?
I wanted to use Quantopian's paper trading service to trade this in real life. Unfortunately, Quantopian will no longer offer its live paper trading service starting in 2020.
Thanks,
Charles
Wow, this thing is fast...
@Andreas, have you tried this strategy with other universes like the Q500US or QTradeableStocks? Did you use a specific method when selecting those 100 SIDs or did you only keep top performing ones around?
Well, since the strategy requires highly liquid stocks, I just selected a few blue chip names. In practice there would be slippage, which strongly affects the outcome. So your ability to execute the trades efficiently is key. I don't have that ability.
Hey Andreas, I'm trying to port this over to my automated setup since Q is taking paper trading away, so we can't even use before_trading_start for any calculations. Can you explain what is happening in the profit calculation here:
def new_sigma(context, profit):
    lb = 0.0
    for i in range(0, len(profit)):
        lb = lb + np.log(profit[i] + 1.0)
    return np.exp(lb)
new_sigma just calculates the sum of log minute returns and exponentiates it. If len(profit) == 60, this is simply the compounded return of the last trading hour for a stock. Does this help?
Yes, that does help! Thank you.
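Numerically, summing log returns and exponentiating is the same as compounding, since exp(sum(log(1 + r_i))) = prod(1 + r_i). A quick stand-alone check, with new_sigma reproduced minus the unused context argument:

```python
import numpy as np

def new_sigma(profit):
    """Sum log minute returns and exponentiate: the compounded
    growth factor over the window (context argument dropped here)."""
    lb = 0.0
    for r in profit:
        lb += np.log(r + 1.0)
    return np.exp(lb)

# 60 toy minute returns standing in for the last trading hour.
returns = np.full(60, -0.0005)          # each minute down 0.05%
direct = np.prod(1.0 + returns)         # plain compounding
print(np.isclose(new_sigma(returns), direct))  # -> True
print(round(new_sigma(returns), 4))     # -> 0.9704, i.e. roughly -3%
```

Note the result is the growth factor (1 plus the net return), which is why values below 1.0 mark the fallers the strategy buys.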
Do you or anyone know how/where to get this data pre-market now that quantopian doesn't have paper trading?
Andreas,
Here is your original first algo backtest, with only line 28 commented out:
28 ### set_slippage(slippage.FixedSlippage(spread=0.00))
Can you comment on the results?
Well, as I said, slippage has a strong effect. The success of this strategy depends on your ability to buy and sell the selected stocks at the market open without significantly moving the price, so naturally the strategy has limited capacity. Using Quantopian's default slippage model, it seems to start deteriorating above roughly $10M. I think it's just interesting to see that price movements on shorter time scales, as in this case, are not as random as the efficient market hypothesis would have you believe; if I pick the stocks randomly, it doesn't work. Professionally I use different approaches, which can be scaled for larger portfolios and don't require a daily turnover of 100%.
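For intuition on why capacity is limited: a volume-share style slippage model caps the fill at a fraction of the bar's volume and charges a price impact that grows with the square of the volume share. This stand-alone sketch uses the commonly cited defaults (2.5% volume limit, 0.1 impact constant) as assumptions; it is an approximation for illustration, not Quantopian's actual implementation.

```python
def volume_share_impact(order_shares: float,
                        bar_volume: float,
                        price: float,
                        volume_limit: float = 0.025,
                        price_impact: float = 0.1) -> float:
    """Approximate per-share price impact of a buy, in dollars.
    The fill is capped at volume_limit of the bar's volume, and
    impact scales with the square of the filled volume share."""
    filled = min(order_shares, volume_limit * bar_volume)
    share = filled / bar_volume
    return price * price_impact * share ** 2

# A small order in a liquid name: impact is negligible.
small = volume_share_impact(1_000, 1_000_000, 100.0)
# A large order hitting the volume cap: impact is at its maximum.
big = volume_share_impact(100_000, 1_000_000, 100.0)
print(small, big)
```

The quadratic term is what kills the strategy as capital grows: ordering 100x more shares in the same bar costs far more than 100x the impact, which is consistent with returns deteriorating past some portfolio size.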