The SPY who loved WVF (or Just another Volatility SPY strategy)

Yes I cannot help myself: we must be able to conquer this beast called Vola

Economic thesis, loosely defined:
1) there is information in the synthetic VIX versus the real VIX or VXX,
2) the Volatility of the VXX says something about the underlying and therefore the real market (Kory)
3) by assigning a low amount of capital to a high volatility strategy you prevent being taken to the cleaners but profit from upside (Alvarez)
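For anyone who has not met the "synthetic VIX" of point 1: the WVF (Williams VIX Fix) is usually computed from price data alone. Here is a minimal pandas sketch; the 22-bar lookback is the commonly published default, not necessarily what this algo uses, and the toy series is just to show the shape of the output.

```python
import pandas as pd

def williams_vix_fix(close: pd.Series, low: pd.Series, lookback: int = 22) -> pd.Series:
    """WVF: percentage distance of the bar's low below the highest close
    of the last `lookback` bars, a price-only stand-in for the VIX."""
    highest_close = close.rolling(window=lookback).max()
    return (highest_close - low) / highest_close * 100.0

# toy series to illustrate; real use would feed daily close/low history
close = pd.Series([100.0, 102.0, 101.0, 105.0, 103.0, 98.0, 97.0])
low = pd.Series([99.0, 100.0, 100.0, 103.0, 100.0, 95.0, 94.0])
wvf = williams_vix_fix(close, low, lookback=3)
```

A high WVF reading means price is stretched far below its recent high, i.e. elevated fear, which is what the strategies in this thread trade against.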

Thanks to Kory and Cezar Alvarez, I present a strategy that one could trade, but looking at the tear sheet and the equity curve, I suspect that this strategy, in its current form and in the current market, does not perform.

For equity curve jockeys: none of the parameters are optimised, the use of cash is not optimised
For Contest Jockeys: easy to isolate the Alpha with hedging, I can't be bothered with the contests

Further dev:
Alvarez has some more "anti-cleaners" strategies that I didn't implement, one could test them out
Pull in VIX and see whether the VIX and WVF can play nicely

Disclaimer:
I'm not trading this strategy; it needs more time and trading guards, and I would wait to see whether the strategy goes back to older levels of volatility, as the DD in the last 2 years is double that of the years before.

266 responses

Hi Peter,
Just have to say, your previous volatility algo was so precious to me! I have implemented it in IB and it has been great!
Why do you think this isn't suitable for live trading? The rationale I see here is very similar to JAVOLS'.

You could trade it; I haven't put enough time into it to say it's robust. I thought I'd release it to the community and see how others can improve it. From what I see in the tear sheet, it looks like the behavior has changed from 2014 to 2016. There is a big difference in returns and beta, and that worries me a bit. With a few more guard rails and better cash usage it could be traded, but be careful...

Peter, I borrowed your code and put this together, and I got this. It is looking good, so I wanted to give it a run in IB. I'm so new to Python and Q, so I hope you don't mind.

Here is the tear sheet for it.

The equity curve of the modified algo looks really nice. What are the main logic differences between the modified and original algos?

looks good indeed, I'll play with it and see how I can break it ;)

Peter, my observation is the same as the note you put at the top of the MVP algo. As with MVP, return is a function of the lookback period (1-20) and eps_factor. I was able to optimize these parameters (over every 2-month period) in AmiBroker, and it delivers good results. I'm not sure how I can do it in Python (since I'm so new at this). Also, here is a version with an aggressive tilt, using your vol algo to control the portion of XIV and to increase or decrease the portions of SPY and EDV.

Wow, this is awesome. Thank you Peter for starting this thread.

Cesar and I actually live near each other and I recently presented my volatility trading strategies at a Northwest Traders & Technical Analysts meeting (Cesar attended as well). I've shared my volatility strategies with Cesar and we're both working on improving them. Here's our Meetup group if anyone in the Seattle area is interested in joining: http://www.meetup.com/Northwest-Traders-and-Technical-Analysts-NWTTA/

Please message or email me in private if you would like to collaborate with me on coding volatility strategies. I'm a TradeStation user, and all my strategies are coded in TradeStation's EasyLanguage. I would love to have them ported over to Python for Quantopian.

Currently I have 5 different strategies that strictly trade VXX and XIV:
- Three of them are long XIV only (50-80% CAGR and 15-35% Max DD)
- One is short VXX only, similar to the ones presented here but has more variables (60% CAGR and 15% Max DD; http://imgur.com/a/w4blx)
- One is long VXX only (15% CAGR and 10% Max DD)

I have everything coded in EasyLanguage so it would be pretty easy for a programmer to port everything over. Unfortunately I'm not a programmer.

Email: [email protected]

The only issue is that the last algo does have some leverage. Any way to ensure there is no negative cash, for Robinhood trading?

A small remark: if you want to trade the last algo live, this line should be changed:

:135: FutureWarning: pd.rolling_max is deprecated for Series
and will be removed in a future version, replace with
Series.rolling(window=28,center=False).max()

not plug and play - you need to define the series.

Is there any way to get help with these changes? Like examples?
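Since examples were asked for: the fix is mechanical. Whatever Series the algo builds at line 135 (here a stand-in called `prices`, an assumption for illustration) gets the rolling max via the method form instead of the deprecated module-level function:

```python
import pandas as pd

# stand-in for the price history the algo fetches; any numeric Series works
prices = pd.Series(range(1, 41), dtype=float)

# old, deprecated call (triggers the FutureWarning quoted above):
#   rolled = pd.rolling_max(prices, window=28)

# replacement: call .rolling() on the Series itself, then .max()
rolled = prices.rolling(window=28, center=False).max()
```

The first 27 values are NaN because the 28-bar window is not yet full, same as with the old function.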

For anyone considering live trading NHAT NGUYEN's algo, keep in mind that it's highly dependent on the day of the week; this is a form of overfitting, and there's no guarantee that its performance in the future will be as good.

@Luke, you are right, "there's no guarantee that its performance in the future will be as good". I'm using the idea from Grant's algo (which is MVP) and did a conversion to lite-C (the Zorro platform), where I can do optimization and walk-forward analysis. AmiBroker gave almost the same volatility but about 2% less return. These are the best parameters, which deliver the highest return. I did 10 WFA cycles, so I think the parameters are stable for the next couple of years; the algo will perform close to this, but the past is not the future, which is unknown. For live trading, we may need some nice Python expert in the forum to iron out some bugs in the order management, trade tracking, and money management parts. Or lower the allocation and watch it like a hawk for a couple of months.

Just a word of caution: the algos in this thread so far all use a fixed value for the WVF trigger. That might potentially be dangerous in the future. I don't use a fixed trigger in my VXX/XIV algo but I use a variable trigger that's an EMA of the WVF.

Kory, that's a great idea. If you don't mind, could you share a code snippet showing how you get the EMA of the WVF? If I tried, I think I could get a value, but it wouldn't be a true historical EMA, only an EMA of the WVF as far as it shows up in my algo using .append(WVF). This wouldn't work well for live trading until some time had passed.
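One possible way around the warm-up problem: instead of appending live WVF values, recompute the WVF over the full daily price history each day and take the EMA of that series with pandas' `ewm`. A sketch, where the 22-bar lookback and the 14-day EMA span are assumptions, not necessarily Kory's actual settings:

```python
import pandas as pd

def wvf_ema_trigger(close: pd.Series, low: pd.Series,
                    lookback: int = 22, span: int = 14) -> float:
    """Compute the WVF over the full history, then return the latest EMA of it,
    usable as a variable trigger level instead of a fixed value like 14."""
    highest_close = close.rolling(window=lookback).max()
    wvf = (highest_close - low) / highest_close * 100.0
    return float(wvf.ewm(span=span, adjust=False).mean().iloc[-1])

# with a full daily history, the EMA is "true" from day one of live trading
close = pd.Series([100.0] * 30)
low = pd.Series([95.0] * 30)
trigger = wvf_ema_trigger(close, low)
```

Because the history is refetched each day, the trigger needs no live warm-up period.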

Here is the correction for the deprecation warning at line #135:
:135: FutureWarning: pd.rolling_max is deprecated for Series and will be removed in a future version, replace with
Series.rolling(window=28,center=False).max()

I would like to say that it seems nobody has done a backtest covering the 2008 period, when there was a big crash because of Lehman. If you do a backtest covering this period, you will find the returns are quite different.

Some of the ETFs used in this algo have no data before 2011. This could be the reason why no backtest covering 2008 has been done.

very insightful Mr Chang

Given "Some of the ETFs used in this algo have no data before 2011", it follows that "nobody was able to do a backtest which covers 2008."
This is already a true statement. What are you saying?

Maybe - put another way - the concern is that the specifics of this strategy dictate that the backtesting and subsequent walk-forward analysis have only been done in the relatively known and predictable post-2011 market conditions. That is certainly a concern I would have prior to launching a long-term strategy: how would it have performed during a major market downturn, when correlations between the ETFs it uses change dramatically?

My thought? If you need absolute certainty, the stock market isn't a good place to play.

My first objective was to reduce the volatility of the overall portfolio when I combined these 2 strategies, and that was accomplished.
Second objective: optimize and get the backtest results so I have a relative reference point to work with when moving forward (that was the stage at which I published the backtest).
Third objective: given that objectives 1 and 2 are met and the return is slightly above the benchmark, can I add another strategy to further reduce beta, increase alpha, and reduce volatility? (This is the current state that I'm in.)
Fourth objective: if 1, 2, and 3 are met, move on to asset-allocation model selection and various optimization techniques (parameters, time reference points, etc.) for the overall portfolio and all the strategies combined.

That is usually my route for designing a new portfolio.

My question to you is: why do you want to put your money at risk when you are clearly not putting in enough time to understand how things work?
"This game is long, hard, and scary. Risk + uncertainty = probability (pulling your hair out + lack of sleep) = 95%."

@Eric:

Absolute certainty you can never get. Maybe God could give it to you. :-)

But then what is the point of doing backtesting?

@Nguyen,

You wrote a lot here. Why not simply backtest your algo in a bear phase? For example, include the years 2008 and 2009.

In 2008 there was a big crash because of Lehman. After that, or more exactly since 2011, the market has gone up continuously. It's a bull market. Why not backtest your algo in a bear market?

@Thomas

Everyone trades differently. Different strategies. Different time frames. I'm a very (very) short term trader, so if it works this year, it's good enough for me to use. I don't hold any positions overnight.

I'm aware that because of this my opinion in the matter is moot to most if not all of you guys in the forum because you trade much longer time frames. But since I was hanging around I just threw my hat into the ring.

Also, asking why no one backtests this prior to 2011 almost seems like a rhetorical question, as some of the components didn't exist before then. This too, is moot.

This is Peter's thread, so I don't want to post something which doesn't add any value here. This will be my last post here.

For Mr. Chang:
First, these are not my algos; I saw some great things in Grant's and Peter's algos, so I tried to build something out of them to see how much volatility it could reduce.
Second, I think your logic has serious problems. Let me point it out for you.
Two of your posts ask the same question, and both of your questions are already answered by yourself.
You said "Some of the ETFs used in this algo have no data before 2011". This is a true statement, because the XIV and TMF inception dates are 2009 and 2011, and this is your conclusion: "nobody was able to do a backtest which covers 2008".
So your opening statement is TRUE <=> your conclusion is also TRUE. (You see, Mr. Chang, your opening statement and your conclusion can go both ways, backward and forward.)
Is that right, Mr. Chang? Can you see it?
Good luck

Could someone explain to me the basis of the algo? I am new to quant and would like to understand what the parameters are and the rationale behind this algo.

Sorry for being a NOOB

Here is my understanding of this strategy:
Function “Allocate”: the strategy allocates between 5 ETFs, SSO (2x S&P), TMF (3x 20-year Treasury), IJR (small caps), IJH (mid caps) and XIV (inverse volatility), based on the highest momentum over the last 6,630 minutes (perhaps 17 trading days of 390 minutes per session). Not sure why this number is used?
Function “allocVol”: closes the XIV position when the WVF index (based on VXX) crosses below 14, but does nothing while the WVF simply stays under 14.
Function “allocSPY”: opens an SSO position if the WVF on SPY is uptrending, and closes the SSO position and opens an EDV (long-term Treasuries) position if SPY is in a downtrend.
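The momentum ranking in the "Allocate" step can be illustrated with plain pandas. This is a sketch under the assumption that momentum is simply the percent return over the lookback window; the original algo may weight or compute it differently, and the toy prices are invented:

```python
import pandas as pd

def rank_by_momentum(price_history: pd.DataFrame) -> pd.Series:
    """Percent return over the whole window, strongest asset first."""
    momentum = price_history.iloc[-1] / price_history.iloc[0] - 1.0
    return momentum.sort_values(ascending=False)

# toy 3-bar history for three of the ETFs the post names
prices = pd.DataFrame({
    "SSO": [100.0, 104.0, 110.0],
    "TMF": [100.0, 101.0, 99.0],
    "XIV": [100.0, 108.0, 120.0],
})
ranked = rank_by_momentum(prices)
# ranked.index[0] is the asset the allocator would favor
```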

Also, if someone considers trading this strategy, I would consider replacing EDV with a short S&P ETF (e.g. SDS) and TMF with TLT (due to lower spreads and higher liquidity), because the algo over-allocates to Treasuries, and the reliance on Treasuries as a safe-haven asset might be wrong during a tightening economic cycle.

Here is the version that uses a short S&P ETF (SDS) instead of EDV, and TLT instead of TMF. It should be a safer bet in case of a bond bear market.

Maxim, that is a good idea; I can't wait to play with it. I'm working on v3 of this algo and trying to put the pieces together in C. It is quite interesting, and it looks promising, showing low drawdown with a high Sharpe. Here is an idea for you to play with: get one more strategy that plays the stock market, like value investing or momentum, rebalancing monthly, and allocate about 40% to it. Keep the MV algo, which drives the rest of the allocation and also drives the whole portfolio when the stock market is in trouble.
If the market is in trouble, put all of the stock strategy's allocation into one of the safe-haven assets and let the MV algo allocate it weekly until the stock market is tradeable again.
My idea is having a diversified portfolio of strategies (between 2 and 4) with continuous allocation between assets. Safer and better returns.
Have fun and good luck.

Very good improvement indeed, let me play with it further as well!

PB

Hi Maxim,

It's really well done!

But a question: after I ran the backtest, I found in "Transaction Details" that it happens very often that an ETF is bought but sold again several hours later, on the same day. Is this what you want?

Cheers

Thank you guys, I am really glad that I could make a little contribution to this thread.
@Nguyen. Thank you for your suggestion on how to combine the strategies and use the MVP algo to drive the allocations between them. I will be investigating it…
@Thomas, you are right that the algo can open and close the positions in the same ETFs during the day.
It happens because there are 2 different processes (functions) used to place the orders for SPY (SSO and SDS) and XIV:
1. At 10:30 AM, the “allocate” function (based on scipy.optimize.minimize) allocates capital to the SSO and XIV positions.
2. At 15:45, the “allocVol” function closes the XIV position if the WVF (based on VXX) crosses 14.
3. At 15:45, the “allocSpy” function opens/closes SSO or SDS based on the WVF (based on SPY).
In my view, this is a weakness because it leads to overtrading and conflicting signals. The possible improvements are combining the 2 signals (WVF and regression/optimization), or using only one of the two signals for signal generation.

Would this algo still be valid if i were to set long_only as my trading settings?

@Seungmin: it should work long-only; it doesn't short anything, but uses short ETFs instead.

My contribution:
Change the TMF to the TLT only and get even better!!!

Returns: 494.2%
Alpha: 0.28
Beta: 0.31
Sharpe: 2.00
Drawdown: -10.8%

@Michele: Your contribution to Peter's original code or to Maxim's? I found that both of them use TLT rather than TMF. Or maybe you meant to write "change TLT to TMF"? :-)

Cheers

I must apologize for my poor English.
I mean, instead of changing both ETFs (EDV and TMF), change only TMF to TLT in Maxim's last algo, and you get a Sharpe of 2.

The issue still remains that there is a fixed value for the WVF trigger, using the best number historically from 2011 forward. That seems like overfitting, as this number can change walking forward.

I could change it, but remember volatility is not a stock price or the like; if the volatility structure changes permanently, the very base assumptions in the market must have changed as well. There could be periods where we have elevated or depressed volatility, but a high VIX/WVF value to revert from is pretty safe in my mind. I haven't found an adaptive measure that improves the algo.

First thanks to Peter & Nguyen and everyone else for sharing your ideas. I loved the performance & have spent some time experimenting. Here's my experimentation observations:

  • In a scenario similar to 2008, the algo will probably lose money; you just have to stay alert. I tried to simulate this by trading a portfolio that didn't include XIV and had TLT instead of TMF (a poor man's simulation), and that portfolio was losing big in 2008. Nevertheless, I think omitting XIV for that period would not materially impact the simulation's validity. Next, I built a bullish-market filter that only invested when SPY was above either its 100- or 200-day SMA. It protected against the 2008 market, but it impacted total performance. Even when I ran the full version (with the XIV & TMF ETFs) past 2011, the overall performance was impacted and the 2016 YTD performance was terrible. So I decided against it and removed the SPY bullish filter.
  • The allocSPY function uses EDV in some algo versions and SDS in others. In my experiments, using EDV performed better in momentum markets and using SH performed better in sideways markets. Currently I am using a linear-regression filter to decide whether the market is in momentum or is choppy. Momentum means that the R-squared of a linear regression over 126 days of SPY is above 70%. The 70% is a result of instinct and very limited experimentation.
  • I chose SH instead of SDS for allocSPY because the algo appears to stay in SDS for long periods, and SDS has terrible decay. It appears to be in SSO less often, so I didn't spend any time on that.
  • I am allocating only 20% to the allocSPY scheme, leaving 10% in cash. In backtesting, the code seems to go into leverage above 1.0 as a result of partial executions. It would perform better with the original 30% allocation, but as a first step I want to make sure I don't go into margin, and this seemed the easiest way to do it programmatically. I also moved the spyAlloc 15 minutes earlier to allow more time for execution.
  • I added a filter so the daily allocSPY rebalance only executes trades when they exceed a threshold; in the included test it is set to $500. This removes a lot of the small 1- and 2-share rebalance transactions for a small-capital test.
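The R-squared momentum filter and the minimum-order-size filter described above might look like the following in plain NumPy. The 126-day window, the 70% cutoff, and the $500 threshold come from the post; everything else (function names, the degree-1 fit) is an assumption about the implementation:

```python
import numpy as np

def is_momentum_market(closes: np.ndarray, r2_cutoff: float = 0.70) -> bool:
    """Regress price on time over the window; R^2 above the cutoff means trending."""
    t = np.arange(len(closes))
    slope, intercept = np.polyfit(t, closes, 1)  # degree-1 least-squares fit
    fitted = slope * t + intercept
    ss_res = np.sum((closes - fitted) ** 2)      # unexplained variation
    ss_tot = np.sum((closes - closes.mean()) ** 2)
    return bool(1.0 - ss_res / ss_tot > r2_cutoff)

def passes_trade_filter(order_value_dollars: float, min_dollars: float = 500.0) -> bool:
    """Skip the tiny 1- and 2-share rebalance orders."""
    return abs(order_value_dollars) >= min_dollars
```

A steadily rising 126-day window passes the momentum test, while a flat, choppy window fails it.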

I would be very interested if anyone has any analysis on how events like Brexit, US elections, FED meetings impact the performance of the algo.

Cheers

Anybody trading this live? I added it to Robinhood, and it seems something is wrong: yesterday it sent an order for TLT only, and not TMF and XIV.

I'm trading a variant of this live in IB. As there are so many versions here, which version did you put live?

Using Maxim's version. Actually, I think it is behaving fine; it matched the orders. I just thought it was going to sync the current positions, but it just waits for the orders to add up. Also, for 2008: if there isn't a bond-market collapse and a stock collapse at the same exact time, wouldn't this perform well, since it would jump into bonds?

@Takis
Did you try lowering the MA period, say to 20 or 50?

I am also using Maxim's latest version in live trading. Since 7th November it has made 11.39%, so I am very happy. Most of the profit has come from XIV positions, which I assume was the original intention. A Sharpe ratio of 5.27 and a max drawdown of 2% is impressive.

Anybody else try to rework this to test back through 2008? I guess even a 40% DD wouldn't be bad with something like 50% CAGR and the low DD during normal market conditions.

Hi Elsid,

See the post above. I've asked the similar question before. But it seems my question was taken as "stupid". :-)

Hi Elsid,

Is there any argument to support your "guess"? :-)

Hey Thomas, I completely messed around with all the symbols, turning them into SPY and TLT as a "short", so I probably messed something up, given that TLT isn't a short anyway. I thought it would perform well given that bonds rallied and TLT was the best-performing ETF at the time, but with that change the drawdowns were about 40% or so, just as much as the market's.

Given that in good years like the recent ones the DD has not been more than about 15%, I guess you can kind of gauge an '08-type scenario by stopping the algo at a 20-25% DD; then you know something might be up. Given that it has had an amazing DD over the last 7 years, through all the craziness of flash crashes, Brexit, etc., it seems pretty robust and might possibly have held up in '08.

But somebody needs to rework the algo, even just as a backtest, maybe using funds like TLT and SPY, and instead of long XIV go short SPY, just to see how it would have held up in '08.

"I guess you can kind of gauge an '08-type scenario by stopping the algo at a 20-25% DD; then you know something might be up." Yes, this is one way to do it; I would take it offline when it reaches 15% DD, which is my tolerance level. I have been running the first version I posted, but rebalancing every 2 weeks, and the performance is outstanding (since October).
Since November I have put 2 variations that include a value/growth-stocks algo in my IRAs, and I have never been happier with the performance.
In fact, all my funds are running this algo with different risk parameters (different allocations).

Good luck.

@Mark Varney, I agree it is doing quite well; my return is a little bit higher for that version. Just sit back, enjoy the ride, and wait for some abnormal event, then react.

For those trading this live, watch your leverage. Running a PvR routine reveals unforeseen leverage in the backtest; i.e., where there are sells and buys on the same day, you may encounter leverage.

We need to build in a check for available cash to purchase; otherwise the algo, as of today, will issue a buy as soon as the sell is issued and put the account into margin.

@Nhat

Where is the code for rebalancing? Also, have you noticed a performance gain over the original with the 2-week rebalancing? And about how long does it take for all the positions to catch up and match the algo, given that it doesn't sync current positions?

1. "Where is the code for rebalancing?" I do not have it on this platform; I use a different trading platform, and it is in C. But you can add it yourself with an if statement, or someone in the forum will help you.
2. "Have you noticed a performance gain from the original with the 2-week rebalancing?" Slightly better for this period (I do not have a backtest and stats to conclude that it will perform better in the long run). 2 weeks because XIV keeps up-trending, so I get into XIV cheaper and hold longer, giving slightly better results.
3. "About how long does it take for all the positions to catch up and match the algo, given that it doesn't sync current positions?" I do not understand the question.

Any ideas on how to ensure the sells happen before the buys? I ran a backtest from Nov 7th and so far the cash hasn't run out for most, but trust me you will encounter negative cash during a rebalance.
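One way to guarantee sells happen before buys, sketched in plain Python outside the Quantopian API (which would submit the two phases in separate scheduled slots, checking open orders in between): split the rebalance into sell deltas and buy deltas, submit all the sells first, and only release the buys once the sells have filled and freed up cash. The symbols and share counts below are invented for illustration.

```python
def split_rebalance(current_shares: dict, target_shares: dict):
    """Return (sells, buys) as share deltas keyed by symbol.
    Phase 1 submits the sells; phase 2, run later, submits the buys."""
    symbols = set(current_shares) | set(target_shares)
    sells, buys = {}, {}
    for sym in symbols:
        delta = target_shares.get(sym, 0) - current_shares.get(sym, 0)
        if delta < 0:
            sells[sym] = delta   # negative delta: shares to sell
        elif delta > 0:
            buys[sym] = delta    # positive delta: shares to buy
    return sells, buys

current = {"XIV": 100, "TLT": 0}
target = {"XIV": 0, "TLT": 50}
sells, buys = split_rebalance(current, target)
# phase 1: submit everything in `sells`
# phase 2 (a later schedule slot, after fills): submit everything in `buys`
```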

How long does the negative cash balance last? Permanently, or until everything gets rebalanced?

In the versions with both Allocate and allocSPY functions I notice that it buys SSO in the morning (Allocate function) and closes the position in the afternoon (allocSPY function). Is that by design?

Cash low hits the day of the rebalance and slowly claws back throughout the day while the other positions sell

I guess that would be an issue with Robinhood if you aren't subscribed to Robinhood Gold and don't have margin. Even then, I don't know how big an account I would hold with Robinhood given their financial stability; I guess the safest bet is to trade on IB.

Hopefully you have above $25,000 in margin. The cash low grows as the algo makes more money, since more shares are being rebalanced.

I think a permanent fix would be to ensure the sells happen before the buys, which I also believe will significantly reduce the gains the last 3 algos generate.

Yeah, I just had Robinhood reject my XIV order. So how much margin do you need, just $25,000? Or pretty much double the account? Pretty much every broker gives you that, so you shouldn't really run into problems. During the backtests I only see it go above 1 every so often, and only to like 1.01 or 1.03.

I found that the recorded leverage takes the average of the day, or the leverage at the close of the day and ultimately the week.

The PvR routine captures the cash low/max leverage as of the minute. It does not specifically double the account I just saw at one point the cash low was -$25,000 and I stopped the backtest.

@Tyler, yeah that shouldn't be an issue at least for myself, also what was the account size when you saw that leverage number?

Actually, thinking further about this, the strategy would probably have performed well in '08 too, maybe with a bit larger drawdowns. I think Takis's poor man's backtest for '08 is flawed, because the allocVol function uses VXX/ZIV to calculate the WVF; unless he fed it custom data, there is no way to get this data prior to the funds' inception dates.

But given the significant events we have had since '08 (flash crashes, Brexit, the euro crisis), and given that it takes a high exposure to bonds, I can see it performing well in '08 too. I guess the only time you would have significant losses would be if the VIX spiked and bonds crashed at the same time. Also, in 2011 the VIX spiked to about 50 and this algo only had a DD of about 13%, so for '08, when it spiked to around 80-90, if you double the DD (which probably wouldn't happen) you would still be looking at 28% or so, which is still great for about 50% annual returns.

The algo above is some good code, but the leverage goes to 1.43, so the returns are lower than they appear.
Try https://www.quantopian.com/posts/max-intraday-leverage to keep an eye on that; it will give you a better chance of doing the best possible.

I looked into the leverage and the overtrading, and in Takis Mercouris's Nov 20, 2016 version I still see a lot of overtrading: SSO gets opened and closed on the same day, which can be interpreted as sponsoring Interactive Brokers or other trading partners. The performance would increase if one published that version without the double trading; probably the signal or trade generation should be handled in one function, so conflicting signals can be weeded out.

Thank you all for sharing and adding to what looks like a solid system. Is there an explanation of the system anywhere?
Regards,
Douglas

Here is the backtest with the PvR routine: max leverage 1.54, cash low reaches -$37,577.

This is the algo from Nov 3rd 2016

After digging in a little further, it looks like it is only due to the allocation code. Is there a way to ensure the sells in the allocation are performed before the buys? It looks like all the purchases happen before XIV is sold.

Yeah, I think the allocation code needs to be reworked a bit, especially for brokers that charge commissions (for example, all the 1-share orders). It would also be great to use it with Robinhood without running into margin issues.

I added an EMA (seeded with the original value of 14) for the WVF limit. The source for this one is Takis Mercouris's version from Nov 19.

Question for those running versions of this in Robinhood: How are you avoiding free-riding on unsettled cash? From what I read in the docs it doesn't look like Robinhood's margin (Gold) accounts are fully supported.

Disclaimer: I've never written anything in python and this is also my first exposure

@Doug

Just messed around with your algo a bit; the past ones perform much better with slightly lower DD. I don't know if it's the EMA or the slight leverage. Anyway, I think the focus for this algo should be to somehow get synthetic data loaded to backtest it back past '08, and even farther if possible, and to fix some of the order logic.

Quick question, given that I'm new here: if I change the code and redeploy the algo with current open positions, I'm assuming the algo will be able to pick them up and rebalance, correct?

@Elsid -- I'm live trading a version of this algo at IB, and sometimes I'll stop/start the algo when I add more funds. Otherwise the metrics on the website are off, FWIW.

It seemed to have no problem rebalancing with the new account value. I have only done this once, but it worked as expected.

Did anyone come up with a solution to eliminate those pesky 1 share orders? The algo also sometimes tries to order more than the available cash balance. I've been manually handling it for now.

@John

Hey John, do you use a 2nd IB login to login into your account and check up on it?

That's good to know. Also, when it goes into leverage, is it permanent or temporary while it's rebalancing?

Yeah, those 1-share orders eat up like $2 in commissions. I have to check IB's commissions and see the options again; I remember there was a per-share rate too, without that $1 minimum.

Thanks

@Elsid -- yup, I have a login specifically for the algo and another for manual use.

Since my account is an IRA - no real margin - the algo can't go beyond leverage of 1. IB will reject the order.

I just looked at my account statement, and those 1 share trades ate up about $15 in commissions over the course of a month.

I don't remember exactly, but I thought there was $1 per trade minimum, regardless of number of shares traded.

In another algo I run, I have it filter trades based on either trade size (in dollars) or as a per cent of the overall account, e.g., don't execute the trade if the trade size < $X or less than X% of the account size. I haven't put that code in this algo yet. Probably should.
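John's filter above can be sketched as follows; the combined dollar-and-percent condition is from his description, while the $500 and 0.5% defaults are placeholders, not his actual settings:

```python
def worth_trading(trade_value: float, account_value: float,
                  min_dollars: float = 500.0, min_pct: float = 0.005) -> bool:
    """Execute only when the trade is at least min_dollars AND at least
    min_pct of the account; otherwise skip it and save the commissions
    those 1-share rebalance orders rack up."""
    size = abs(trade_value)
    return size >= min_dollars and size >= min_pct * account_value
```

Dropped into the rebalance loop, this would suppress both tiny-dollar trades and trades that are negligible relative to the account size.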

I think there were 2 options: a dollar minimum, or per-share without a dollar minimum, but then you would end up paying higher commissions on smaller orders. I have to double-check.

Also, please post the order-size code when you get a chance. I guess if we fix the order logic we could trade on Robinhood and not worry about commissions. I just don't trust Robinhood from a financial standpoint; they could go bust. And even assuming you trust SIPC, I remember reading that technically there was some wording that would make pretty much everyone unable to get anything from SIPC the way it's worded.

So I'd rather put somewhat more trust in IB, given the strict enforcement of margin and the risk they take.

I've been tinkering with this algo for the past week and have tried to fix some of the issues I encountered along the way. I am relatively new to Python and quantitative/algorithmic trading so please be easy on me:)

Here's the main issues I found:
1. Two of the allocation methods operate on the same symbol (SSO) and can step on each other. For example, "allocate" could give SSO 50% of the portfolio, only to have that adjusted to 30% later in the day by "allocSpy", causing leverage issues or resulting in the portfolio not being fully invested.
2. "allocSpy" rebalances every day, often in 1- or 2-share orders, while "allocate" rebalances once a week. This can cause the leverage to get out of whack, since "allocSpy" uses a hardcoded value instead of the actual portion of the portfolio it represents. It also generates a lot of extra orders.
3. The majority of end-of-day over-leverage issues were caused by using EDV, which would consistently not fill orders.

Changes I made:
1. Have "allocate" work with 100% of the portfolio
2. "allocSpy" only triggers orders if bullish/bearish spy positions sized by "allocate" should be swapped. Simply swaps the sizes of the positions.
3. "allocSpy" and "allocVol" don't execute on weekly rebalance day.
4. Upped everything to 3x-leveraged ETFs except for the small-cap, since I couldn't find one that consistently filled. Going with the higher-leveraged ETFs didn't really change the risk metrics much versus non-leveraged ETFs.

The overall result has fairly similar or better Sharpe/Sortino ratios compared to the previous algorithms. The volatility is higher, as is the max DD, but not astronomically so. The one glaring difference is that beta is near one, which is much higher than the previous algos. It also cuts total trades by about 700 over the six-year backtest period. There are four partial fills that cause end-of-day leverage to go above one, but those are corrected by the next weekly rebalance at the latest.
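Change 2 above (swapping the bullish/bearish SPY position sizes rather than re-sizing them) might be sketched like this; the weight dictionary and the default symbols are illustrative assumptions, not the actual implementation:

```python
def swap_spy_exposure(weights: dict, bullish: str = "SSO", bearish: str = "SDS",
                      go_bullish: bool = True) -> dict:
    """Move whatever weight sits in the wrong-direction SPY ETF into the
    right-direction one, leaving the total SPY-linked allocation chosen
    by 'allocate' unchanged (so leverage cannot drift)."""
    w = dict(weights)
    total = w.get(bullish, 0.0) + w.get(bearish, 0.0)
    if go_bullish:
        w[bullish], w[bearish] = total, 0.0
    else:
        w[bullish], w[bearish] = 0.0, total
    return w

# e.g. the trend signal flips bullish while 30% sits in SDS:
w = swap_spy_exposure({"SSO": 0.0, "SDS": 0.3, "XIV": 0.2}, go_bullish=True)
```

Because only the split between the two SPY ETFs changes, the weekly "allocate" sizing is never stepped on.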

@John I'm thinking of opening a Roth IRA with IB and running this algo on a portion of it and was hoping you could answer a few questions since it sounds like you're doing something similar:
1. Are there any restrictions on buying leveraged etfs such as TMF or UPRO?
2. Are wash sales not an issue due to the tax-free nature of an IRA?
3. Is the main benefit of a margin account that you get the ability to use sales proceeds immediately?

Hi Caleb --

To answer your questions --

  1. Are there any restrictions on buying leveraged etfs such as TMF or UPRO?

None that I know of. I trade UVXY, SSO, SDS, etc without complaint. IB will send you a notice letting you know you are trading a leveraged vehicle.

  1. Are wash sales not an issue due to the tax-free nature of an IRA?

Not a tax expert, but as I understand it, wash sales only pertain to your taxable account. IRAs get involved if you trade "substantially the same" security in both taxable and non-taxable accounts. Then, if you buy a security in the IRA that you sold in the taxable account, the wash sales rule applies to your taxable account. One way to avoid it is to not trade the same security in both accounts.

  1. Is main benefit of a margin account you get the ability to use sales proceeds immediately?

Yes. With a limited-margin IRA, you avoid the "free-riding" restriction in non-margin IRAs. Otherwise, you have to wait out the 3-day settlement period. You still can't exceed a leverage of 1, though.

We'll have to look further into why your modded algo now has a beta of 1. Drawdown at 25% is a bit rough for me as well. In roughly the same time period, the version I'm running has a beta of close to 0 and a drawdown of 10%, but far less return than your version. More like 380% vs your 2800%.

Hi John

Thank you very much for the info.

Regarding the beta issue, my thought is some of it has to do with the previous versions stepping on their SPY allocations and taking them from high values down to 0.3. Since the algo spends a lot of time in whatever SPY ETF you use, this could be significant. Also using the 3x ETFs could be a factor :)

Here's a version that doesn't use leveraged etfs and only uses 70% of the portfolio. Returns and draw down more in line with your version and around .5 beta.

Guys, just reworking some of the ETFs, such as using TQQQ and removing the 3x small cap (the biggest contributor to DD), you get this:

4750% and around 15% DD

Someone needs to verify this code because results seem mind blowing with this DD

Enjoy : )

My only concern with this algo, for future-proofing it, is that a tremendous amount of the gains come from Treasuries. If we get into a bond bear market, the DD might skyrocket and performance suffer tremendously. I have tried playing around with using inverse ETFs instead of Treasuries, but it still doesn't compare.

It would be nice if someone did some research, or even messed around with the strategy, to get it to comparable performance/DD levels with just going inverse on the underlying long securities. Even if it performs a fourth of the above, at 1000%, that's still around 50% annual returns with low DD.

Hi guys,

Returns above 4000%? I would like to say you are getting more and more crazy !!! :-)

@Thomas I guess Nasdaq outperformed S&P hard lol, dem FANGS

Hi all,

I've just finished working on a previous version of this algo. My version is a little bit different from the original one:

  1. It trades TLT instead of EDV --> the first one seems more liquid
  2. It sells on Monday and it buys on Tuesday --> in this way it avoids a leverage over 1.0
  3. alloc_spy function has been split in 2 functions: a sell one and a buy one, which work one at the beginning of the trading day and the other at the end --> in order to avoid a leverage over 1.0

Let me know if something in my version is wrong.

Hi Elsid,

What does the S in FANGS mean? Until now I've only heard FANG.

Besides, you said the QQQ outperforms the S&P. But in Trump's time maybe that could change? Maybe the traditional value stocks will take over the party?

Thomas

Hi Gregorio,

Your code looks quite clean and well-formed.

Thanks Thomas.

Here I have a new version of my previous algo.
The main updates are the following:

  1. It uses different ETFs (the ones of the last algo of Elsid).
  2. AllocSpy_buy is disabled for now: if I try to enable it, the leverage goes over 1.0. I still have to work on this.
  3. Return is 100% higher than the previous, but the DD is double: from 13% to 26%.

@Elsid your algo ends up having a leverage - although only for a day until the sell orders happen - of over 2. I think if you get the leverage under control your returns will diminish.

Wow, I'm very impressed at how far this post has come along! The WVF indicator is quite a powerful tool for VIX ETN traders. I've been working with Cesar Alvarez on a really cool long-only XIV algo that's based on the RSI; will post updates soon! Meanwhile, here is one of my other more "conservative" VIX ETN algos that only trades XIV, Junk Bonds and Treasuries.

I combined this simple junk bond momentum algo (50%) with a newer version of this XIV algo that uses the WVF (50%). However, for the algo below, the WVF uses exponential moving averages of itself as variable triggers to trade instead of just using 14 as a fixed value for the trigger. If you wish to make it more "conservative," simply allocate less to the XIV portion. 25/75 is a pretty decent mix, too. My buddy Jacob and I actually submitted this algo for Contest 24, but only as a rough draft to feel out the Contest's requirements. Funnily enough, it's 2nd place on the un-hedged leaderboard right now. It was 1st place on the un-hedged leaderboard for all of December.

I highly recommend using variable triggers instead of fixed ones for the WVF as well as most indicators.

Author: Kory Hoang
Developer: Jacob Lower
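The "variable trigger" idea above can be sketched standalone. A common formulation of the Williams VIX Fix is 100 * (highest close over N bars - today's low) / highest close; below is a rough illustration on synthetic data, with the 22-bar lookback and 50-bar EMA span as placeholder parameters, not Kory's actual settings.

```python
import numpy as np
import pandas as pd

def wvf(close, low, lookback=22):
    """Williams VIX Fix: 100 * (highest close over lookback - low) / highest close."""
    highest = close.rolling(lookback).max()
    return 100.0 * (highest - low) / highest

# Synthetic price series standing in for SPY closes and lows.
rng = np.random.default_rng(0)
close = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, 300))))
low = close * (1 - rng.uniform(0, 0.01, 300))

w = wvf(close, low)
trigger = w.ewm(span=50).mean()   # variable trigger: an EMA of the WVF itself
signal = w > trigger              # "fear spike" when WVF rises above its own EMA
print(signal.tail())
```

The point of the EMA trigger is that the threshold adapts to the prevailing volatility regime, instead of a fixed 14 that can be permanently above or below the WVF for months.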

Here's a tear sheet for the algo above.

@elsid, @thomas... Hate to rain on the parade, but I think the QQQ results are due to dozens of TMF orders being partially filled. Starting with a lower initial cash value that only generates 3 partial fills, the results are actually not as high as SPY. The risk ratios are better though, and beta and max DD are lower.

@Caleb - I agree. If you backtest each year by itself returns are not as good and drawdowns increase.

This leads me to believe it's the partial fills.

Yeah, I completely forgot about the partial fills. Just set the slippage to 0, which is more realistic given that these instruments trade hundreds of millions to billions of dollars' worth of volume.

When I set it to 0, I get 3297% and a -21.2% DD.
The only partial fill is some XIV shares on 12/03/13.

Also, an interesting thing to think about: if it's possible to copy the logic of the partial fills on TMF, maybe lower its allocation to get results more in line with the backtest?

Caleb, you could also try this:

Guy, it's the same problem with partial fills. Uncomment this line: set_slippage(slippage.FixedSlippage(spread=0)) and you will see more realistic results. Now, like I mentioned before, I wish someone could code up something to replicate the random partial fills; that would turn it into an amazing strategy lol. Maybe try introducing random partial fills on TMF.

Caleb, just in case you wondered, there was no code change at all. Not a single character.

Only the capital was increased to $100k. That is a 58% CAGR!

2x leverage happens on rebalance days, then normalizes back to ~1, maybe a little over. So you will need a margin account and will pay some margin fees.

Elsid, the set_slippage is commented out, therefore, the strategy reverts to its default commissions and slippage values, if I understand Q correctly. This makes it already more realistic.

Tyler, the 2x leverage is more than acceptable. You could use the following as a ballpark figure: A(0)∙(1 + r – l)^t. An estimate gives: 100,000∙(1 + 0.58 – 0.05)^t. More than enough left to pay for the margin interest. Note that currently IB charges less than 3% for margin.

I think their default is overkill and doesn't make any logical sense. How can you fail to fill a few thousand shares by end of day, say a $5,000 order, for an instrument that has $1,000,000,000 in trade volume? There is no way you will have fill issues with such highly liquid ETFs unless you are trading into the millions, or during a flash crash or something.

Elsid, yes. It is part of what I like about Q. They can at times give you more adverse simulation conditions than in real life. And that is good. It forces you to do better, and be better prepared for adverse trading conditions.

Is the issue how it is traded, or how much does it make?

I have not read the code yet, but now, I will. I want to know what makes it tick.

Adding a smidgen of leverage by raising the RISK_LEVEL to 1.2 would generate $2 million more in profits. It looks like a small price to pay for a 62% CAGR.

Word of caution, as discussed earlier we might be heading for a bond bear market - a huge portion of the returns in this algo is from treasuries. I would hate to hold a 3x Bull Treasury during a bear market.

Edit: The beta is also above .80 on the last couple of algos - way too dependent on the market for me.

Tyler, your concerns are justifiable. But, even if you have an automated trading strategy, you can always shut it down at a moment's notice for whatever reason. Meanwhile, for the past 6 years, the strategy would have been doing fine. During those 6 years, every day you had news stories that the market was going down, soon, real soon. During all that time, the strategy would have prospered with an impressive CAGR. High enough to put you in the top 0.1% of portfolio managers.

You must have noticed that the strategy has some built-in alpha. I still don't know exactly how it is generated, but I can see it is there. While most strategies show a small spread compared to the SPY benchmark, notice that this one has an increasing spread, not just for small period here and there, but over the entire duration. Using an equation, I would write:

A(t) = A(0)∙(1+(r+α) – l)^t = 100,000∙(1 + 0.62 – 0.05)^t, as in the previous post.

Some might not like leverage, but I see it as a matter of taste, a matter of choice.

Tyler, I would like to add concerning your last statement that in a rising market, you also want your portfolio to rise in value. It will therefore be highly correlated to the market.

One could push for more, increase the spread compared to the SPY benchmark. Doing so might increase the beta, drawdowns, and volatility.

Is pushing for a RISK_LEVEL of 1.4 worth it?

Again, it is a matter of choice. Here, the equation would give:

A(t) = A(0)∙(1+(r+α) – l)^t = 100,000∙(1 + 0.69 – 0.05)^t. A 69% CAGR!

And the backtest:

Again, Guy, you are painting a false picture with Quantopian's default fill. You are getting much more performance and a much lower drawdown than you would experience in real trading; contrary to Quantopian being more realistic in backtesting, it's the complete opposite with small assets and very liquid ETFs.

You even mentioned yourself that you want the worst results possible to improve from there; the worst (and most realistic) result is getting filled completely, which will most likely be the case in real-life trading with such low amounts.

Raising the RISK_LEVEL to 1.6 might also be worth it? It will raise the beta, volatility and drawdown. But it will generate:

A(t) = A(0)∙(1+(r+α) – l)^t = 100,000∙(1 + 0.76 – 0.05)^t. A 76% CAGR!

Most of the time, the leverage stayed below 2.

We should note that no one could offer a 76% CAGR without taking some added risk.

Here is the backtest:

Elsid, I do not agree. The worst scenario in a rising market is not to be filled, because then you are missing out on profitable opportunities. And one thing you want in a rising market is to have your orders filled. Therefore, the partial fills or canceled orders become adverse conditions, worse than what the market has to offer. But, like I said: I like that.

I want adverse trading conditions (partial fills, delisted stocks, outliers, black swans), whatever they have. Make it as real as possible.

However, I did not say that I wanted the worst result and would go from there. I only mentioned trading conditions. But whatever the results, I will go from there. Just as with the last few backtests that have been presented.

Wouldn't this be easier? Aren't the returns obviously 4000% higher?

Also, the CAGR is closer to 100-120% according to Investopedia, unless their calculation is wrong.

Pushing the RISK_LEVEL to 1.8 will raise the average leverage to about 1.8. Evidently, it will raise volatility, beta, and drawdown.

This series of tests was to find some of the limits of the trading strategy. At what drawdown level would I say: enough. At 0.10, 0.15, 0.20, 0.25, 0.30 or 0.50. It is a matter of choice. One could select his acceptable drawdown level, and go from there.

Some are more risk averse than others. But, if you do not do the test, to see how it would have done in the past, how the trading strategy would have behaved, how could one have an idea of how it might do in the future?

The game is a CAGR game, and as such one should seek the shortest doubling time. At a 10% CAGR, it will take 7.3 years to double one's trading capital. At that rate, in about 22 years, one should have reached 8 times the initial capital. While at a 60% CAGR, after 22 years, one might have reached 30,000 times the initial capital. That is what is at stake. What is your drawdown tolerance?

One thing is sure, there will be drawdowns no matter how you look at it.
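The doubling-time arithmetic above checks out; a quick sketch:

```python
import math

def doubling_time(cagr):
    """Years to double capital at a given CAGR: ln(2) / ln(1 + CAGR)."""
    return math.log(2) / math.log(1 + cagr)

print(round(doubling_time(0.10), 1))  # 7.3 years at a 10% CAGR
print(round(1.10 ** 22, 1))           # ~8x the initial capital after 22 years at 10%
print(round(1.60 ** 22))              # roughly 30,000x after 22 years at 60%
```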

The 1.8 RISK_LEVEL backtest:

My point on the previous message and backtest is to try to get through to you guys.
In the GF code a little while ago showing 12000% returns, it spent almost $12 million. You could think of that as just pure leverage, except Quantopian's returns calculation is done on the original $100k, ignoring the millions. Let me know if you find a broker who will loan you millions without any accounting. Keep track also of cash_low as one way to catch that; apparently watching max intraday leverage is not enough. PvR is the other route, the one I prefer.

In the GF code, the returns show that it made 12,054,048 using the 100,000 that it started with.
No, it made 12,054,048 using 11,756,331. Maybe I'm just old-fashioned, my math teacher would have said that's 103%, not 12054%.

@Blue please rephrase in more layman terms. Only Issue I see with the past leveraged backtests, including the higher drawdown from things probably filling in real life which is still only about 4-5% higher.

Is that leverage goes above 2, which broker gives you more than 2x leverage?

Also, we can all play with backtests all day long for each risk/performance scenario. I'm more interested in whether someone can figure out how this would have performed during '08. Would it have dropped XIV during the largest drops of '08?

In laymans terms, in the real world you can't borrow millions and then just declare that to be profit. There's a discussion on it. If anyone is able to comprehend my point, please explain it in your own words different from mine.

@Blue,

Whatever; I ignore the cumulative values anyway, just take the starting and finishing amounts to figure out the CAGR, and go based off of that. I'm assuming the DD values are off too, so add another 5% to them to be safe. I'm going to be switching to the old version before the latest updates, with some leverage. I don't mind the 1-2 share days per month; what's that, like an extra $20 in commissions? I guess I can pay that as insurance for a lower DD.

Until then I'll wait for a reworked version that fixes those 1-2 share problems and the leverage but doesn't affect performance. Unless the numbers are completely wrong according to Blue; then we can't really compare different algo versions apples to apples.

Take a look at this Backtest posted Blue, does the true performance seem accurate? Just taking ending value/starting value it should be around 26X, so the total performance is pretty close.

Take a look at this Backtest posted Blue, does the true performance seem accurate?

Profited 260,356 spending 173,247 for real returns of only 150.3%

A normal person would look at returns of 2591% and conclude that returns were 2591%. That's only true to the degree that 100% of the initial capital was utilized/risked/activated and no more. For those who don't believe it, the returns calculation used by Quantopian currently can be seen at https://github.com/quantopian/zipline/blob/master/zipline/finance/performance/period.py. So the returns on amount invested (which is what every investor wants to know unless I'm mistaken) can only be obtained by keeping track of how much was invested. Thus to succeed don't be a normal person. My real money algo is up 80% after 62 days by the Quantopian calculation however since it has not yet spent all of the initial capital, return on the amount invested/risked is 104%. Use PvR and see clearly. To your wealth! :)
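A toy illustration of the two return definitions being argued about. The profit and deployed figures are taken from the example above; the 10K initial capital is illustrative only.

```python
# Quantopian's reported return divides profit by initial capital, while a
# PvR-style "return on amount risked" divides by the maximum capital
# actually deployed.

def reported_return(profit, initial_capital):
    return profit / initial_capital

def return_on_risk(profit, max_capital_deployed):
    return profit / max_capital_deployed

profit = 260_356    # from the example above
deployed = 173_247  # peak capital actually spent, per the post
initial = 10_000    # illustrative starting capital

print(f"{reported_return(profit, initial):.1%}")   # looks enormous
print(f"{return_on_risk(profit, deployed):.1%}")   # ~150.3%, as computed above
```

The two numbers only agree when exactly 100% of the initial capital was deployed and no more, which is Blue's point.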

Ok, I see what you mean, and I don't quite understand why you would want to calculate returns that way lol. And no, people don't want to know the returns on the amount invested; they want to know that their investment of 10K grew into 200K. It's not like you started off with 173K; you started off with 10K. The 173,000 already has an exponential return in it, as it's not your money but grew from the initial 10K.

No offense, but I think we have officially entered bizarro world here: people talking about total spent money as far as returns are concerned, thinking "proper" backtests are the ones with half-filled orders (even though Quantopian specifically mentioned they made the fill model so aggressive for the purpose of low-volume stocks, not billion-dollar securities), and wrong CAGR calculations.

But it's still good makes you think about backtesting, and calculations more deeply.

Hi guys,

I would like to say, the total return is not the annual return. The total return is calculated from (End / Begin – 1) * 100. But the annual return is calculated somewhat like (End of this year / Begin of this year – 1) * 100, right?

This means the total return can seem very high, say more than 6000% or even higher, but the annual return could be only 150%, 100%, or even lower?

Cheers
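A small sketch of the distinction Thomas is asking about, with made-up numbers; note both formulas need the "- 1":

```python
def total_return(end, begin):
    """Total return over the whole period, e.g. 66.0 means 6600%."""
    return end / begin - 1.0

def annualized_return(end, begin, years):
    """Compound annual growth rate over the same period."""
    return (end / begin) ** (1.0 / years) - 1.0

end, begin, years = 6_700_000, 100_000, 6  # illustrative numbers only
print(f"total: {total_return(end, begin):.0%}")                   # 6600%
print(f"annualized: {annualized_return(end, begin, years):.1%}")  # ~101.5% per year
```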

For myself, I am more interested in the annual return.

Blue, I think the issue is that the leverage is not held the entire time. it only happens during rebalance and drops back down to ~1

Blue, going for leverage is a matter of choice.

The equation for the leverage was given, it is:
A(t) = A(0)∙(1+(r+α) – L)^t. As long as α > 2∙L, you will be ahead; as long as your drawdown does not reach -100%, as in the example you provided. Since then, leverage or not, you are out of the game.

The leverage fees are pulling you down, but the (r+α) are pulling you up. And this is compounded over time.

To calculate the difference, one could say: A(t) = A(0)∙(1+r+α)^t without the leverage as Q reports it. And use: A(t) = A(0)∙(1+(r+α) – L)^t to have an estimate of the total margin cost.

IB charges less than 3% margin. So make the margin 5%∙100,000 = 5,000. On the other side, r+α could be higher than 20%: 20%∙100,000 = 20,000....

Taking the 1.4 RISK_LEVEL backtest, you get:
A(t) = A(0)∙(1+(r+α))^t = 100,000∙(1+ 0.69 )^t, as reported by Q, and
A(t) = A(0)∙(1+(r+α) – L)^t = 100,000∙(1 + 0.69 – 0.05)^t, when deducting leverage fees.
The difference between these two is: 384,166. Those were the added expenses associated with the higher return.

The added leverage resulted in a total return of: 8.677 million – 384,166 = 8.292 million.
Probably worth the expense for some.
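The 384,166 expense figure can be reproduced directly from the equation above with t = 6 years; a quick check:

```python
# Guy's terms: r + alpha = 0.69 (the 1.4 RISK_LEVEL CAGR), l = 0.05 margin cost.
A0, growth, margin, t = 100_000, 0.69, 0.05, 6

gross = A0 * (1 + growth) ** t           # as Q reports it, no margin fees
net = A0 * (1 + growth - margin) ** t    # after deducting leverage fees

print(round(gross))        # 2329809
print(round(net))          # 1945643
print(round(gross - net))  # 384166, the margin-cost estimate quoted above
```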

In payoff matrix notation, any stock trading strategy can be represented as:
A(t) = A(0) + Σ(H∙ΔP). When you add 1.4 leverage, all it does is: A(t) = A(0) + 1.4∙Σ(H∙ΔP). The leverage is not applied to the price, but to the inventory holdings, meaning that the bet size increases by 1.4. There will be margin charges, as illustrated above.

Some like it, some don't. I see it as a matter of choice.

However, a caveat, not all trading strategies can support leverage, some are really bad at it. On the other hand, some strategies can, and I think this one can.

@Thomas

Yes, and you also can't take the total cumulative amount and divide it by 6 years, for example; that still gives you a wrong number. You can take the starting amount and ending amount to figure out the CAGR; just use a calculator online.

The other thing I can't get over is the dismal performance in 2013 compared to the market - with the high beta you would think it would have correlated more. I mean it made money, but did not outperform a simple buy/hold strategy.

With all the talk of leverage and margin I just wanted to share my latest code that attempts to not go into margin at all. Basically just breaking up buys and sells.

Max leverage never goes above 1 and cash low is -$53 and that isn't until well into the backtest.

Also added PvR, which I don't 100% understand yet, but the results seemed good.

Note: the cash return is basically the same as when running buys/sells concurrently. % return is just somewhat lower because I upped the initial cash a bit to have a little cushion.

@Caleb

How come you removed TLT? Maybe it would lower drawdowns having some sort of allocation between TMF & TLT like the past algos.

Great results and discussions, folks. Thanks for sharing.

If I am not wrong, it's essentially a min-var strategy. It's a little bit tricky (and interesting) when thinking about adding a volatility-based ETF into the portfolio, since volatility (variance) is what the core algo is meant to minimize.

Now those without margin accounts can make use of this code too, really great, Caleb Sandfort, that's some magic in your code above.

This was a start toward adding TLT in zero-beta targeting, in case anyone wants to work with it. It does reduce Beta however then creeps upward with only a few shares being traded and cash drifts downward. Also in the example code/link most of the profit came from TLT so it should not be necessary to lose this much in returns IMHO. Note that ideally one would dynamically adjust stock picks coming in from pipeline based on their Beta values instead I think. Pardon all the vertical alignment going on, I just have trouble reading code without it.

Ran Blue's latest version with $100k. Impressive CAGR: 45.5%.

Here is the backtest. No other change than the initial trading capital.

For those that don't like being on margin, this is great. It can put anyone in the top 0.1% of portfolio managers where all you will live by is your CAGR on the total initial capital that was put in your care.

I haven't yet had the time to study this algorithm, but I only see backtests going back to 2011. Has anyone found any appropriate substitutions to backtest this algo against data prior to 2011? The returns everyone is posting seem too good to stand the test of time. Looking forward to digging into it and understanding it better...

Jake this was discussed earlier just search the thread for '2008'. Part of the issue is that some of the ETFs used here didn't exist long enough. Some people did try to create simulations which were interesting. Search for Takis Mercouris for his insightful post.

@blue @elsid do you think it would be more conservative to set slippage to .02 for these etfs?

@Jake @qqt

I think to get a proper pre-2008 backtest we need to get synthetic VXX/XIV data. You can then play around with the ETFs, such as using non-leveraged ones like plain ol' SPY and TLT, then just apply margin to see how a 2x or 3x fund might have performed.

Everyone should read through and thoroughly understand any algo before trading it.

Reading through the latest one, I see lines of code like this with no comments:

    if is_date(2013, 12, 3) or get_open_orders():  
        return

    for i,stock in enumerate(context.stocks):  
        if stock in [context.spyish, context.tltish]: continue  
        if is_date(2011, 3, 1) and stock is context.mid_cap_stock:  
            ...  

When you are altering algo logic based on dates, it means that it has look-ahead bias.

@Mohammed

The first date check is just skipping a weekly rebalance because XIV order wouldn't fill at all no matter what on that day.

Second date check is just switching what mid-cap stock is used because the first one wasn't available before that date.

Thanks for clarifying Caleb. I have two issues with this:

  1. Why isn't the code commented for this? This is programming 101.
  2. This is still look-ahead bias. "I can't order XIV so I'm going to exclude other orders too" - that is not real-world trading. There may be times when your orders won't fill - that is something you have to live with.
    Also, switching mid-cap stocks partway through a backtest is basically using hindsight - i.e. look-ahead bias. Wouldn't you agree?

After running Blue's latest version of this program, I notice that slippage was set to zero. Therefore, had to redo the test. Put it on Q defaults by commenting out the slippage line of code.

Here are the results. They are more modest, with a 25.0% CAGR. The difference between these two simulations has to be due to the commissions and slippage charges, since I did not change any other line of code.

The impact is considerable.

I do have a question. Why did it take hours to run this test?

Why did you comment out the slippage?

The default slippage does not account for these highly liquid ETFs. There is no way it would take all day to fill 1,000 shares of TMF.

@Tyler

Just Ignore him, tried explaining this to him like 10 times, he also calculates CAGR wrong. Let him be.

@Mohammed

  1. I'm used to working on code that only I work on and am intimately familiar with, so sometimes my comments are lacking since I know what is going on and don't have to worry about other people trying to figure it out. But you're right, if I'm going to share it, comments should be added.

  2. I agree in general. With this particular trade though, XIV traded nearly 8M shares that day and it couldn't fill a 800 share order, didn't do a partial fill or anything. Seemed like a bug with Quantopian and it was messing up my overall max leverage tracking which is something I was spending a lot of time trying to keep under 1. The change in returns is miniscule when commenting the check out.

With the mid-cap stock switch, it happens only 2 months into a 6-year backtest. SCHM, the ETF it switches to, was (from my mid-cap ETF research) the one I wanted to use, but it wasn't available until mid-January, so I just gave it a month to build up its history so it would function properly in the allocation code. If the switch had occurred at a specific time in the backtest that I had determined based on past/future performance, then I agree that would definitely be an issue.

And as covered earlier, all of this could have been explained with comments:)

@Mohammed,

I don't think the date plays an important role; it was just excluding that one date, and the allocation to mid-cap exists anyway in the code. It's not like the date matters, since 2011 is basically when most of these ETFs started trading anyway.

Lastly, these are the newest versions of the algo that might have these potential issues; the Nguyen and subsequently Maxim's versions don't. And if you see my last backtest, I am using Maxim's version, even though it's less efficient in order management with the 1-2 share rebalancing, but it has better performance per DD level.

Elsid,

The equation for CAGR is simple.

CAGR = (A(t)/A(0))^(1/t) – 1

Nobody can get this wrong. Nobody. If some would like to analyze it using other numbers, well, they better put their definition, and equation on the table.

Just in case, they can look up: https://en.wikipedia.org/wiki/Compound_annual_growth_rate

Whatever you trade, there will be commissions. You can set aside slippage only if you use limit orders. Otherwise, you are bound to have slippage. You are bound to have partial fills, especially if you can have only 2.5% of the available volume on a minute bar. So, these expenses need to be included in any backtest. Otherwise, again, the simulation will not reflect reality.

And if a trading strategy does not reflect reality, it is not worth much. Because when you put that trading strategy live, it will certainly hit you in the face.

I generally agree with you Guy. Commissions should always be included. Setting it to the IB commission structure probably makes most sense:

    set_commission(commission.PerShare(cost=0.005, min_trade_cost=1))  

I also agree regarding slippage. Slippage obviously varies depending on the trade size. Placing 7 figure market orders will often lead to slippage, no matter how liquid the asset. Placing 4 figure market orders on a liquid asset will probably not experience much slippage, if at all.

I think many of these algos are catering to people trading on Robinhood with small accounts - so they assume no commission and no slippage.

@Guy

Unless you are using the wrong inputs, or unless Investopedia is using the wrong formula in their calculator (which they probably aren't, since they have a whole section with the same formula), I have no idea how you get the CAGR number you do; probably from the numbers you are taking? In the Daily Positions Gains you have the starting and ending value; those are the values I use, and I get a different CAGR from you.

Feel free to add commissions, but to add partial fill data is again pretty useless unless you are trading MULTI MILLION DOLLARS, I'm trading this live and have no issues with partial fills on my low amounts of 100-300 shares, because again these instruments trade millions of shares a day.

And nobody is talking about partial fills for one minute that then fill the next minute; we are talking about partial fills that don't fill for the whole day, such as being unable to fill 300 TMF shares all day. WHAT??? That again is impossible unless you're trading $50MM+, if even that; some of these ETFs have over a billion dollars traded each day.

You can argue about large size orders moving your fill price, that's where there will be slippage, but the current partial fill order canceled for the whole day is utterly meaningless and downright misleading.

@ Elsid what version are you trading live? There are now many versions of this algo. Thanks!

You can mess around with this to simulate a somewhat realistic slippage model. Even with this, given that the algo trades in the most volume-intense periods (the first 5-15 minutes after the open and the last 15 minutes before the close), you will probably never take 25% of a bar's total volume unless you are trading million-dollar orders, if even then. And what will your price impact be for a highly liquid ETF, 1-2 cents?
With a 2-cent impact on price, the difference in results is so small it's not even worth talking about; even at 5 cents it still performs well, again assuming you ever hit the 25% per-minute volume limit.

set_slippage(slippage.VolumeShareSlippage(volume_limit=.25, price_impact=.02))  
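For intuition, here is a rough standalone sketch of how a volume-share slippage model of this kind prices a buy fill. It approximates zipline's VolumeShareSlippage (fill capped at volume_limit of the bar's volume, price impact proportional to the square of the volume share taken), though the library's exact internals may differ.

```python
def volume_share_fill(order_shares, bar_volume, price,
                      volume_limit=0.25, price_impact=0.02):
    """Cap the fill at volume_limit of the bar's volume; impact grows with
    the square of the share of bar volume actually taken."""
    filled = min(order_shares, int(bar_volume * volume_limit))
    share = filled / bar_volume
    fill_price = price * (1 + price_impact * share ** 2)
    return filled, fill_price

# A 1,000-share order in a 500,000-share minute bar barely moves the price:
filled, px = volume_share_fill(1_000, 500_000, 23.50)
print(filled, round(px, 4))  # fills completely, impact is negligible
```

This is why, for these billion-dollar ETFs, the price-impact term matters far less than the volume cap: small retail orders never come close to 25% of a minute bar.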

@ Tyler

Maxim's version with some changes to take extra leverage, and my ETF changes, the last backtest I posted is what I'm trading live. Clone it and trade it if you like.

Tyler, true. However, that is the trading environment provided.

Take your example of TMF, for instance. In the last test, on November 8, 2016, there were 189 TMF trades scattered all over the trading day: 70 trades for over 20 shares at a time, and the rest (119 trades) for 20 shares or less.

Some might not want to consider slippage or commissions, but they do have an impact. And it is higher than they think. Trades were occurring about every two minutes, at whatever price there was. Sure, you will get a day's average price of some kind at the end of the day. It could be close to the first trade of the day, but then again, it might not be. From what the trade report gives for that day, trades were taken at prices from 23.33 to 24.00 per share. That is more than a penny per share of slippage, and more than a penny per share of commissions.

Some might not like to look at this, but that is fine with me. I will continue putting commissions and slippage in my tests. It gives a more realistic output.

From the two test made, one could get an estimate on commissions and slippage charges simply by comparing the two tests.

Formula: CAGR = (A(t)/A(0))^(1/t) – 1, with t the backtest length in years

Test 1: CAGR = (2,381,204/100,000)^(1/t) – 1 = 45.52%

Test 2: CAGR = (659,900/100,000)^(1/t) – 1 = 25.02%

Cost of slippage and commissions: Test 1 – Test 2 = 1,721,304

Those are the numbers, some might like it, others not. But, those are the numbers.
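The CAGR formula used above is easy to wrap in a helper for checking such comparisons (a generic sketch; it takes the ending value, starting value, and test period in years):

```python
def cagr(ending, starting, years):
    # Compound annual growth rate: (A(t)/A(0)) ** (1/t) - 1
    return (ending / starting) ** (1.0 / years) - 1.0

# e.g. doubling your capital over one year is a 100% CAGR
growth = cagr(200_000, 100_000, 1)  # 1.0
```

The difference between the two backtests' CAGRs then expresses the drag from slippage and commissions in annualized terms.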

Guy

You seem like a pretty smart guy. Get your head out of the numbers for a second, take a breath, and really think about this statement from a common-sense standpoint.

"Take your example of TMF for instance. In the last test, on November 8, 2016, there were 189 TMF trades scattered all over the trading day: 70 trades for more than 20 shares at a time, the rest (119 trades) for 20 shares or less."

Do you really think it would take all day to fill 189 TMF orders in an ETF with a daily volume of 500,000 shares? Clearly this should tell you that something is wrong with the Q backtester, especially when fill criteria are applied.

Yesterday my live account filled 132 shares of TQQQ instantly, not over the whole day. (Yes, it sold TQQQ that a different version of this algo had bought, before I replaced it.) All instant fills.

http://i65.tinypic.com/29n6tz4.png

Please use your common sense

Elsid, I do not design trading strategies for small accounts such as yours.

As part of my tests, if I wanted to see something like what you describe, I would increase the 2.5% allocation of available volume to something like 75%. It would take at most a few minutes to get a fill.

But you are not alone trading, and that is the point Q is making. The volume you want might not be available in the very minute you issue your order; there might be slippage. And that is common sense too.

Anybody on Q should be looking to the future and want test outcomes that are as realistic as possible, even better if under adverse conditions like those in the last test. That is what you want to know: What are the limits of this trading strategy? Can it do well under adverse conditions? Can it last? Does it scale well or not? Will it flop in year 15?

It is a pity that we cannot test this one over more than 6 years. I usually take 20-year testing intervals: I want to see whether a trading strategy can survive a 20-year period. That will also reveal strengths and weaknesses, and give a better picture.

For instance, if a trading strategy cannot scale up, where do you want to go with it? Play peanuts for peanuts for 20 years? That too can be a matter of choice. But it is not mine.

I see a diamond in the rough in this trading strategy. I am still not familiar enough with the code to isolate it, but I know it is there just by looking at the net-liquidating-value chart generated by Q. So I will be making more tests to isolate what makes it tick. Call it forensic strategy investigation.

Ok Guy,

Again, do what you like. Quantopian themselves admit that their fill model is poor, and they are working to improve it. Do you honestly think you can't fill 1,000-10,000 shares in 390 minutes? Yes, I understand you trade large accounts, but what is large to you? The largest backtest I've seen you run is $100,000, and in real life you would have no problem getting filled at that size.

The issue is that I once ran a backtest and got insane results because orders weren't filling by the end of the day, which made the results overly optimistic. Unrealistic fills cut both ways: they can make results look better or worse than they would really be.

https://blog.quantopian.com/accurate-slippage-model-comparing-real-simulated-transaction-costs/

Here is a comparison showing the default slippage model producing better results than slippage turned off (both with commissions on):

Now, with slippage off, worse results:

Also, updating my live IB account to trade the above.

Hi Aliaj,

One can see that the leverage is about 1.38 almost all the time. This means you borrow 38% on top of your own capital and have to pay interest on it. Right?

If you realize this point and don't worry about that, why not increase the leverage to 1.5 or even more? :-)

Because the rebalance was never changed. So Elsid's latest algo hits a total leverage of just under 2, which is the max at IB.

And yes, you would pay interest, but it is normally 2% or less; so if you can return ~$2K per year you cover your interest. Risky.

Yes Thomas, you pay interest, but it's worth it: over that 6-year period you get about 500% more and pay only 2% per year. Even better, sign up for Robinhood Gold and get margin without interest. Hey Tyler, also, where can you see total leverage? Isn't 1.4 the max?

Lastly, Thomas, I didn't increase it further because I didn't want to increase my DD, but feel free to do so if you don't mind a higher DD.

@Elsid, this is your latest algo with only the PvR routine added. The issue is that your record.leverage only samples leverage at the day's close, while the PvR routine samples it by the minute. As you can see, max leverage = 1.97 while day-close leverage hovers around 1.3-1.4.

@Tyler

Got it, thanks. Also, what is cash low? The amount of leverage being used?

You're welcome. Yes, cash low is the most cash you would have borrowed at any point. But it does not mean you continued to borrow that entire amount; it normalizes back to ~1.37-1.40 after rebalancing is completed.

Got it, thanks. I was hoping Robinhood provided enough margin, but they don't; they haven't even updated their description of Gold, which really makes me lose all confidence in trusting them with my money.

I think Caleb's algo will work best with Robinhood as long as you stay under $100K, and your latest algo would work best at IB since they provide (and update) margin automatically as your account grows.

Might deploy them that way, but I would hate to put all my eggs in one basket, so to speak.

Yeah, same, but I haven't found anything comparable at this performance/risk level. The way I'm running it, at least with my version: if it goes above 20% DD I will stop it, given that in the last 6 years the most it has ever hit is about 16%.

The following notebook makes a case for this trading strategy.

It is not mine, meaning I am not the author. All I did was make some changes in the parameter values, added a couple of stocks, and that's it.

Hope it is helpful.

Hi Elsid,

How about if you take out the IJR, SSO and IJH from the context.stocks and just keep the TMF and XIV?

context.stocks = [sid(32270),  #SSO  
                  sid(38294),  #TMF  
                  sid(21519),  #IJR  
                  sid(21507),  #IJH  
                  sid(40516)]  #XIV

The SSO, IJH and IJR are more or less similar, right?

Cheers

I felt like this algo had been overoptimized to the extreme, baking in a lot of hindsight. So I decided to try to get some out-of-sample data to test this theory.

I changed the leveraged ETFs to trade 3 times the standard ETFs, e.g.
TMF becomes 3 x TLT
TQQQ becomes 3 x QQQ
also, XIV becomes -1 * VXX

This allowed me to run this algo from 2009 to 2011 - i.e. out of sample data.
The result is below. Notice how it is not like the curve of 2011 onwards.
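The substitution can be sketched as a daily-rebalanced proxy: each day the proxy moves k times the underlying's return. This is a simplification (it ignores the real ETF's fees, financing costs, and tracking error), but it is presumably close enough for a rough out-of-sample check:

```python
def leveraged_proxy_path(daily_returns, k=3.0, start=1.0):
    """NAV path of a hypothetical daily-rebalanced k-times leveraged proxy."""
    nav, path = start, []
    for r in daily_returns:
        nav *= 1.0 + k * r  # each daily move is amplified k times
        path.append(nav)
    return path

# e.g. a TMF stand-in from TLT daily returns with k=3,
# and an XIV stand-in from VXX daily returns with k=-1
path = leveraged_proxy_path([0.01, -0.01], k=3.0)
```

Note the volatility drag this reproduces: +1% then −1% on the underlying leaves the 3x proxy at 1.03 × 0.97 ≈ 0.9991, below where it started.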

Here is the exact same algo as above but from 2011 onwards.

My opinion is that there has been some curve fitting, but it is not as bad as I expected. I wish we could backtest 2007 to 2009; I think that would be a lot more revealing.
I would also say that there is a huge amount of market exposure (note the max leverage), masked by the use of leveraged ETFs; a recession or a black-swan event could cause a huge drawdown in this algo.

"I felt like this algo had been overoptimized to the extreme, baking in a lot of hindsight. So I decided to try to get some out-of-sample data to test this theory."

Totally agree. Also, the choice of portfolio stocks works extremely well in an environment of QE: inflated stock prices, declining rates, and low volatility from the comfort that the Fed will inject more money if the markets so much as sneeze. Having said that, I think there's a lot of value in the portfolio-optimization code and in the synthetic volatility signal. Most variants of it would not have outperformed the market in the most recent period of volatility, 8/15-1/16.

I trade a variant of it with a small portion of my portfolio and keep my eyes open for signals of trouble.

@ Mohammad Forouzani
You make a good point about testing out of sample. However, your method below may not work: in your code you still use TMF, TQQQ, and XIV to optimize their allocation, even though there is no historical data available for them in that period.

I change the leveraged ETFs to trade 3 times the standard ETFs, e.g.
TMF becomes 3 x TLT
TQQQ becomes 3 x QQQ
also, XIV becomes -1 * VXX

@Thomas

Try it, but it doesn't help performance and increases drawdown a lot.

Hi Elsid,

I will take out the SSO, IJR, IJH and at the same time I keep the leverage under 1. :-)

Mohammad, if you cripple a trading strategy the way you did in your code, it won't do much, as you have illustrated.

I can do that too.

But the purpose of designing trading strategies is not to force them to do less. It is to extract as much profit as you can under the constraints of limited capital and uncertainty.

One should not confuse over-fitting with outperformance. Here, in this trading strategy, I only modified some of its parameter values. I did not change its procedures or trading logic.

When you use 3x leveraged ETFs that are highly correlated to the market, it had better show. Not because you over-fit, but because they are 3x leveraged ETFs.

The OOS backtest doesn't look terrible - still does not hit 20% drawdown.

I found that this algo actually underperforms in a "raging bull" scenario like 2009/2013.

My last notebook showed impressive test results.

Yet, my book says that one could do even better. One can start with a trading strategy having some built-in alpha, a positive long-term edge, as was presented in part one, and build from there. The portfolio equation to be used is:

A(t) = A(0) + (1+g)^t ∙n∙u∙PT.

Raising g will increase the total output. You don't need to push by much since there is a compounding effect in place.

As a demonstration of the phenomenon, I used the same trading strategy as presented in the previous notebook and raised its g by 1.5%. A minor modification really, yet the impact is noteworthy.

A change to a single variable, a single number, raises the performance level. This is understandable, since (1+g)^t is a compounding factor.

This is not an optimization factor, and it is not over-fitting either; none of the trading logic was changed. The trading strategy was taken as is. It was a request, an administrative one at that, to just let it do more of what it does.

There was no change to the trading strategy itself. It kept its signature. It is only that you demanded more.

Hope it helps. At least, it can give ideas as to how far one can go.
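The compounding effect being described can be quantified with the portfolio equation itself. A sketch with made-up numbers (none of these values come from the notebooks):

```python
# A(t) = A(0) + (1+g)**t * n*u*PT, per the portfolio equation above.
def terminal_value(a0, g, t, n, u, pt):
    return a0 + (1.0 + g) ** t * n * u * pt

# Illustrative inputs: $1M initial capital, 10,000 trades of $10,000 each
# at a 1% average profit per trade, over a 10-year horizon.
base    = terminal_value(1_000_000, 0.000, 10, 10_000, 10_000, 0.01)
boosted = terminal_value(1_000_000, 0.015, 10, 10_000, 10_000, 0.01)
```

With g = 0 the trading-profit term is $1M; raising g to 1.5% multiplies it by 1.015^10 ≈ 1.16, about 16% more from nudging a single variable, and the gap widens with t.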

@Guy Fleury
Is it possible for you to share the modified algorithm mentioned in the notebook, please? I also tried to run the algorithm in your notebook, but it said there was no such algorithm ... Thank you very much.

Ethan, my intention is to provide the tools needed to help you design your own. That is what my book is all about. This way, it will be your program, operating on your terms and for your benefit. You would know exactly what it does, and its strengths and weaknesses.

And then I would not be responsible for whatever screw-up you might design, or for you not doing your homework, or for operating a strategy as a black box.

I have cloned and modified four trading strategies seen on Q over the past few months, pushing them all to higher performance levels using the same principles elaborated in this thread.

I still have not finished reading the code of this particular trading strategy, which means I do not yet know what really makes it tick. I was able to locate its pressure points and apply my formulas to it, resulting in the simulations you see. But I too have to do my homework. What I have provided up to now is just part of my preliminary forensic investigation. What was presented says: there is something there. Now my question is: what is it?

@Guy Fleury,
I understand what you wrote about pushing the strategy to its limit to get some more juice out of it, which I'm going to do with this strategy.
So I see your equation: A(t) is the accumulated value at time t, A(0) the value at time t = 0, and g the growth-rate factor (leverage).
Could you fill me in on the rest of the variables, and how you came up with the equation in the first place? Thanks.

Nhat, the equation A(t) = A(0) + (1+g)^t ∙n∙u∙PT is equivalent to the outcome of a payoff matrix, A(t) = A(0) + (1+g)^t ∙Σ(H.∙ΔP), which gives the same answer.

You will find a lot of that explained in the payoff matrix thread: https://www.quantopian.com/posts/the-payoff-matrix

Thanks, will have a good read.

A Trading Strategy's Search For Profits - Part 3

The previous post made the point that you could increase a stock portfolio's performance by slightly increasing a single variable. The given portfolio equation was:

A(t) = A(0) + (1+g)^t ∙n∙u∙PT.

Based on this, in the previous test, g was raised by 1.5%. This time, it will be raised by 2.0%. And since g is part of a compounding factor, it should show its impact all over the strategy's timeline.

Once you have your trading strategy, meaning you have a long-term positive edge, one question remains: how can I do more of that?

Only actions that affect the above equation can have an impact; the rest might just as well be cosmetics. A stock-trading-strategy designer's mission is to maximize the above equation under the constraints of initial capital and uncertainty.

The following notebook will show how a small incremental change in a single variable can increase one's portfolio performance, just as it did in the previous test. Which was saying: here is the new normal. Now, what can you do with that?

Once a trading strategy designer is done with his/her program, that is it. That is what it does; it has a unique signature: A(t) = A(0) + n∙u∙PT. But what is proposed here is not necessarily to change the trading strategy, which can stay as it was, but nonetheless to request more of it. As if super-gaming the strategy itself.

Adding https://www.quantopian.com/posts/record-pnl-per-stock. Hover over the custom chart.
When stocks spike, can that be detected so we can close them and capture the profit?

The equation presented, A(t) = A(0) + n∙u∙PT, is derived from a portfolio's payoff matrix. In payoff-matrix notation, the given equation is A(t) = A(0) + Σ(H.∙ΔP), where the expression Σ(H.∙ΔP) is a simple element-by-element multiplication. It could also be viewed as the sum of a vector of n trades, Σ q∙Δp: the summation, the end result, of all trades taken (profits and losses) over the duration of a portfolio, be it simulated or live.

The payoff matrix has for its origin a simple integral, ∫ q(t)∙dp, and must be centuries old. It is in reading Schachermayer's 2000 course notes, Portfolio Optimization in Incomplete Financial Markets, that you will find the above payoff matrix in his equation (1.1). That too is not new; it had been derived by other researchers before him.

When you decompose the payoff-matrix equation above (which, in passing, carries an equal sign), you will find the linear expression Σ q∙Δp. It is only that, as a payoff matrix, it represents the complete historical record of a portfolio's trading activity: all its trades, over the entire portfolio duration, in a single block of data. Therefore, without even the notion of a doubt, I can categorically state A(t) = A(0) + Σ(H.∙ΔP), just as those before me did.

It would therefore be up to you to prove that the equal sign is wrong, to demonstrate that a not-equal sign should be there instead. On this, I look forward to your presentation.

The expression: A(t) = A(0) + n∙u∙PT, has the same meaning. It gives the exact same answer. And is reduced to just three known portfolio metrics.

Any time you do a portfolio simulation, you start with A(0), your initial capital, and end up with A(t), whatever the applied trading strategy produced. As for n∙u∙PT, it is derived from the payoff matrix itself: [A(t) – A(0)]/n = u∙PT, which translates to the average profit per trade, since the trading strategy will have taken n trades, be it one trade or a million.

Your average profit per trade is simply the percent profit made on your bet size u. I use fixed trading units, which makes u a constant, and PT stands for the average percent profit per trade unit: Σ(H.∙ΔP)/(n∙u) = PT.

All three of these numbers are given by any portfolio simulation you do. And this gives you an added tool to help you control your portfolio's outcome.
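The identities above can be checked on any trade list. A minimal sketch with hypothetical per-trade P&Ls (the numbers are made up; the relationships are the ones stated in the text):

```python
# Hypothetical per-trade profits and losses: the entries summed in Σ(H.∙ΔP)
trade_pnl = [120.0, -80.0, 250.0, 40.0, -60.0]
a0 = 100_000.0   # A(0): initial capital
u = 10_000.0     # fixed trade unit (bet size)

n = len(trade_pnl)              # number of trades taken
at = a0 + sum(trade_pnl)        # A(t) = A(0) + Σ(H.∙ΔP)
pt = sum(trade_pnl) / (n * u)   # PT = Σ(H.∙ΔP) / (n∙u)

# Sanity check of the identity [A(t) - A(0)] / n = u∙PT
avg_profit_per_trade = (at - a0) / n
```

Here u∙PT works out to $54, the average dollar profit per trade, which is exactly (A(t) − A(0))/n.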

For whatever you want to do to improve your trading strategy, you can count on only four numbers: A(0), n, u, and PT. There is nothing mysterious or secret here, and no need to buy a book to understand it, although my book does explain all this in detail (307 pages).

You are the one who supplies A(0), and my advice on that one is: make it as big as you can. You are playing a CAGR game, and the outcome A(t) is proportional to A(0)∙(1+r)^t. Make A(0) small, and I think you will be wasting your time and energy playing for peanuts. Note that A(0) is independent of the market; it is just the size of your initial trading account.

The trade unit is something you fix; it is not a mystery number either. Do you want to trade $100 at a time, or $10,000, or more? It is up to you, and it too is totally independent of the market and the market outcome. It is just part of your method of play, the denomination of the chips you put on the table.

PT is your edge, your average percent profit per trade. If PT is less than zero, you lost the game, since overall you will lose n∙u∙(−PT). You have to design a trading strategy with a positive edge, PT > 0. If you can't, then quit while you are still ahead.

We are left with n, the number of trades over the life of a portfolio. There is the mystery, if you want one. I have no means, just like anybody else, of knowing in advance how many trades a particular trading strategy might take over its lifetime. But that is why I do simulations: to get an approximation of this number.

Most people are looking for how to make PT bigger (forecast it, increase their edge), while I am looking to make A(0), n, u, and PT larger, all at the same time. As long as they will think in terms of PT (trying to predict the price from period to period), they will not see what I am trying to do.

If my trading strategy takes 100,000 trades over 20 years, I know the average profit per trade: Σ(H.∙ΔP)/n = u∙PT. And from this, thanks to the large number of trades, I can deduce another approximation, an average number of trades per time unit, n/Δt, from which I can extrapolate an estimate of what the strategy might do going forward.

For instance, the trading strategy initially posted in this thread is scalable. If it is, then it has to respond to 10∙A(t) = 10∙[A(0) + n∙u∙PT]. As simple as that. If it does not, then the strategy is not 100% scalable. Demonstrating this is easy: you simply run the tests.

The original trading strategy's scalability test:
http://alphapowertrading.com/quantopian/SPY_WVF_Orig_Init_Cap.png

As a conclusion, whatever trading strategy we want to design, if we want to improve upon it, we are left with very simple questions.

How can I make A(0) bigger? How can I increase the number of trades? How can I increase the trade unit? And, how can I increase my profit margin, my long term edge? That is it! That is what will make A(t) bigger, nothing else. One does not need a two week course to figure this out.

No mystery, no hocus pocus, no secret sauce, no the trick is. Just plain math.

What you see in the simulations I've provided is me answering those questions to my satisfaction.

Somebody else might prefer other routes. As you stated before, you don't like leverage; then don't use any. However, note that I went for 2x leverage on a 3x-leveraged-ETF portfolio, which can easily explain part of the outperformance: it is equivalent to going 6x leveraged. But then again, if it had not worked, the strategy would have "crashed and burned". Nonetheless, one could have stopped at any time, closed the account, and walked away with whatever the net-liquidation-value equity line was showing at the time.

This does not change the mission. You still have to maximize the equation: A(t) = A(0) + n∙u∙PT, that is the objective of any portfolio manager and strategy designer.

Personally, I opted to scale profits higher, as in A(t) = A(0) + k∙n∙u∙PT. All I need is k > 0 and PT > 0. I let the trading profits finance the added trading, which should generate further profits to finance further trades. It is that simple. Somebody else might prefer another solution.

What I say is: anybody, and I do mean anybody, can have their own unique solution to A(t) = A(0) + n∙u∙PT. And even from there, they can continue to improve their strategy designs by pushing on those portfolio-metric levers, as was demonstrated in the three successive tests I presented. What those test results showed is that it can be done. At least, over the testing interval, under those trading conditions, it did.

It really is only a matter of choice. After all, there are only three numbers of interest!

Or... stay with me here... increasing leverage increases performance and drawdown. Why does a scientific paper with formulas, equations, and lots of math need to be written about this?

Do you do this for every simple, minute point? Before I turn on my car, let me write some equations about energy, and a research paper.

I'm showing up late to the party here. Perhaps someone can explain the excitement? Has the Q crowd finally found a money-printing machine? In other words, is there any basis for the results above, or are we in the land of over-fitting/speculation/gambling?

There's no money printing here. Just the seductive illusion of backtesting.

I don't know about printing money, but so far my real-money account, trading an earlier variant of this algo, is up 11% since the beginning of the year, plus a few additional percent since the election. It was doing well enough that I gave it more money to "print". Sure, it's a small sample during a favorable time, so time will tell.

My only mod was to set the trading $$ volume at 70% of the target - to reduce leverage to 1. It's an IRA so leverage is verboten in any case.

The other interesting thing is, actual real money results are better than a backtest over the same period. Not sure what to make of that. Luck? Issues in the Q backtester? IB really, really likes me?

I'll have to go run another backtest over the recent period, but I think there was a several percentage point difference, in "real world" favor.

@ John Galt -

So what is the basic formula you are applying? What is the principle? Perhaps you could provide an outline in pseudo-code? I'm just curious why all of the interest here.

Grant, there's no printing money here. Thanks for making your MVP ideas and algo public.
As you can see, the basic driving force is still your original MVP idea (with Peter's mod). I spent a few months researching ideas, running and rerunning your algo. I saw the algo was doing well and was quite happy with it, but controlling the XIV portion was a pain and I had no solution for it at the time. Then Peter published his SPY code, which has a portion he uses to control the XIV, and voilà: I found the idea, glued that part of his algo onto MVP, and it worked.
In the end, thanks to all the guys who put in the work to get it to today's version. It's no money printer, but for a guy like me, I will definitely run it in an IRA with 0.8 risk instead of 1 and keep investing (the standard error with 0.8 leverage is about 9-13%).
I currently have my modified version running in my smallest IRA account, and it is doing quite well.

Hi all, I've been following this thread for a while now, slowly working through the code to figure out what's really going on. I've made fairly good headway, but given my limited financial and mathematical knowledge, I'd like to run my understanding by the people following this thread.

From what I understand, there are three trading functions: allocate, allocVol, and allocSpy.

For allocate, we apply a minimization (optimization) based on the volatility-normalized average percent change of a set of securities over a given period (in this case the past 17 days). The result of the minimization is used to purchase or sell positions.

The allocVol function seeks to determine whether the WVF is indicating a downward trend for XIV, and if it crosses a threshold, we sell our positions in XIV.

Lastly, the allocSpy function balances the portfolio, looking for a downward or upward trend in SPY and purchasing the bear or bull positions respectively.

Let me know if my assumptions above are correct.

As for my questions, I'm having a hard time figuring out what exactly is going on in these lines of code:

    # One (0, 1) bound per stock: weights are long-only, between 0 and 100%.
    bnds = []
    limits = [0, 1]
    for stock in context.stocks:
        bnds.append(limits)
    bnds = tuple(tuple(x) for x in bnds)

    # Constraints: the weights must sum to 1 (fully invested, 'eq'), and the
    # volatility-normalized portfolio return must exceed context.eps ('ineq').
    cons = ({'type': 'eq',   'fun': lambda x: np.sum(x) - 1.0},
            {'type': 'ineq', 'fun': lambda x: np.dot(x, ret_norm) - context.eps})
    res = scipy.optimize.minimize(variance, context.x1, args=ret, jac=jac_variance,
                                  method='SLSQP', constraints=cons, bounds=bnds)

Questions: What are the constraints and what exactly are they doing for the minimization optimization? What is the result of the minimization function and how is it applied to find the proper allocations? What exactly is bnds = tuple(tuple(x) for x in bnds) doing?

In all my years of strategy development I had never used 3x leveraged ETFs. The initial trading strategy presented (which, by the way, did not do much) was an opportunity to see whether my methods would apply there too. No development time was needed: I could simply clone the strategy and go from there.

Even before starting to modify any part of the code, I knew volatility, drawdowns, and beta would be accentuated, since a 3x-leveraged-ETF portfolio is equivalent to having leveraged your portfolio three times with borrowed money, except that you did not borrow any to do the same job.

My intention was to leverage on top of that by 2 times, which would make it a 6-times-leveraged portfolio, something Q mentioned it might do using low-beta portfolio strategies.

The initial strategy design had a volatility of 0.16 and a max drawdown of –16.2%. I had also recognized that it had a scalable design. Leveraging by 2x should double these numbers, or close to it.

I viewed that as stable enough ground to push the machine. Even in that state, the original program had to answer to the equation A(t) = A(0) + n∙u∙PT. That was its signature.

Here is the original trading strategy's signature with its $10,000 Initial Capital:
http://alphapowertrading.com/quantopian/SPY_WVF_Returns_Orig_10k.png

If you want to improve on an existing trading strategy, you do not have many choices. There are only four numbers that matter, and they are all part of the above equation.

The first thing you can do is increase the initial stake A(0).

This requires absolutely no change in code: from outside the program you simply increase the initial trading capital. It will tell you whether the trading strategy is scalable or not. In this case it is, by design. Therefore, for me, it passed its first test even before running it. Those results were presented in my previous post.

Since the original trading strategy could support higher stakes with ease, I intended to use its last test as the basis for going forward. This transformed the strategy's equation into 10 times the original, giving 10∙A(t) = 10∙[A(0) + n∙u∙PT], where 10 times more capital is used to generate 10 times more in net portfolio value.

I could then start modifying the program, knowing that it was scalable, and make the last test its new normal, with equation A(t) = A(0) + n∙u∙PT, and go from there. My program modifications were tested using $1 million as initial capital.

If you design trading strategies using $10k as initial capital and do not test whether they are scalable, or do the test and find out they are not, then your strategy is not worth much. A stock portfolio is more, much more, than a penny-stock trading account.

I know that if I want to increase performance, all I have is A(t) = A(0) + n∙u∙PT. Whatever I do will have to impact those variables; otherwise, I am missing the boat. Cosmetic changes to a program will have no impact on that equation, and therefore no impact on the outcome.

The first step, anteing up 10 times the initial stake, was easily passed. There is no mystery there, unless someone (not to give names) comes out with 42 as an answer! But that post was deleted; I can understand why.

What was presented in the three-level test is executable. The numbers might appear high, but those were the numbers generated by the program modifications. I still have not read all of the original program code, but now I have some motivation to do so. I too want to understand better what is at play; there might be code snippets in there that I could transfer to my code library.

I will be back with some explanation of why it worked. I did not see any surprises in the outcome: what you see in those three tests is what I got. It is what came out of the simulations as run on the Quantopian servers, using their data, the same data used by the original program. All I did was find pressure points and apply my methodology to the problem. It transformed the above equation into A(t) = A(0) + (1 + g)^t ∙n∙u∙PT, and that is what boosted the outcome, as it should.

But anybody can do this. You can use your own trading strategies as a basis and apply the same principles; using the above formula will boost your own performance.

I would point out that my simulations are operating at the equivalent of 6 times leverage.

Somehow, it had to show!

Here are the issues I had with the algo (I skimmed the transaction logs of the backtest to try to identify any problems):

  1. The code seems to be composed of cut-and-paste from various other pieces. This doesn't automatically mean it is bad, but it does make me question its reliability.
  2. There are periods where the algo will hold over 60% XIV. I consider this super risky: if I wanted to take that level of risk, I could achieve even higher returns. I consider this unacceptable risk.
  3. There are periods where the algo will hold only TLT + TMF. Why both? Because of the duct-tape programming highlighted in #1.
  4. Since we don't have backtest data covering a recession, I wonder whether this would blow up in one.

@qqt, I think you have to start from this Minimum Variance w/constraint and work your way up.

Mohammad, valid points.
I totally agree with point #1. For a non-programmer like me, this is what I did: I spent a decent amount of time understanding the math behind it, and I like the idea. First I needed a prototype, and no other platform lets me prototype this, which is the cause of point #1. But if it works with minimal error, then there is no issue at all. I think it needs a two-year test run in a real live account to verify its trustworthiness; if it delivers decent returns (not bazooka returns like some of the latest algos), then the user has to decide whether to use it long term, and if yes, it needs a professional-grade code restructure or a brand-new implementation.
For me, since I love the idea, have spent enough time with it, and have seen it work the way I want, my confidence level is high, so I paid an Amibroker programmer to code a new program based on this idea for me ("trust but verify" is the key here).
On point #3: the latest algo uses TLT and TMF. I like to use the SPY allocation to control the TMF portion from the MVP allocation function, but yes, the TLT+TMF combo works (cheaper than TLT alone, and the leverage adds return), though you have to be careful here.
On points #2 and #4: if you stand as a long-term investor, it is maybe scary, but as a short-term or even swing trader I don't see a problem at all. (Also, allocation happens every week, plus a protective vol allocation kicks in if anything changes: still worried?) In the end, algo trading is not plug-and-forget (sometimes people forget this); it should be monitored, with parameters adjusted when you see fit.
I'm a trader; I'm looking at a 5-year range, not 20-30. If it works, good; if the market changes and the strategy is no longer relevant, then move on to the next one.

@qqt -

The code minimizes the variance, subject to constraints. Getting scipy.optimize.minimize to work is a bit of a beast. It is looking for a solution vector within the bounds 0 to 1 for each element. The first constraint is just the normalization of the vector; it must sum to 1.0. The second constraint is setting a lower limit on the trailing mean return, volatility normalized. The optimizer then attempts to minimize the variance in the returns subject to the two constraints and staying within the bounds.
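A minimal standalone sketch of that setup (illustrative only: the synthetic returns, the 0.0 return floor, and the function name are my assumptions, not the thread's actual code) might look like:

```python
import numpy as np
from scipy.optimize import minimize

def min_variance_weights(returns, min_norm_ret=0.0):
    """Minimize portfolio variance subject to (1) the weights summing to 1.0
    and (2) a floor on the volatility-normalized trailing mean return,
    with each weight bounded in [0, 1], as described above."""
    n = returns.shape[1]
    cov = np.cov(returns, rowvar=False)
    mean = returns.mean(axis=0)

    constraints = [
        # normalization: the solution vector must sum to 1.0
        {"type": "eq", "fun": lambda w: np.sum(w) - 1.0},
        # lower limit on the volatility-normalized mean return
        {"type": "ineq",
         "fun": lambda w: (w @ mean) / np.sqrt(w @ cov @ w) - min_norm_ret},
    ]
    res = minimize(lambda w: w @ cov @ w,   # objective: portfolio variance
                   np.full(n, 1.0 / n),     # start from equal weights
                   method="SLSQP",
                   bounds=[(0.0, 1.0)] * n,
                   constraints=constraints)
    return res.x
```

Capping a single ETF (say, limiting XIV's weight) is then just a matter of tightening its entry in `bounds`.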

One note here is that CVXPY is now available. It might be better, if the problem conforms to the CVXPY rules.

I'd also note that depending on its flexibility, once it is released fully, the new Q optimization API might work here (although under the hood, it is just CVXPY, as I understand).

Also, note that limits can be set, on a per ETF basis, when using an optimizer. So, if XIV is getting too much weight, for example, it can be limited.

Overall, I'm still befuddled as to the basic idea of this thread, if there is one. For one thing, is there a set of ETFs we are dealing with? And why? I see a lot of different symbols floating around. Long only? Long-short?

I think I achieved my goal, which was to get greater or equal returns with the same or lower drawdown; everything else going on here is mental gymnastics with 25%-50% DD.

It's interesting to compare the original algo in the first message with the later versions. Although the later versions appear to perform better they aren't as coherent to me.

The original has two functions: allocVOL() and allocateSPY().

  • allocVOL() is scheduled every day after market open and trades between XIV (inverse VIX ETF) and VXX (VIX ETF).
  • allocateSPY() is scheduled every day before market close and trades in and out of SPY.

There are a number of later versions. Most of them seem to have 4 functions: allocate(), trade(), allocVOL(), and allocateSPY().

  • allocate() is scheduled every day 60 minutes after market open. However, the values calculated in allocate() are only used by trade(), which is scheduled once a week.
  • trade() trades the securities in context.stocks, which, depending on the version of the algo, includes XIV, TLT (long-term treasuries), and stock indexes.
  • allocVOL() is scheduled every day before market close and now only closes XIV when WVF crosses below a threshold.
  • allocateSPY() is scheduled every day after market open and trades between SPY and TLT.

I find it confusing that the later versions have multiple functions trading the same securities with different rules and on different timeframes. But unless the backtesting is invalid, it all seems to work quite well, with good returns and low DD.

Grant, you developed an interesting and remarkable strategy using CVXPY. I looked at the notebook you referenced on the optimization API and found it very interesting. Since I value your opinion, would you say it is now the way to go? Even preferable to SLSQP?

Hi Guy,

I can't say for sure. My understanding is that CVXPY is the way to go (or the Q optimization API), if the problem can be formulated properly.

Tried to deploy a couple of the algorithms above for live trading as a test, but I get an error telling me that there was a problem loading my live algorithm. Any ideas?

Thanks in advance.

@Pherroz you should post the backtest here

I didn't realize this but the base currency in the account needs to be USD, which was why I was receiving an error.

Thanks guys.

The following is my post-analysis of this trading strategy. It might interest some.

http://alphapowertrading.com/index.php/papers/247-the-leveraged-leveraged-portfolio

One thing I'm very skeptical about (for the recent algorithms with astronomical returns) is that they depend heavily on XIV, a financial instrument that's only been around for about 5-6 years. I'm not trying to yell conspiracy or anything, but is the VIX a good indicator of what it's really trying to measure? And is XIV a reliable tracker of its inverse? Is there any possibility of failure with either the VIX or XIV, or any of the volatility indicators for that matter? This comes more from a lack of understanding of how the prices of these securities are set than from skepticism about the actual code.

@qqt dude if XIV fails you will have to worry about more serious problems than your money.

You do simulations to find out: what if? It is a simple question. But, it is also the sole purpose of having portfolio simulators.

If in the 90's you did a simulation on Apple, you might have concluded then that it was going down the drain. If you do it today, you might go: what a great company. This to say that even if you did a simulation, it was over past data, history.

What my simulations showed is that one could push on his/her machine for some 2,222 days without seeing it break down, and this at 6 times leverage.

Now, I could not have known the strategy could do it, if I had not done the simulation. And the simulation said: it is feasible. At least, it could have been done, in the past.

What should one retain from this? Everyone will have a different view. Some will not be bothered considering the results since they do not leverage, whatever the scenario. Others will wonder if the added risk was worth the effort. Others will just express doubts and continue on their way. It is not up to me to decide those things. It is up to each one of us. What we like and don't. How far do we want to go, and how fast?

But, whatever, it remains a CAGR game. And that is how you keep the score.

For the added risk, which was shown to be of about the same order as a non-leveraged portfolio, you had this trading strategy that could literally fly. It is understandable: it “flew” at 6 times leverage for the price of a 2-times scenario.

My view is as said before. At least, it showed that it was possible over past data.

Correct me if I'm wrong, but it seems that the portfolio will go long XIV when the VXX WVF goes over a threshold and makes no attempt to guess a peak. How do you know the WVF won't keep climbing like it would in 2008? Are we banking on the fact that XIV only makes up 30% of the portfolio and doesn't rebalance, so you'd only lose 30% at most (not accounting for SPY allocation losses)?

@Kory, Could you describe the EMA? I've tried a couple variations all with horrible results. If you describe it, I will try to code it up!

Thanks!

John

@John, My buddy Jacob and I have already coded it. Please see the algo attached for the 100-day WVF indicator with 2 of its EMAs (Smoothed WVF). One is a 10-day EMA of the WVF and the other is a 30-day EMA. Using the EMA as a variable trigger is much better than assigning a fixed value trigger.

I originally coded this algo in MultiCharts with PowerLanguage. This is the strategy I originally presented to Cesar Alvarez at our NWTTA Seattle Meetup and it is also the one that inspired this whole thread along with Cesar's. Unlike a lot of the recent permutations here in this thread, this one uses absolutely no leverage or leveraged funds and trades only XIV and TLT. Here's what it looks like in MultiCharts (please note that for the Quantopian version, I made the algo hide in TLT instead of cash when not allocated to XIV):
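For readers who want the indicator outside MultiCharts or Quantopian: the Williams VIX Fix is simply the percentage distance of today's low below the highest close of the lookback window. A pandas sketch of a 100-day WVF with its 10- and 30-day EMAs (a reconstruction with column names of my choosing, not Kory's actual code):

```python
import pandas as pd

def wvf(close: pd.Series, low: pd.Series, lookback: int = 100) -> pd.DataFrame:
    """Williams VIX Fix: how far today's low sits below the highest close of
    the last `lookback` days, expressed as a percentage of that highest close."""
    highest = close.rolling(lookback).max()
    raw = (highest - low) / highest * 100.0
    return pd.DataFrame({
        "wvf": raw,
        "ema10": raw.ewm(span=10, adjust=False).mean(),  # fast smoothed WVF
        "ema30": raw.ewm(span=30, adjust=False).mean(),  # slow smoothed WVF
    })
```

A variable trigger then compares `wvf` against one of its own EMAs instead of a fixed threshold.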

One way to hedge with these strategies is to buy some OTM VIX or VXX calls. With the amount of profit these strategies generate from mostly shorting volatility, one could certainly spare 20% of the profits to buy some long VIX calls as a hedge. You'd sleep much better at night.

That does better with default slippage and commissions, I don't know why.
Meanwhile there is $34k of unaccounted-for negative cash. The return is 280%.

2017-02-21 13:00 pvr:206 INFO PvR 0.1815 %/day cagr 0.5 Portfolio value 134686
2017-02-21 13:00 pvr:207 INFO Profited 124686 on 44513 activated/transacted for PvR of 280.1%
2017-02-21 13:00 pvr:208 INFO QRet 1246.86 PvR 280.11 CshLw -34513 MxLv 1.78 RskHi 44513 MxShrt 0
2017-02-21 13:00 pvr:293 INFO 2011-01-04 to 2017-02-21 $10000 2017-02-22 20:47 US/Eastern
Runtime 0 hr 7.2 min

Kory, your strategy can scale up pretty well too. Impressive.

@Kory, Thanks so much! I was thinking of trying to also go long vxx when something like the opposite conditions exist. I was thinking this would drive the beta towards 0 and help in bad times when volatility is going crazy. Have you tried something like that already?

It's ok, the real return is 1160%; the 100x starting capital makes for a lower percentage of negative cash.

@Guy, Thank you. It trades very infrequently so that's why it scales up well.

@John, yes, I have 2 algos that can go long VXX when not in XIV. The problem is that if I chase VXX upside, it doesn't work as well as the algos that go long XIV only during a bull market (obviously). Yes, it can be very profitable and has zero or negative beta during the 20% of the time when the VIX spikes, but you would end up losing most of those gains during the other 80% of the time when the VIX declines.

In my experience, playing long VXX using mean-reversion techniques on the daily time interval yields the best risk-adjusted results. To play long VXX on smaller time intervals, you need to use momentum techniques.

Kory, your trading strategy's governing equation is n∙u∙PT, and your methodology fixes n and PT in time. In doing so, you made your trading strategy scalable.

The trading unit being fixed at 100% of equity, it becomes the scaling factor. Your trading strategy is therefore scalable by design, responding to its bet size.

My test was a verification that you did in fact do the equivalent of fixing n and PT. But that does not diminish your trading strategy, in the past it would have done what you have shown under those conditions.

Not that many trading strategies operate at 50% CAGRs and above. And scaling up might be one way of taking full advantage of such a strategy. I find it worth investigating.

@Guy,

Thanks for your analysis. I personally find the best method to trade volatility is to have a portfolio of 5-10 momentum/trend-following algos that trade XIV only. I've found strategies that attempt to play long VXX will sooner or later suffer 50% or more Max DD. Additionally, you need to allocate 2-5% of your portfolio to hedges against Black Swans like buying OTM SVXY puts or VIX calls or long VIX futures or even VXX calls.

I think that traders of these strategies need to embrace the fact that they will have to endure a 20-50% Max DD at some point, even during a bull market, regardless of how well they optimize their strategies. But eventually the strategies will recover and make new highs as the VIX peaks out and mean-reverts downward. Plus, if you hedge against Black Swans, you likely won't lose in the long run and will sleep better at night. UNTIL the bear market arrives. Arguably, at that point, traders simply have to switch their XIV-only momentum algos to trade VXX-only or both XIV/VXX.

Just my 2 cents

Kory, I agree. And as you said, one can always buy protection against outliers. Even override manually if needed or desired.

Nonetheless, your strategy has a welcomed property which I appreciate: scalability.

I will go on the assumption that what I see is not curve fitted. I have to say I did not look that closely at all the details of this trading strategy. Only performed some tests on how it behaved to see if I could be interested.

Assuming that one is ready to buy protection for the just in case, there are a few things one can do with that strategy. I do not think a portfolio manager would be interested in a trading system made to operate with $10k. I too would consider it a total waste of time and resources. However, a scalable trading strategy merits investigation to determine its limits.

The governing equation of a scalable strategy is: (1+L)∙k∙A(0)∙(1+r)^t. To illustrate the point, I did the following simulations:

http://alphapowertrading.com/quantopian//Kory_SPY_WVF.png

The first panel shows the strategy being 100% scalable: portfolio metrics are maintained for k ranging from 10 to 1,000.

Panel two shows the use of leverage on the $1 million scenario, where we can see volatility, beta, and drawdowns increase, just as the CAGR and total return do.

The strategy being scalable, it would also support panel three with its $10 million scenario. Portfolio metrics were about the same as in panel two, with the same CAGRs.

Note that I have not changed any of the code, except for k and L which are factors set even before undergoing a simulation.

My question would be: which scenario would a portfolio manager prefer knowing that protection can be bought and leverage easily paid for?

With this, I will certainly study your strategy further and see how it works. And attempt to reduce its volatility and drawdowns. My fear is that it might be somewhat curve fitted.

Quantopian is looking for strategies with low beta, low volatility and low drawdowns. It is a reasonable quest.

But, and it is a major one, there are no high returns available with this kind of strategy unless you are ready to apply leverage, up to six times as in Q's case.

The math of the proposition does not seem to comply with the objective, which should be the highest possible CAGR.

By leveraging, one increases volatility, beta, and drawdown, even if you start with low figures. MPT will emphasize that higher returns imply higher risk.

There is a cost for leveraging. It is only if the added return is higher than the leveraging costs that it might be worth the effort.

If you increase the CAGR by 3% and pay 3% for leverage, you are not ahead. You are just working for your broker! And if you want a leverage factor of 6, you will need to pay interest on 5 times your ongoing equity. This could be a major strain on a low-profitability scenario, even if the output would tend to 6 times.

Your CAGR rate has to compensate for these charges. Otherwise, over time, you will be paying to underperform.
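As a back-of-the-envelope check, the break-even logic above can be written as a first-order sketch (my own simplification: it ignores compounding and path effects):

```python
def net_cagr(base_cagr: float, leverage: float, financing_rate: float) -> float:
    """Approximate net growth rate of a levered strategy: the levered return
    minus interest paid on the borrowed (leverage - 1) portion of equity."""
    return leverage * base_cagr - (leverage - 1.0) * financing_rate

# 6x leverage on a 10% CAGR, paying 3% on the borrowed 5x: 0.60 - 0.15 = 0.45.
# Levering a 3% CAGR 2x at a 3% financing rate: 0.06 - 0.03 = 0.03, no gain:
# exactly the "working for your broker" case described above.
```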

I can easily imagine that the $250 million put at Q's disposal by Point72 could be distributed to several low drawdown strategies. Maintaining low drawdowns has a cost. You might have a smoother equity curve, but it translates into lower average returns, as if underplaying one's hand.

Consider this scenario. Take 10% of the $250 million allocation and put it on Kory's strategy. It would result in something similar to what was presented in the third panel of the table from my previous post.

One could even estimate the outcome with the formula provided. Make leverage = 1.5, initial stake = $25 million. The output: 1.5∙2.5∙10,000,000∙(1+74.23%)^t = $754,675,000.

The simulation came in at $754,650,000. Close enough! The impact of the margin cost is easy to estimate as well: 1.5∙2.5∙10,000,000∙(1+74.23% – 3.00%)^t = $678,350,975. Total margin cost: $76,299,025. Not a trivial sum after all.
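Guy's governing equation and margin-cost estimate are easy to parameterize; in the snippet below the horizon `t` stays a free parameter, since the post does not state it:

```python
def scaled_outcome(A0: float, k: float, L: float, r: float, t: float) -> float:
    """Governing equation of a scalable strategy: (1 + L) * k * A0 * (1 + r)**t."""
    return (1.0 + L) * k * A0 * (1.0 + r) ** t

def margin_cost(A0: float, k: float, L: float, r: float, c: float, t: float) -> float:
    """Estimated cost of leverage: the gross outcome minus the outcome with
    the financing rate c deducted from the CAGR, as in the example above."""
    return scaled_outcome(A0, k, L, r, t) - scaled_outcome(A0, k, L, r - c, t)
```

With L = 0.5 (a 1.5x leverage factor), k = 2.5, and A0 = $10,000,000, this reproduces the 1.5∙2.5∙10,000,000 prefactor used above.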

The following table illustrates this:

http://alphapowertrading.com/quantopian/Kory_SPY_WVF_Extended.png

I do not have portfolio metrics for a Quantopian acceptable scenario, so I used expected CAGRs instead. But, Q should be able to fill in the blanks.

My preliminary analysis is given in the above table. The columns of interest are: the effective rate of return (CAGR after leveraging costs), total cost of leverage, and total net return.

Whatever game one wants to play, one should study the impact of the trading rules being put in motion before even applying them in real life. Otherwise, you might have some surprises going forward, and some of them not beneficial to your portfolio.

I remain with the same caveat as expressed in my last post. I have not completed my homework on this trading strategy.

1.5∙2.5∙10,000,000∙(1+74.23% – 3.00%)^t = $ 678,350,975. Total margin cost: $76,299,025. Not a trivial sum after all.

Can total margin costs be added to the custom chart? I'd like to see how that can be done in code.

@Kory,

Could your algo be live traded on Robinhood as is?

Or IB, or both....

@Alex,

Should be. Although officially I do not recommend that you do it.

Here is a small sample of a stop loss level test with this trading strategy:

http://alphapowertrading.com/quantopian/SPY_WVF_Orig_Stops_Lev.png

I used the $1 million initial capital scenario as base of comparison with its 3% stop loss and no leverage. This is Kory's program version with $1M, to which is applied 1.5 leverage and increasing stop loss percentages.

As the stop loss increases, the total net return increases up to a level, tapers off, and then declines. In the process, the strategy's beta, volatility, and drawdown increase, as they should. Also, the alpha, Sharpe, and Sortino ratios gradually fall as the stop loss increases. The explanation for this behavior is understandable and expected: increase the stop loss and you simply increase portfolio volatility.

By increasing the stop loss, we accept more price variation, since the stop loss serves as a limiting factor on acceptable price variability. The result is fewer whipsaws but higher drawdowns, almost by definition. The stop loss serves as protection by limiting drawdowns.

This set of tests demonstrates that the trading strategy needs a stop loss: not too small, as that eats into potential profits, yet not so large that it no longer matters. A kind of Goldilocks setting in the range of 3% to 6%.
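The whipsaw-versus-drawdown trade-off can be illustrated with a toy model (my own simplification, not Guy's actual test harness): hold one position, exit when it falls `stop_pct` below its entry, and re-enter the next day.

```python
import numpy as np

def equity_with_stop(daily_returns, stop_pct):
    """Equity curve of a single always-re-entered position whose loss per
    entry is capped at stop_pct (assuming execution at the stop level)."""
    equity = 1.0    # realized equity
    position = 1.0  # current position value relative to its entry price
    curve = []
    for r in daily_returns:
        position *= 1.0 + r
        if position <= 1.0 - stop_pct:
            equity *= 1.0 - stop_pct  # stop hit: book the capped loss
            position = 1.0            # re-enter fresh
        curve.append(equity * position)
    return np.array(curve)
```

Sweeping `stop_pct` over a noisy return series shows the pattern described above: tight stops rack up many small realized losses (whipsaws), while wide stops let drawdowns run.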

I started digging into the code and I don't like some of the stuff I see. Without changing the strategy's trading logic you can force it to do more, even much more. The strategy's behavior changes as you increase the pressure.

And this does raise a question or two. If you don't like a strategy's behavior and it does make money, should you excuse it, play it anyway, or explore further? You have a strategy that is designed to do little, meaning having little going for it. But, by setting your controls at higher levels you can force it to produce more. Will you accept the higher risk and higher volatility? Would you not investigate further to know more before committing any resources to it?

Dear all,
Thank you for showing your strategy. Everything is impressive. I tried to backtest this strategy in my local environment, but I cannot do it. I downloaded historical finance data from Yahoo Finance (i.e. VXX https://www.google.co.jp/search?q=VXX&oq=VXX&aqs=chrome..69i57j69i60l3j69i59l2.4628j0j7&sourceid=chrome&ie=UTF-8 ), but there are differences between the Yahoo Finance data and the Quantopian data. Could you tell me how to solve this problem?

Here is another series of tests done on this trading strategy using Kory's version.

We can change some of the strategy's underlying assumptions which will have an impact on performance. As you increase or decrease the pressure, it will change the strategy's behavior.

It was previously demonstrated this strategy was scalable. This can be seen in the first 3 lines of the table below where the initial stake rises while the CAGR remains stable; the same goes for its portfolio metrics.

http://alphapowertrading.com/quantopian/Kory_SPY_WVF_other_tests.png

In previous tests, leverage up to 1.95 was used. Most of the present tests were performed using 1.50 while changing other parameters to see the general behavior and their impact on portfolio metrics, including estimated leveraging costs.

It was found that a 5% stop loss might be a reasonable compromise (between 3 and 6%) for this trading strategy. It reduced the number of whipsaws and generated higher CAGRs. These new tests were performed using the 5% stop loss.

The window length (moving average crossover) was changed from 10 : 30 to 7 : 34. Not a major change, a trivial one at best. But it enabled the strategy to react a little earlier and stay in position a little longer. This increased performance: fewer whipsaws, catching more of the drift. These are the same kind of steps one might take in an optimization process, and these particular changes can still be viewed as reasonable numbers. Nonetheless, they are arrived at after the fact, although they are not that different from the originally used numbers.

What I found troubling is the strategy's trading behavior as you increase the trading capital. The problem seems partially related to Quantopian's 2.5%-of-available-volume trading rule when there are cash reserves.

Parameter settings similar to line 3 of the above table produced these stats:

http://alphapowertrading.com/quantopian/Kory_SPY_WVF_other_tests_L3.png

When I looked closer, I could see a couple of big trades. The rest were single-digit trades, most often below 5 shares at a time (like one or two shares at a time). The chart below was not an aberration; it would do this all day, almost every trading day.

http://alphapowertrading.com/quantopian/Kory_SPY_WVF_other_tests_L3_trades.png

It somehow renders the strategy impractical, and it raised questions. Was the 2.5% volume trading rule appropriate on high-liquidity stocks when playing with higher stakes? That was the first thing I suspected. A surprising thing is that if I changed the trading hour to 11:30 instead of the original 9:40, then it behaved as it should: a few trades on each day it traded. But the hour turned out not to be the problem.

An easy way in Quantopian to reduce general market exposure is to reduce the leverage to below 1.0. Lowering leverage is just another tool to control a trading program. However, I really do not want the trading behavior displayed in the above chart. Reducing leverage to below 1.0 is a simple way to reduce not only volatility but also drawdowns.

The reason for the above behavior seems related to the use of leverage below 1.0. Doing so leaves cash reserves that the program will try to use when rebalancing, in small amounts, thereby generating the one- and two-share trades. The program should not behave that way, but it does. It appears as if it is trading on market noise. Or maybe these trades are just due to parameter rounding.
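The mechanism is easy to reproduce outside Quantopian: rebalancing to a fixed fractional weight after every tiny price move turns market noise into a stream of one- and two-share integer adjustments. A self-contained illustration (all numbers made up):

```python
# Rebalance a $1M portfolio to a fixed 50% weight after each tiny price move.
price = 100.0
cash = 500_000.0
shares = 5_000
small_trades = 0

for day in range(250):
    # deterministic +/- 2 bp daily wiggle standing in for market noise
    price *= 1.0002 if day % 2 == 0 else 0.9998
    value = cash + shares * price
    target = int(0.5 * value / price)  # whole-share target for a 50% weight
    delta = target - shares
    if delta != 0:
        cash -= delta * price
        shares = target
        if abs(delta) <= 2:
            small_trades += 1  # the one- and two-share orders described above
```

Nearly every rebalance here is a one- or two-share order; at higher stakes, with fills throttled to 2.5% of volume, this becomes the drip of tiny trades observed in the backtests.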

Either way, I do not see this as acceptable, since the program does tens of thousands of such trades (some tests showed >100,000 of them). It is worse if you go for 1.5 leverage: low-digit trades can exceed 400,000!

For me, this is unsatisfactory. It calls into question the reliability of the test results and the underlying methodology. And with these doubts in place, it forces me to verify all the time whether the trading behavior is correct, and this for each trade. That is too much; I should not have to question this. Is this what I want my program to do? Do I have a reasonable assessment of the situation? Presently, it is as if saying: not ready for prime time.

I understand the Quantopian program is still in beta, but this can be a serious problem.

Hi guys,

Just wanted to give you all a heads up on another area of concern, namely how reliable the XIV data is for the backtested period (back to 2011).
Look at some trades the program does in simulation mode: it is trading around $169.85 per share on 2011/05/03. On the same date, XIV was actually trading around $14. I verified the XIV figures on both Yahoo and Stockcharts.com.

So what is going on? Some kind of reverse split at 12:1? This makes me uneasy about using the XIV/VXX data in the algorithm.

Can somebody offer a reassuring explanation?

An issue that I have with this strategy, and the reason I stopped trading it live, is that it relies heavily on bonds for its returns. Given a rising-rate environment, that will affect returns; also, if you move to cash, the drawdowns seem to become highly correlated with the market, which will increase DD significantly. Either way it's still tradable, just expect much lower returns, as TLT/TMF will be significant drags on performance, unless somebody does a backtest on how the yields offset the price losses of bonds.

Hello d'Adesky,
You are correct; you are looking at a split-adjusted price. Dividends, on the other hand, get put into your cash. See here for more clarification: https://www.quantopian.com/posts/the-pipeline-api-dividends-and-splits-what-you-need-to-know
Have a good day.

Hello,

I put this together from some bits and pieces from a few different places. It suffers from a pretty big hit in 2015 and 2016 mainly because this is either all in XIV or between TLT and TMF. It's not very diverse but puts up some interesting numbers.

Thanks to everyone who contributed to this algorithm. I made a change to one of the algorithms in this post to manage the downtrend of bonds: basically, I added an SMA 200 condition (TMF price above SMA 200 of TMF) for placing TMF trades. It did not bring down the returns too much, but it gives you protection when bonds crash. Basically I have added the below condition in the place_order function. I am not sure whether it is the right way.

if stock == context.bearish_stock:  
    # For TMF, only buy when the current price is above its 200-day SMA  
    price_hist = data.history(context.bearish_stock, 'price', 200, '1d')  
    context.ma_tmf = price_hist.mean()   # 200-day simple moving average  
    context.cp_tmf = data.current(context.bearish_stock, "price")  
    if context.cp_tmf > context.ma_tmf:  
        order_target_percent(stock, percent)  
else:  
    # all other securities trade unconditionally  
    order_target_percent(stock, percent)
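Outside Quantopian, the same condition reduces to comparing the latest price against its 200-day simple moving average; a minimal standalone sketch (the function name is mine):

```python
import pandas as pd

def sma_filter_ok(prices: pd.Series, window: int = 200) -> bool:
    """Allow a new position only when the latest price is above its
    `window`-day simple moving average (a simple regime filter)."""
    sma = prices.rolling(window).mean().iloc[-1]
    return bool(prices.iloc[-1] > sma)
```

Taking `price_hist.mean()` over the last 200 daily prices, as in the snippet above, is exactly this SMA evaluated at the latest bar.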

I would like to receive feedback from the experts. Thanks for your help.

New to the community here, so sorry if this is a newbie question, but... is there a Robinhood-friendly version of this algo?

This should be RH-friendly, as I trade this on an IB cash account. You might want to put some retry logic into it, as RH seems to cancel orders without reason.

@Ramesh

Your algo seems interesting, but the backtesting takes much longer than the others, maybe because there are a lot of calculations in your algo. And could you please explain a little bit about your algo's idea?

Cheers

Thomas

"Peter Bakker Yesterday: this should be RH friendly as I trade this on an IB cash account. You might want to put some retry logic into it as RH seems to cancel orders without reason."

Peter, is there an example of order retrying logic somewhere? It would save me the time to code from scratch...

@takis You may need to modify a bit to get it to fit with the algo in question, but here is some code I've used in the past.

def retry_cancelled_order(context, data):  
    # iterate over a copy, since the list is mutated inside the loop  
    for order_id in context.todays_orders[:]:  
        original_order = get_order(order_id)  
        # status 2 means the order was cancelled  
        if original_order and original_order.status == 2:  
            retry_id = order(  
                original_order.sid,  
                original_order.amount,  
                style=LimitOrder(original_order.limit)  
                )  
            log.info('order for %i shares of %s cancelled - retrying'  
                         % (original_order.amount, original_order.sid))  
            context.todays_orders.remove(original_order)  
            context.todays_orders.append(retry_id)  

thank you Chris; this will save me a bunch of time.

Just wanted to point out that:
context.todays_orders.remove(original_order)
should probably be:
context.todays_orders.remove(order_id)
... Apparently some actual debugging, not by me, but by Peter Bakker in a different thread figured this out. "I debugged it as it failed and it confuses the object for the ID"

As someone who is learning Quantopian high-frequency trading, I'm really excited to look at your algorithm. Thank you!

Hi All
If I want to implement a delay of 5 seconds between trades, what should I add to this algo?
Example:
buy 100 QQQ
DELAY 5 SECONDS
BUY 100 TMF....

This algo suffered a big drawdown on Wednesday. This also means that, in a 2008-style scenario, this algo would go through hell.

On the other hand, the big drawdown on Wednesday is a good chance to refine the algo.

It looks like the Yahoo and CBOE VIX data have gone out of sync. Yahoo has got stuck at a historic value...

Yahoo VIX vs CBOE VIX  
...... 10.4 10.4
...... 10.4 10.42
...... 10.4 10.65
18/May 10.4 15.59  

I've been running the two systems parallel to check for discrepancies before moving over to CBOE before end June.

The Yahoo feed has been discontinued - I'd use the CBOE Vix now.

Listening to and learning from others, I had improved this live-trading algo; it does not have the smoothing mentioned in the posts above, though I think it should. I have attached the algo that I traded till 2017-05-11 with a 25K starting balance and a return of 41.5% in 7 months. I stopped it as another volatility algo I developed in the meantime is a bit more stable. Anyway, I thought I'd share it, as this algo made some substantial money, but it did indeed lose some cash last Wednesday... part of the game! I'm not trading this anymore, but I do think it has some room for improvement; I might develop it further in the future and put it live again once I can combine futures and equities in the equity calendar.

NB1: the my_assigned_weights() function that calls the optimizer is redundant and only there for fun and games. It is superseded by a momentum-based function that orders the side stocks.

NB2: most of the time this algo has a leverage of around 0.5, hence the alpha could be a lot higher. However, I use the cash component in this algo to limit the DD, and therefore limit the return. As I trade real cash, I tend to use my cash to define exposure. Lever it up to the max and you can get more out of this algo, but also a few ulcers and heart attacks...

Peter, I'm not sure if you're aware, but your algo is not executing the side-stock ordering code, because context.n always equals 0 and the function returns, since you are not running the my_assigned_weights() function. The watch function doesn't seem to trigger at all either, with just XIV trading; the threshold might be too high. It seemed odd to me, because I was able to remove 90% of the code in your algo and it still worked identically.

I left this in, as a lot of people want to order the side stocks with the optimizer. I keep my algos as single-minded as possible, so I only order the VIX-related elements. order_side_stocks() gets triggered, but as I don't run the my_assigned_weights() function, it is void. Add my_assigned_weights() back in and you'll have the side stocks. Watch should be triggered... I'll check.

cleaner code

why is the first trade with 1 leverage?

No real reason, I just wanted it to trade. Best to use the default indeed, context.maxBuy.

Hello All......grateful to have found this community.

Question...be nice please. I can program almost anything using Metastock; however, the code you guys are using is foreign to me.
Regarding the source code here: has anyone translated it to Metastock already? If not, I would be happy to do so (and share it, of course) if someone would kindly tell me what the source code says/does in layman's terms. I suppose I could fumble around with it for weeks, but I thought I'd ask first.

Regards

After years of working with Quantopian, I decided to use this strategy to invest my own money. Now that private investments are going to be discontinued, it seems like a good opportunity to share my best version.
Nothing makes sense anymore; if someone ports it somewhere, let me know.

Thanks Martin. Attached is a version that keeps the leverage in check and that has some basic trading guard rails.

When I backtested the original algo from 2012 to present, I got a beta of 1.06. So the returns are great, but with greater beta.

thanks
-kaaml

@Peter, you could push this strategy a little bit more, for instance by loosening its allocation constraints in the following code snippet:

    for i, stock in enumerate(context.stocks):  
        p = allocation[i] * 0.6 * context.leverage  
        rebalance(context, data, stock, 0 if p < 0.05 else p)
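To see what the two parameters do in isolation, the floor logic from the snippet can be pulled out into a plain function. This is only a sketch for experimentation: the function name and standalone form are mine, not part of the algo, and `rebalance` itself is left out.

```python
def scaled_weights(allocation, factor=0.6, leverage=1.0, floor=0.05):
    """Scale each raw allocation by factor * leverage, then zero out
    anything that falls below the floor (mirrors the loop in the snippet)."""
    weights = []
    for a in allocation:
        p = a * factor * leverage
        weights.append(0.0 if p < floor else p)
    return weights

# A 5% raw allocation gets scaled to 3%, falls below the floor, and is dropped.
print(scaled_weights([0.05, 0.45, 0.50]))
```

Raising the factor or lowering the floor lets more (and larger) positions through, which is exactly the return-versus-drawdown trade reported below.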

Here are my tests results just changing those two parameters:

Allocation factors:

As the allocation increases and more trades are allowed, total return increases as well as max drawdown. Alpha increased, beta not so much. Sharpe and Sortino ratios remained practically the same. Volatility increased a little.

For the 5% of added drawdown, you get $2M more. Is that trade-off acceptable?

Using different ETFs and optimising since inception.
This delivers a Sharpe ratio of over 2.2 .
Nice algorithm @Peter

All these strategies are useless with the VIX basically at its lowest level since 2011. If you run backtests on VIX data going back to 2007 with simulated data, you will see drawdowns of 60%+, if they don't blow out your whole account completely. Be wary with real money at record-low vol.

Elsid, very good point - the market hasn't seen these types of VIX lows before. A spike up to even 18-20 is going to cause a huge drawdown

True, if this were your only running algo. The algo tries to profit from the movement up and down, which will always be there as the curve changes all the time. Personally I have a maximum of 20-30% of my money in this type of algo (and I have 3 versions ranging from very aggressive to not so aggressive). The hardest thing is an algo that reliably picks up long vol when it's time to do so. As VXX and UVXY both have a natural slope to zero, one has to get the timing right. This specific algo does not do this; others do, but they are more complex and risky.
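The "natural slope to zero" of long-VIX products can be illustrated with a toy compounding exercise. The daily roll cost below is a purely hypothetical constant, not a measured figure; real roll yield varies with the VX term structure.

```python
# Toy illustration of the structural decay of a long-VIX ETN such as VXX,
# assuming a hypothetical, constant daily roll cost while spot VIX is flat.
daily_roll_cost = 0.0025  # ~0.25% per day: an illustrative assumption

value = 100.0
for day in range(252):          # one trading year
    value *= (1.0 - daily_roll_cost)

# Even a modest daily drag compounds to roughly half the starting value.
print(round(value, 2))
```

This is why timing matters so much on the long-vol side: the position bleeds every day the spike does not come.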

BTW, the original algo had a rather naive switching condition:


    #Buy XIV when WVF crosses over the limit  
    if(WVF[-2] < WFV_limit and WVF[-1] >= WFV_limit):  
        order_target_percent(vxx, 0.00)  
        order_target_percent(xiv, 0.30)  
    #Sell XIV and buy VXX when WVF crosses under the limit  
    elif(WVF[-2] > WFV_limit and WVF[-1] <= WFV_limit):  
        order_target_percent(xiv, 0.00)  
        order_target_percent(vxx, 0.10)
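For readers wondering where the WVF series in that condition comes from: the Williams VIX Fix is commonly computed as the percentage distance of today's low below the highest close of the past N days (22 by convention). A minimal pandas sketch, with synthetic data standing in for real prices:

```python
import numpy as np
import pandas as pd

def williams_vix_fix(close: pd.Series, low: pd.Series, lookback: int = 22) -> pd.Series:
    """Williams VIX Fix: how far (in %) today's low sits below the
    highest close of the past `lookback` bars. Spikes when price drops
    sharply, mimicking a VIX-like fear gauge from price data alone."""
    highest_close = close.rolling(lookback).max()
    return (highest_close - low) / highest_close * 100.0

# Synthetic random-walk prices, purely for illustration
rng = np.random.default_rng(0)
close = pd.Series(100.0 + np.cumsum(rng.normal(0, 1, 60)))
low = close - rng.uniform(0, 1, 60)

wvf = williams_vix_fix(close, low)
```

The crossing condition in the snippet then just compares the last two values of this series against the fixed limit.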

So nobody talked about this strategy again after 2018. I guess it's because of the XIV implosion/meltdown, right? Has anyone used an alternative ETF/ETN with this strategy lately?

Yes, I'm trading a variant of this strategy live... I have added a few factors and guardrails but it is doing fine

Picking one or two examples at random, there are some pretty awful performances here. It's always easy to see the errors after the event!

Here is one of the better examples, but it has still been a disappointing and volatile few years since August 26th 2017, the date this particular version was published. The algorithm is flat since that date, with far higher drawdowns than were seen in the original backtest. Given some years of experience in this game, I cannot say that I am entirely surprised.

Good of you to show this, @zenothestoic... Yes, this specific strategy has quite a few problems. If people want to use it for live trading: don't.
The main issue is that it looks at VXX; as VXX is a derivative, you should be looking at the VX futures and their term structure to make decisions. The second is that the values used (14, 50, etc.) are optimised. I wouldn't do that unless you have a clearly bounded factor, and even then the values should be dynamic in some way.
Finally, when I wrote this algo, I took a few things from others and combined them creatively. I learned a lot from that, about principles and such, but I really didn't understand the microstructure of the VIX, the VX futures, VXX, and how they behave in times of stress. I have traded this stuff comfortably for a few years now, and I still haven't had a drawdown beyond 16%, with solid profit in most months... but that just means I have been lucky ;)
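One simple way to "look at VX and the VX structure" rather than VXX, as suggested above, is the ratio of the second-month to the front-month future. This is a hedged sketch with hypothetical quotes; the function and thresholds are mine, not from the algo.

```python
def vx_contango(vx_front: float, vx_second: float) -> float:
    """Ratio of second-month to front-month VX futures prices.
    > 1.0  => contango: the curve slopes up, producing the roll drag
              that bleeds long-VIX ETNs like VXX.
    < 1.0  => backwardation: often a sign of market stress."""
    return vx_second / vx_front

# Hypothetical quotes, purely for illustration
ratio = vx_contango(15.0, 16.5)
print(ratio)  # → 1.1, i.e. the curve is in contango
```

A dynamic rule could switch regimes on this ratio instead of hard-coded constants like 14 or 50, which is one way to address the over-fitting concern raised above.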

I am sure you have done your homework, Peter, and from my very detailed examination of all aspects of the VIX a few years back, there certainly ought to be profit to be had if you can time it or otherwise protect yourself. I had protected my position with options at the time that short-VIX ETF went bust, and I got out with a profit. I didn't revisit this trade after that. I ought to have done.

I'm not sure about luck. There is definitely a market structure here to be taken advantage of.