New Strategy - Presenting the “Quality Companies in an Uptrend” Model

We wanted to share with the Quantopian community an algorithm named “Quality Companies in an Uptrend”.

This non-optimized, long-only strategy has produced annualized returns of 18.0% on the Q500US universe since 2003, with a Sharpe ratio of 1.05, an alpha of 12%, and a beta of 0.53.

We’d appreciate your input and feedback on the strategy.

Combining Quality With Momentum

This is a “quantamental” strategy, combining both fundamental factors (in this case, the quality factor) with technical factors (in this case, the cross-sectional momentum factor) in a quantitative, rules-based way.

The idea of this strategy is to first identify high-quality companies then tactically rotate into the high-quality companies with the best momentum.

What is Quality?

The characteristics of “quality” companies are rather broad. Quality is typically defined as companies that have some combination of:

  • stable earnings
  • strong balance sheets (low debt)
  • high profitability
  • high earnings growth
  • high margins.

How Will We Measure Quality?

For our strategy, we focus on companies with a high return on equity (ROE) ratio.

ROE is calculated by dividing a company's net income by its average shareholder equity. A higher ROE indicates a higher-quality stock, and high-ROE companies have historically produced strong returns.
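Written as a formula:

$$\text{ROE} = \frac{\text{Net Income}}{\text{Average Shareholder Equity}}$$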

Rules for The “Quality Companies in an Uptrend” Strategy:

  1. Universe = Q500US
  2. Quality (ROE) Filter. We then take the 50 stocks (top decile) with the highest ROE. This is our quality screen; we are now left with 50 high-quality stocks.
  3. Quality Stocks With Strong Momentum. We then buy the 20 stocks (of our 50 quality stocks) with the strongest relative momentum, skipping the last 10 days (to account for mean reversion over this shorter time frame).
  4. Trend Following Regime Filter. We only enter new positions if the trailing 6-month total return for the S&P 500 is positive. This is measured by the trailing 6-month total return of “SPY”.
  5. This strategy is rebalanced once a month, at the end of the month. We sell any stocks we currently hold that are no longer in our high ROE/high momentum list and replace them with stocks that have since made the list. We only enter new long positions if the trend-following regime filter is passed (SPY’s 6-month momentum is positive).
  6. Any cash not allocated to stocks is allocated to IEF (7-10yr US Treasuries).
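To make the rules concrete, here is a rough pipeline sketch of rules 1-3 (not the authors' actual code; the Fundamentals.roe field name and the exact window lengths are assumptions based on the description above):

from quantopian.pipeline import Pipeline, CustomFactor
from quantopian.pipeline.data import Fundamentals, USEquityPricing
from quantopian.pipeline.filters import Q500US

class Momentum(CustomFactor):
    # approximate 126-day return, skipping the most recent 10 days (rule 3)
    inputs = [USEquityPricing.close]
    window_length = 136
    def compute(self, today, assets, out, close):
        out[:] = close[-11] / close[0] - 1

def make_pipeline():
    universe = Q500US()                                  # rule 1
    roe = Fundamentals.roe.latest                        # rule 2: quality measure (field name assumed)
    quality = roe.top(50, mask=universe)                 # 50 highest-ROE names
    momentum = Momentum(mask=quality)
    longs = momentum.top(20, mask=quality)               # rule 3: best 20 by momentum
    return Pipeline(columns={'roe': roe, 'momentum': momentum}, screen=longs)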

Potential Improvements?

What potential improvements do you think we can add to this strategy?

Some of our ideas include:

  • A composite to measure Quality, not just ROE
  • Adding a value component
  • Another way to measure momentum?
  • A better/different trend following filter?

We’d love to see what you guys come up with. Given the simple nature of this strategy, the performance is strong over the last 16+ years and should provide a good base for further testing.

Christopher Cain, CMT & Larry Connors
Connors Research LLC

443 responses

Wow, what a great strategy! Thank you for sharing this! I've been meaning to create something similar for trading in my own account (paper only initially).

I made the below quick modifications:

  1. Use ROIC instead of ROE, as ROIC includes debt as well (high returns on equity with little leverage is high quality in my book)
  2. Added a low long-term-debt-to-equity ranking to the 'quality ranking' as, again, low leverage is high quality in my book. This results in lower total returns, but also lower volatility and lower drawdowns, so a slightly higher Sharpe ratio.
  3. Also added two 'value' metrics and added to the ranking. I prefer to buy 'quality' when it's on sale, but you can easily comment this out.
  4. Changed rebalance to 6 days before month end. My 'hypothesis' is that most people get paid around this time (25th) so more money might be flowing into the market at this time, pushing it up (I have nothing to back this up, just my theory).

Will try to improve it further when I have more time.

Another quality rank I would possibly look at adding is consistently high ROIC over, say, the last 5 years, e.g. (mean 5yr ROIC) / (std dev of 5yr ROIC).

Interesting!

Could someone please help me understand code lines 58 & 59? I am new to coding and can only tell that line 58 creates a dataframe of daily close prices for 140 days, and line 59 calculates the 126-day return and returns that value. Is that correct? I don't understand what iloc[-1] does and why it is required.

Is my understanding correct that the trend filter basically means: if the 126-day return is positive then True, if not then False?
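For what it's worth, here is a hypothetical reconstruction of the pattern those two code lines most likely follow (variable names are guesses, not the original source):

# hypothetical reconstruction -- the common Quantopian pattern for this kind of filter
spy_prices = data.history(context.spy, "close", 140, "1d")   # ~line 58: last 140 daily closes (a pandas Series)
tf_return = spy_prices.pct_change(126).iloc[-1]              # ~line 59: 126-day return as of the latest close

# .pct_change(126) gives the 126-day return for every row of the Series;
# .iloc[-1] picks the last row, i.e. the most recent 126-day return.
tf_filter = tf_return > 0.0   # True when SPY's trailing ~6-month return is positive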

In good times smaller companies will grow faster than big companies; in bad times it is better to invest in large companies. You might want to adjust your universe filter based on some metric that allows allocation to small versus big.

Great additions Joakim thank you.

I have found that "composite" methods to measure factors such as value and quality tend to work better. This is consistent with the research I have read. One that immediately comes to mind is Jim O'Shaughnessy's book "What Works on Wall Street", where he shows that a composite value factor outperforms each individual value factor. I have found the same with quality.

To be honest, I was surprised by how well this tests out given the simple nature of the original algo. I have also done a lot of robustness testing with this algo, changing the trend-following filter, the momentum lookback, the days skipped, etc., and it holds up well.

Interested to see what others come up with as far as improvements.

Chris

I don't know if it is intentional, but the algo frequently uses higher leverage. I tried this and the average leverage over the same period is 1.19. Is there any way to restrict it to the range 1.00-1.02?

@Guy, thank you very much for presenting us with screenshots instead of code of what you have managed to do with another's IP that they very kindly shared on the forums for us all to work on. Sure was useful...

I have to agree with Jamie here. If you are going to modify the strategy please be transparent about what you did and provide the source code in the spirit of collaboration.

Thanks,
Chris

Chris (Cain), I have done a great deal of work on these types of strategies and I think it is essential to test using different rebalance dates, e.g. first of month, 13th, 21st, whatever. I found that huge and rather disturbing differences could result, which made me feel uncomfortable with the robustness of the strategy. The effect was particularly noticeable where the filters resulted in small numbers of stocks in the portfolio. Nonetheless I will clone your code (for which many thanks) and look more closely with interest.

Incidentally, it is good to see some more ideas coming through which do not follow the stifling criteria for the Quantopian competitions. It makes for a much more interesting forum. I was getting very fed up with the "neutral everything" approach.

Here's an update of my modified version of your strategy. Not sure it's much of an improvement, but posting nonetheless.

The main change is that this one uses SPY MA50 > MA200 as the bull/bear market trend check, rather than trailing positive 6-month returns of SPY. Either way seems to work quite well.

I also added 3-year 'high and stable ROIC' and 'high and stable margins' ranks, but these are commented out as they seem to bring down performance, possibly due to making the model too complex. Or maybe I've made a mistake with them?

One that immediately comes to mind is Jim O'Shaughnessy's book "What
Works on Wall Street", where he shows that a composite value factor
outperforms each individual value factor. I have found the same with
quality.

^Indeed! I keep hoping they will release a 5th edition, with updates of how their value composites have performed since the last edition. Value factors have struggled in recent years I believe.

FYI, I won't be sharing any more updates unless others start to contribute as well.

Thanks @Guy, will you contribute any of your spectacular secret sauce here? :)

@All, looks like I did make a mistake in my CustomFactor. I believe this is the correct way of doing it:

import numpy as np
from quantopian.pipeline import CustomFactor

class Mean_Over_STD(CustomFactor):
    window_length = 756
    def compute(self, today, assets, out, value):
        # ratio of mean to standard deviation over the lookback window (excluding the last row)
        out[:] = np.nanmean(value[0:-1]) / np.nanstd(value[0:-1])

Let me know if I still got it wrong.

I didn't make any changes to the strategy, but in the spirit of collaborating to improve on the algorithm, I tried to clean up the style and efficiency of the code a bit. Some of the changes include:
- Changed the custom factor definition to use the axis argument in the np.nanmean and np.nanstd functions.
- Moved the pipeline_output into the scheduled function instead of before_trading_start. It used to be best practice to call pipeline_output in before_trading_start, but last year, we made a change such that pipelines are computed in their own special event and calling pipeline_output just reads the output, so you no longer need to put it in before_trading_start.
- Condensed some of the code in trade.
- Cleaned up some of the spacing and indentation to match common Python style guides.

Again, nothing material, and I don't think it perfectly follows Python style conventions, but hopefully others can learn from some of the changes!
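For reference, the axis-based custom factor Jamie describes would look roughly like this (a sketch; the factor's inputs are assumed to be passed at instantiation):

import numpy as np
from quantopian.pipeline import CustomFactor

class Mean_Over_STD(CustomFactor):
    window_length = 756
    def compute(self, today, assets, out, value):
        # axis=0 reduces over time, giving one mean/std per asset (column)
        # instead of a single value for the whole window
        out[:] = np.nanmean(value, axis=0) / np.nanstd(value, axis=0)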


I tried a few different known quality factors. Return on assets makes a tiny improvement in alpha and drawdown.

This version reduces risk. The max drawdown is -10%, beta is way lower, and the Sharpe ratio is higher.

If you only care about returns, this version is for you.

There is flawed logic in the algo above: it takes 3x or more leverage. In fact, there are leverage spikes even in the earlier version. Could someone please fix the earlier version so the leverage is not more than 1?

Thanks @Jamie for fixing the CustomFactor. It seems to be working as I had intended now, and I've included 'stable_roic' in the ranking composite in the attached update.

Other changes I made:

  • Changed the trading universe from Q500US to Q1500US, effectively a proxy for S&P1500 (S&P 500 LargeCaps + S&P 400 MidCaps + S&P 600 SmallCaps).
  • Excluded stocks in the Financial Services sector from the universe, since 'Quality' for financial companies tends to be measured differently than for stocks in other sectors, e.g. due to their larger balance sheets.

I also kept latest ROIC rather than using latest ROA (to me, ROIC makes more intuitive sense, but I could be wrong).

Leverage is somewhat controlled in this one, but if anyone could help bring it down to be consistently closer to 1.0 (without using the Q Optimizer), I think that would be a great contribution. Might require a daily check of leverage --> rebalance?
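In case it helps others replicate the Financial Services exclusion, one way to do it in pipeline looks roughly like this (a sketch; the Sector classifier import path and sector code 103 for Financial Services are assumed):

from quantopian.pipeline.classifiers.morningstar import Sector
from quantopian.pipeline.filters import Q1500US

FINANCIAL_SERVICES = 103   # assumed Morningstar sector code for Financial Services
universe = Q1500US() & ~Sector().eq(FINANCIAL_SERVICES)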

@Indigo Monkey, perhaps this was your intention, but all you are doing here is messing with the leverage (for the stock positions anyway).

The risk reduced version is just running the strategy at 0.5 leverage (again just for the equity positions).

The version with huge returns runs the strategy at elevated leverage.

In the code, "context.Target_securities_to_buy" and "context.top_n_relative_momentum_to_buy" need to be the same to keep the leverage around 1.

These two variables control the amount we are buying (context.Target_securities_to_buy) and our final momentum sort (context.top_n_relative_momentum_to_buy).

For the reduced-risk version, this is hard to tell since we are putting unused cash into bonds. It will show leverage around 1, but that is half bonds (the way you have it coded).

Chris

@Joakim, thank you - great contributions as always.

I know Joel Greenblatt and others have also taken out Financials, as they have a much different capital structure, making some value and quality metrics not comparable across sectors.

As for the leverage, I think one thing that we can do is change the rebalance logic.

As currently coded, if we are holding a stock for multiple months, we don't rebalance it back to the target allocation. My thought here was to let our winners run, and not make the position size smaller just because it had good performance.

If we change this logic to rebalance each position in the portfolio back to target weights every month, that will go a long way, I believe.
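A minimal sketch of that change, using the existing context.Target_securities_to_buy setting (context.final_buy_list and context.TF_filter are stand-in names, not necessarily the algo's actual variables):

def rebalance(context, data):
    # equal target weight for every name in the final buy list
    target_weight = 1.0 / context.Target_securities_to_buy

    # sell anything we hold that dropped out of the list
    for stock in context.portfolio.positions:
        if stock not in context.final_buy_list and data.can_trade(stock):
            order_target_percent(stock, 0.0)

    # when the regime filter passes, reset every list member to its target weight,
    # including existing winners that have drifted above it
    if context.TF_filter:
        for stock in context.final_buy_list:
            if data.can_trade(stock):
                order_target_percent(stock, target_weight)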

Chris

I deleted the earlier post as there was no improvement. I thought I would just backtest with Joakim's trend filter and a shorter value factor. Improved returns.

Thanks @Chris,

As currently coded, if we are holding a stock for multiple months, we
don't rebalance it back to the target allocation. My thought here was
to let our winners run, and not make the position size smaller just
because it had good performance.

^This makes a lot of sense to me. Why penalize your winners? As you said, [cut your losses and] let your winners run! Or to paraphrase the Oracle: "The best holding period for a great [quality] company is forever." :)

@Nadeem, thanks for your contribution! I wonder how your way of defining value is different from Morningstar's 'cash_return', which is also FCF / EV:

cash_return
Refers to the ratio of free cash flow to enterprise value. Morningstar calculates the ratio by using the underlying data reported in the company filings or reports: FCF /Enterprise Value.

Here's another slightly 'improved' version (during this backtest period at least). The only change is that 'stable_roic' is now measured over 5 years instead of just 3. This won't fully kick in until 2007, as there's no data on Q from before 2002.

Any kind soul out there want to help me set this up in IB's paper trading environment, using Quandl price and fundamental data (if available)? Would iBridgePy be the way to go? Or something else?

@Joakim - I read somewhere that Morningstar's cash_return gives TTM FCF, while I use the latest FCF, which is for the latest quarter. My thinking is to use the latest data (perhaps the market has a short memory). Just my opinion.

Fair enough, I didn't know that. Makes sense, thanks Nadeem.

@Joakim. I have a question. Maybe I'm a bit confused here. You are using (ascending=True) on debt-to-equity. This ascending order is the default, which means you are using high ROIC, high cash return, high total yield, and high debt-to-equity. Wasn't the original thought to use the lowest debt-to-equity? Therefore, shouldn't the parameter ascending be set to False? Please help... maybe I'm confused about the logic here.

@Nadeem, good catch! Yes, my thought was indeed that low debt-to-equity companies were high-quality companies, so I should have set ascending to False. That doesn't work nearly as well, obviously, but rather than keep this one as is, I would remove it and possibly replace it with some other quality factor.

@Chris Cain,

The description of the strategy you presented fits my personal investment goal.
Thanks for sharing.
I backtested your original algorithm with line 9 commented out. Why cheat myself?
The results metrics are good.
When I tried to use order_optimal_portfolio(), the results got worse.
I checked some positions (Backtest -> Activity -> Positions) and have some questions about your ordering engine:
If TF_filter == False, should all positions in top_n_by_momentum be sold, or only part of them?
I have seen the number of stock positions slowly changing from 20 to 0 over several months in a market downtrend.
Why, at an initial capital of 100,000, was there:
2003-03-31: negative cash of 68,000, that is, leverage of 1.68;
2007-07-31: negative cash of 50,000...
In one of Joakim Arvidsson's long-only strategies I have seen a negative position in the bond (-80%) together with 20 stock positions.
Maybe we need to fix the engine first before we start sending long-only strategies to the sky?

First of all, thank you to Chris Cain. It's good of you to share your algo, and it's a very tempting proposition, although it probably needs a little more investigation. Now that I am back at my computer I have been running a number of tests with different rebalancing dates, and, as I have always found with this type of algorithm, the differences in performance are worrying.

I have been using the tidied-up code kindly provided by Jamie McCorriston, unamended save for the monthly rebalance date.

Perhaps using the maximum of 22 days offset from the month end is foolish - I imagine there are months with fewer than that number of trading days. Nonetheless the results were interesting.

Here are some of the total returns I got by varying the re-balance date:

1574%
455%
1674%
1825%
1477%

Leverage needs looking at - it reaches 2 on occasion, with a corresponding net dollar exposure of 200%. Uncomfortable for the lily-livered such as myself.

Effectively, as I have always felt with these types of strategies, one would be best off hedging one's bets and splitting the portfolio into a few different parts, each part using a different rebalance date.

Hey @Zenothestoic. Yes, I think date_rules.month_end(days_offset=22) doesn't make much sense. What happens in the months that have a holiday and fewer than 22 business days?

Thanks to Chris for posting this algorithm. It would be a good candidate to trade in one's IRA or another account that is restricted to long-only. The issue of leverage over 1.00 will have to be solved before I would actually trade this. The code that I suspect is causing excess leverage is the "GETTING IN" section. There doesn't seem to be any consideration of the current positions. It will add to the complexity, but the bonds/equities should be rebalanced together.

@Zenothestoic, I share your concern with different starting dates, and I have put together a list of the results starting at different times during 2003. Although the total returns vary by 500%, this is due to compounding, and once you adjust for time differences the CAGR is very similar. I am more concerned with the starting year.

Peter
Yes, it would be interesting to see what would have happened in the tech crash. But good point on the CAGR (and of course DD) being very close.
Mind you, the system rode through 2008 very well, but of course each crash is different.

It is likely for instance that a severe drawdown would have occurred in 1987 - no trend following system could have reacted with the swiftness required at that date, certainly not six month MOM or a 50/250 MA crossover.

But it looks very tempting otherwise if a few kinks can be ironed out.

Chris, yes - offset = 22 probably does not make much sense. But I always find it difficult to drill down and find out why on these online backtesters. It's the sort of stuff I need my own data for, which I can fiddle with to my heart's content.

Also, I need to check whether Q's standard commission and slippage are included.

Peter
Did you have to run those tests one by one or does Q let you automate that sort of testing now?

@Peter Harrington,

Are the backtest results you posted from the original (Chris Cain) algo or from others?
They have different trend filters and factors.
The original (Chris Cain) algo with date_rules.month_end() has a total return of 1521.59%.

@Guy Fleury: Multiple participants in this thread have expressed frustration with the sharing of screenshots instead of attaching a backtest. Please refrain from sharing screenshots built on top of the shared work in this thread. You are entitled to keep your work private, so if you don't want to share, that's fine. But please don't share screenshots in this thread as it seems the intent of the thread is to collaborate on improving the algorithm.

@Jamie, understood. I have erased all my posts in this thread since my notes without screenshots become simple opinions without corroborating evidence.

(Added)

@Jamie, as you said: I have no obligation to share anything. I thought it was a forum where anything innovative or reasonably pertaining to the subject at hand would have been more welcomed in whatever form it was presented. My bad.

For those few that might be interested, this thing can exceed 30,000%. But that is now just an opinion; I can't show a screenshot or the program itself to corroborate it. It is nonetheless a 40% CAGR over the 16+ years, giving a total profit of some $2.3 billion.

Of note, Jim Simons (Medallion Fund) has managed a 39% CAGR after fees for years. It required a 66% CAGR to make that happen. The fees were 5/44, a little bit more than the usual hedge fund 2/20. In case some are looking for objectives.

For me, this strategy is still not enough even though it could be pushed higher. I have other strategies that can go further without depending on what I consider an internal procedural bug. But a program bug, if it is consistent, dependable, and profitable, could come to be considered an added "feature".

No strategy change here -- just stylistic change (working off Jamie McCorriston's version). Moved the selection logic out of the rebalance function and into pipeline via a progressive mask, thinking it might be faster and that some might be more accustomed to doing the filtering via pipeline masks.
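For anyone curious what "filtering via a progressive mask" looks like, here is a rough illustration (not Viridian Hawk's actual code; the Fundamentals.roe field and the built-in Returns factor are stand-ins for the algo's factors):

from quantopian.pipeline import Pipeline
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.filters import Q500US
from quantopian.pipeline.factors import Returns

def make_pipeline():
    universe = Q500US()
    roe = Fundamentals.roe.latest
    quality = roe.top(50, mask=universe)                 # survivors of the quality screen
    momentum = Returns(window_length=126, mask=quality)  # computed only over the survivors
    longs = momentum.top(20, mask=quality)               # final buy list
    return Pipeline(columns={'longs': longs}, screen=longs)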

Quality Companies in an Uptrend (original by Chris Cain): long-short count and TF check, with line 9 commented out.
You may see the number of stock positions slowly changing from 20 to 0 over several months in a market downtrend.

@Chris Cain,
Is it by design?

Quality Companies in an Uptrend (original by Chris Cain): leverage.

The best ways to fix the problem:

  • Use order_optimal_portfolio()
  • Change the execution time.
  • Use @Peter Harrington's recommendation that the bonds/equities should be rebalanced together.

@Vladimir, Yes this is by design.

Here is the logic:

If the trend-following filter is not passed (6-month momentum is negative, 50SMA < 200SMA, whatever), then we sell stocks that fall out of our final buy list (in the original algo, that was stocks with the best ROE, then the best momentum).

Since the TF filter is not passed, those stocks are not replaced.

If the TF filter is not passed and a stock remains in our final buy list, it is held.

The design is to scale out of positions if the market is trending down instead of getting out of all of them at once. This is evident in the graphs you posted.

Thanks for the great question,
Chris

Here is a version that mostly fixes the leverage problem.

The starting point is the modified code posted by Viridian Hawk.

There is a problem in the 2006-2007 time frame where a security, BR, is purchased but then cannot be sold for many months.
Eventually the sell order does fill, a year later, at exactly 4 pm (I wonder if the system forced the sale?).
This led to a problem with bonds going short, I think because the max number of stock positions was exceeded.

Anyway I made the following changes:

Changed the bond trading logic so that the allocation would not go negative.

Changed the stock trading logic so that it re-balances winning positions that carry forward to the next month. Maybe better would be to let the winners run and reduce the size of new positions constrained by available cash, but it's a bit more complicated to implement.

Changed the stock trading logic so that it re-balances high quality-momentum stock positions that are held when the trend is negative. I think this helps to balance the bond/stock allocation to reduce leverage during these times.

I was NOT able to find a way to avoid buying BR, so there still is slightly elevated leverage during the 2006-2007 time frame.

@Steve

I tried your algo - it is now always holding one extra position. Try recording the number of positions and you will see that there are 21 instead of 20. Not sure what is going on. Perhaps it is not selling the bond during an uptrend and keeping it in the portfolio. This might be a factor reducing returns. Not sure though.

The 2.5 mo/1yr crossover filter on SPY is the weakest point in this strategy. It saves the strategy during 2008, so it's basically a switch designed, in hindsight, to save the strategy during one historical market catastrophe. Who knows if it will work in the future -- we don't have enough data points to draw any statistically meaningful conclusions about that signal and how it correlates to "quality." So that bit of the code is likely an overfit.

@Steve Jost,

To avoid buying BR you may try this code

from quantopian.pipeline.filters import Q500US, StaticAssets

universe = Q500US() & ~StaticAssets(symbols('BR'))

I think this will not solve the problem completely.

@Nadeem,

You are right; in Steve's algo, IEF exists all the time, at least at the beginning.

"So it's basically a switch designed to in hindsight save the strategy during one historical market catastrophe."

Then cut it out. The strategy is still far from shabby without it, and you have the comfort that it is no longer curve-fit. The drawdown has increased from 20% to 40% in the attached test. Still way lower than the S&P DD in 2008.

@Jamie McCorriston,

Can you advise on how to make this exclusion filter work?

universe = Q500US() &~StaticAssets(symbols('BR','PD'))  

2006-04-20 13:00 WARN Your order for -111 shares of BR failed to fill by the end of day and was canceled.
2006-05-22 13:00 WARN Your order for -111 shares of BR failed to fill by the end of day and was canceled.
2006-06-22 13:00 WARN Your order for -111 shares of BR failed to fill by the end of day and was canceled.
2006-07-21 13:00 WARN Your order for -111 shares of BR failed to fill by the end of day and was canceled.
2006-08-23 13:00 WARN Your order for -111 shares of BR failed to fill by the end of day and was canceled.
2006-09-21 13:00 WARN Your order for -111 shares of BR failed to fill by the end of day and was canceled.
2006-10-23 13:00 WARN Your order for -111 shares of BR failed to fill by the end of day and was canceled.
2006-11-21 13:00 WARN Your order for -111 shares of BR failed to fill by the end of day and was canceled.
2006-12-20 13:00 WARN Your order for -111 shares of BR failed to fill by the end of day and was canceled.
2007-01-23 13:00 WARN Your order for -111 shares of BR failed to fill by the end of day and was canceled.
2007-02-20 13:00 WARN Your order for -111 shares of BR failed to fill by the end of day and was canceled.
2007-03-22 12:30 WARN Cannot place order for PD, as it has de-listed. Any existing positions for this asset will be liquidated on 2007-03-22 00:00:00+00:00.
2007-03-22 13:00 WARN Your order for -111 shares of BR failed to fill by the end of day and was canceled.

@Vladimir, What are the mods? Will you post the algo?

@Vladimir, @Jamie, looks like those stocks might have been halted trading or delisted around that time?

@Vladimir, be ready to add to the list as you increase the number of stocks to be treated.

universe = Q1500US() & ~StaticAssets(symbols('CE', 'CFBX', 'DL', 'GPT', 'INVN', 'WLP', 'ADVP', 'IGEN', 'MME', 'MWI'))  

Why are you guys introducing lookahead bias by filtering specific stocks from the universe? Stocks get halted and delisted all the time -- it's just part of trading. If you have found evidence of data errors, perhaps best to just report them to Quantopian so they can fix them. Otherwise, I think it's best to make a strategy's logic robust enough that it doesn't trip up when positions get halted or delisted.

@Viridian, in these cases, stocks are delisted, halted, or have gone bankrupt, but their positions stay open, meaning that your bet is still on the table and might not be accounted for in the final result. Ignoring them should liberate those bets. A quick and dirty method, I agree. But in development it becomes acceptable, since your interest is at a much higher level than solving trivia.

As I have said before, you can push this strategy beyond 30,000% total return. You will be able to do so by putting more stocks at play and improving on the strategy design.

Leaving in those delisted stocks will require added code to track them yourself and somehow get rid of them as you go (in order to be more realistic). But then again, Quantopian could make their program take care of it by automatically closing those positions as they appear. They would, however, have to distinguish between halted stocks and permanently delisted ones.

(ADDED)

@Viridian, even if you put an exclude list, some come back anyway. Go figure.

@Guy

Guy, I can understand that you have chosen not to share code on this forum, but I am intrigued by the idea of a 30,000% return since 2003. Would you consider telling us exactly how it is achieved? I am assuming you use no leverage?

Can you also tell us the max DD and volatility on the system extended to these lofty levels?

I would love to invest for that sort of return, but lack those sort of skills.

@Chris Cain,

It looks like I was able to take the leverage in your algorithm to an acceptable level using order_optimal_portfolio().
I hope you will comment on the consistency of the backtest results with the design before I post the code snippet.

Long-Short Count, and TF Check.

@Vladimir: The symbols method assumes you are asking for the asset whose ticker symbols are BR and PD as of today. So the ~StaticAssets(symbols('BR', 'PD')) filter is excluding two stocks that picked up the tickers BR and PD in 2007 and 2019, respectively. You can specify a reference date with set_symbol_lookup_date or use sid to specify the assets that delisted in 2006.

That said, I agree with Viridian Hawk. When an asset gets delisted, the backtester automatically closes out any shares held in that asset, 3 days after the delist date, at the last known price of the asset. It's probably best to let the backtester handle that position so as to avoid lookahead bias as much as possible.
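For reference, the two approaches Jamie mentions might look roughly like this (set_symbol_lookup_date, symbols, and sid are Quantopian built-ins; the sid number shown is purely hypothetical):

from quantopian.pipeline.filters import Q500US, StaticAssets

set_symbol_lookup_date('2006-01-01')           # resolve tickers as of this date
excluded = StaticAssets(symbols('BR', 'PD'))   # now picks up the 2006-era assets

# ...or pin the exact delisted assets by sid, which needs no lookup date:
# excluded = StaticAssets([sid(12345)])        # hypothetical sid, for illustration only

universe = Q500US() & ~excluded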

@Jamie McCorriston

set_symbol_lookup_date('2006-01-01') worked, but it cost 30% of the profit.

Thank you.

Properties of This Trading Strategy

I got interested in the above strategy, first for its longevity (16+ years) and second, for its built-in alpha.

My first steps are always to see the limitations and then see if I can improve the thing or not. The initial strategy used \(\$\)100k and 20 stocks putting its initial bet size at \(\$\)5k.

A portfolio will have to live by the following payoff matrix equation: $$\mathsf{E}[\hat F(T)] = F_0 + \sum_1^n (\mathbf{H} \cdot \Delta \mathbf{P}) = F_0 \cdot (1 +g(t) - exp_t(e))^t$$The total return on the original scenario was 1,521.58\(\%\), giving a total profit of \(\$\)1,521,580; it surely demonstrated that even with 16+ years it did not get that far. Nonetheless, in CAGR terms it is a 17.64\(\%\) compounded rate over the period. It starts to be interesting since it does outperform the majority of its peers, most of which sit at a 10.00\(\%\) CAGR or less. Therefore, we could say there is approximately a 7.6\(\%\) alpha in the initial design.

The structure of the program can allow more. First, the design is scalable. It was my first acid test. I upped the initial stake to \(\$\)10M. But this makes the bet size jump to \(\$\)500,000 per initial bet. Due to the structure of the scheduled rebalance, these bets would catch most of their returns from common return (about 70\(\%\)) and not from specific returns. But it did generate alpha over and above its benchmark (SPY). And that was the point of interest.

I raised the number of stocks to be treated in order to reduce the bet size, knowing that doing so would reduce the average portfolio CAGR. The reason is simple: the stocks were ranked by expected performance levels, and the more you take in, the more the lower-ranked stocks, with their lower expected CAGR, tend to drag down the overall average. This could be compensated elsewhere and could even help produce higher returns.

There is a slight idiosyncrasy in the original program which gave it a 1.04 average gross leverage. Its cost would have been about \(0.04 \times 0.04 = 0.0016 \) should we consider IB's leveraging fee, for instance. A negligible effect on the 17.64\(\%\) CAGR.

The Basic Equation

The equation illustrated above is all you can play with. However, when you break it down into its components, the only thing that matters in order to raise the overall CAGR of about any stock trading strategy is \(\mathbf{H}\), the behavior of the trading strategy itself. It is how you handle and manage the ongoing inventory over time.

The price matrix \(\mathbf{P}\) is the same for everyone. In this case, the original stock universe was Q500US. To get a better selection, I jumped to Q1500US since my intention was to raise the number of stocks to 100 and over. The \(\Delta \mathbf{P}\) is simply the price variation from period to period, and therefore is also the same for everyone. The differences will come from the holding matrix \(\mathbf{H}\), which is the game at play. If the inventory is at zero, there is no money coming in nor any money going out. To win, you have to play, and that is where you also risk losing.

The first chart I presented had an overall 2,405.92\(\%\) total return on a \(\$\)10M initial stake with 40 stocks. That resulted in overall profits of \(\$\)240M over the 16+ years, already over 100 times the original trading script's profit, most of it coming from the 100-times-larger initial capital, demonstrating the program's scalability.

By accepting a marginal increase in volatility and drawdown, I raised the bar to 3,803.87\(\%\) total return which is a 24.26\(\%\) CAGR equivalent for the period.

@Joakim's Version of The Program

I next switched to Joakim's version of the program because it accentuated an idiosyncrasy of the original program and pushed on involuntary leveraging. But I did not see it as a detriment. The more I studied the impact, the more I started to appreciate this "feature", even though it was not intended. If a program anomaly can become persistent, dependable, and can generate money, it might stop being considered a "potential bug" and be viewed as an "added feature".

Using Joakim's program version as a base, I pushed on some of the buttons, increased the strategy's stock count again, changed the trading dates and timing, made the strategy more responsive to market swings, and tried to capture more trades and a higher average net profit per trade. The impact was to raise the overall total return to 10,126.6\(\%\). On the same \(\$\)10M this translated to a 31.74\(\%\) CAGR with total profits in excess of \(\$\)1B. It is a far cry from the original strategy.

I kept on improving the design by adding new features and more stocks to be traded, with the result that the total return jumped to 13,138.85\(\%\), which is a 33.8\(\%\) CAGR over the 16+ years. To achieve those results, I also put the "Financials" back in play, since there was no way of knowing in 2003 that the financial crisis would unfold and be as bad as it was.

But you could do even more by accepting a little bit more leverage, as long as the strategy can pay for it all and remain consistent in its general behavior, thereby exploiting the anomaly found in Joakim's and the original strategy. Here you could really push, and not by pushing that much either. A leverage of 1.4 was sufficient to bring the total return to 32,143.38\(\%\) with a total profit of \(\$\)3.2B and a CAGR of 41.1\(\%\). Quantopian once said they were ready to leverage some strategies up to 6 times. So 1.4 might not look that high, especially if the trading strategy can afford it.

You could do even more by accepting a leverage of 1.5, raising the total return to 50,921.98\(\%\) with a CAGR equivalent of 45.0\(\%\). In total profit that would be \(\$\)5.09B.

At 1.5 leverage, you would be charged on the 0.5 excess, and at IB's rate it would give: \(0.5 \times 0.04 = 0.02\), thereby reducing the 45.0\(\%\) CAGR to 43.0\(\%\). That still costs some \(\$\)1.056B over the period and leaves some \(\$\)4.045B as net total profit in the account.

A prior version to the one above used \(\$\)20M as the initial stake and achieved a 43,795.04\(\%\) total return. It could have been jacked up higher, but that was not my main interest at the time. Nonetheless, in CAGR terms that was 43.79\(\%\), and in total profits, \(\$\)8.76B.

I think the strategy could be improved even further, but I have not tried. My next steps would be to scale it down now that I know how far it can go and install better protective measures which would tend to increase overall performance while reducing drawdowns.

As part of my acid tests, I want to know how far a trading strategy can go. Therefore, I push on the strategy's pressure points in the first equation, knowing that the inventory management procedures are where all the effort should be concentrated. Once you know what your trading strategy can do, it is easy to scale it down to whatever level you feel more comfortable with, whether by using lower leverage, reducing the overall CAGR, or installing more downside protection. It becomes a matter of choice.

But once you have pushed the limits of your trading strategy, you at least know that those limits could be reached, and even if you scale down a bit, you also know that your trading strategy could scale up if you desired. It would not come as a surprise; if anything, you would have planned for higher performance and would know how you could deliver it if need be.

It is all so simple; it is all in the first equation above. It is by injecting new equations into the mix that you can transform your trading strategy. In this case, the above equation was changed to:$$\mathsf{E}[\hat F(T)] = F_0 + \sum_1^n (\mathbf{H}\cdot (1+\hat g(t)) \cdot \Delta \mathbf{P}) = F_0 \cdot (1 +g(t) - exp_t(e))^t$$where \(\hat g(t)\) is partly the result of a collection of functions of your own design.

I would usually have shown screenshots as corroborating evidence of the numbers presented above. But, it appears that such charts are not desired in this forum.

To me, it turns all the numbers above into claims, unsubstantiated claims at best, since no evidence is presented to support them. They become just opinions. Nonetheless, I do have those screenshots on my machine, but they will stay private for the moment.

Of note, the explanations for these equations, which can be considered innovative for what they can do even if they have been around for quite a while, can be seen all over my website.

I changed a single number in the last program, the one that generated the 50,921.98\(\%\) total return with a CAGR equivalent of 45.0\(\%\). It resulted in a total return of 76,849.31\(\%\), a 48.8\(\%\) CAGR, and total profits of \(\$\)7.68B. I will still need to deduct the leveraging fees, which will exceed \(\$\)1B when compared to the previous scenario.

For a single digit, it increased the total outcome from \(\$\)5.09B to \(\$\)7.68B. Now that is a digit that is worthwhile... Sorry, no screenshot to display as some kind of evidence that those numbers were actually reached. But they still happened.

Guy

In broad terms, the above tells us that you increased the universe size from 500 to 1500 stocks and that leverage went to 1.4. But little else.

I can understand that the cost of leverage would not be high at current interest rates and that such a level of leverage is indeed modest. As to Q's 6x leverage, that of course was to be used on their "zero everything" strategy. I have never understood Quantopian's approach and I refuse to believe that such neutrality would hold under all conditions. I suspect it would get its comeuppance at some stage, as do most strategies. But what do I know, to be honest.

The real problem people have with your posts is not the screenshots themselves but the lack of detail they contain as to how the results were achieved. And your above post does the same (to some extent!)

I now understand the increased universe size and the leverage - for which many thanks. But of course most of the detail is still hidden, and it is the detail people would like you to share. Which I also would like you to share, if you would be willing, of course.

You mention a starting capital of $20m, but I'm not sure that a huge starting capital is so relevant for stocks. With futures I can readily understand it; the contract sizes are enormous for the humble retail investor such as you or I.

But with stocks, even 1500 of them (whittled down to 100, or 50, or whatever), it's surely a different matter. Stocks are not all the price of Berkshire Hathaway and the lot size is not huge, so surely you could trade small capital up to dizzy levels?

If you would like to share more details I would be happy to put up some capital to try and shoot the lights out if I can make sense of it. Perhaps you might prefer to discuss this in private.

Great post, @Chris Cain. Thank you for sharing your original algo! Also, kudos to everyone pitching in with comments and improvements. I've never felt quant trading to be a zero-sum game. Publishing, peer review, and building on previous ideas have worked in the sciences and serve as a model for moving from 'quant arts' to 'quant science'. A rising tide can raise all ships.

Anyway, in that spirit, here is a version of the original algo with several changes. The goals were 1) to separate the data from the logic, 2) to separate the security selection logic from order execution, and 3) to use order_optimal_portfolio. This was in an effort to make modifications easier, add flexibility, and allow for using the various optimize constraints.

While the logic is faithful to the original, the execution differs a bit. The positions are the same; however, the quantities of shares purchased vary by a few at times, which seems to account for the performance numbers not matching exactly.
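For anyone wanting to try the same execution style, here is a minimal sketch of order_optimal_portfolio with target weights (weights is assumed to be a pandas Series of target weights indexed by asset):

import quantopian.optimize as opt
from quantopian.algorithm import order_optimal_portfolio

def execute_trades(context, data, weights):
    # move the portfolio toward the requested weights in one call
    order_optimal_portfolio(
        objective=opt.TargetWeights(weights),       # hold these target weights
        constraints=[opt.MaxGrossExposure(1.0)],    # cap gross leverage at 1.0
    )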


Guy

I took a look at your paper here: https://alphapowertrading.com/papers/AlphaPowerImplementation.pdf

To those of us who are not privy to the underlying methods, your formulas provide no real information. You mention trend following, re-investment of profits, and covered calls.

And your formulas state that compounding your strategies will lead to outsized profits.

But the problem people here find with your posts is that we all know the benefits of compounding. What we do not know is how you achieve that compounding.

Much of your website repeats this basic message. But nowhere do you state how you achieve that compounding. Except to mention "boosters and enhancers" which are never explicitly explained.

I think you would achieve much kudos here by providing a precise declaration of exactly what these boosters and enhancers are. And if you would be willing to provide the code here then that would be a great step forward.

Thanks and regards

@Dan - Thank You very much for posting the familiar version of the code.

I have a question though - in the record pane of your algo, it seems like the leverage is always at 1.05. Please check stock weight + bond weight; it seems like a bond weight of 0.05 is always in the portfolio. Is it intentional? If not, how can we make the leverage 1.00?

Thank You in advance for your help.

@Nadeem - You could make the following change to reduce the leverage from 1.04 to 1.00.
Comment out the first line and replace it with the second line.

bond_weight = max(1.0 - context.stock_weights.sum(), stock_weight) / len(context.BONDS)
bond_weight = max(1.0 - context.stock_weights.sum(), 0) / len(context.BONDS)

@Nadeem Yes, @Steve Jost's code above will reduce the 'target leverage' to 1.0 by letting the bond weight go to zero. In the original algo, the bond weight was always a minimum of 0.05 (specifically 1.0 / context.TARGET_SECURITIES) and the leverage would go to 1.05. This was a 'feature' of the original algo, so I left it in. It actually helps the Sharpe ratio and returns.

Even with this change, the leverage spikes to 1.05 in 2006. This is because the weights are set assuming all the orders fill. During that time, the algo tries to sell BR but cannot, so it essentially 'over-buys' and the leverage goes above 1.0. The way to keep that from happening is to place all the sell orders, cancel any open orders after a set time, and then place buys equal to the amount of cash left. Basically, don't buy until all the sell orders fill or are canceled.
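A rough sketch of that sell-first, buy-later sequencing (the scheduling and names like context.final_buy_list are hypothetical; the order/cancel calls are the standard Quantopian built-ins):

def sell_phase(context, data):
    # at the monthly rebalance, place only the sells
    for stock in context.portfolio.positions:
        if stock not in context.final_buy_list and data.can_trade(stock):
            order_target_percent(stock, 0.0)

def buy_phase(context, data):
    # scheduled later: cancel whatever is still open, then size the buys
    # from the cash that actually came in
    for orders in get_open_orders().values():
        for open_order in orders:
            cancel_order(open_order)
    new_buys = [s for s in context.final_buy_list
                if s not in context.portfolio.positions and data.can_trade(s)]
    cash = context.portfolio.cash
    if new_buys and cash > 0:
        for stock in new_buys:
            order_value(stock, cash / len(new_buys))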

Good catch.

@Dan,

You probably took as a template for the trading logic not the original @Chris Cain algo but somebody else's, together with its bugs, which create a max of 21-23 positions and leverage of 1.05-1.1... (see attached notebook).

Not selling BR for 11 months in 2006-2007 is more likely an engine problem.

What is this for?

set_slippage(slippage.FixedSlippage(spread = 0.0))  

Try to do the same with the original @Chris Cain algo trading logic.

I solved more or less everything except BR in 11 lines of trade().

PS. I will attach a backtest when Quantopian lets me do that.

Here are the results:

@vladimir

These are some crazy coding skills. You have shrunk the original code from 90+ lines to 38 lines and yet improved the performance. Awesome work. Great to have you here and learn from you.

I have one question for you. In the algo posted by Joakim 4 days ago with the 2811% return, what is the purpose of mask=universe in the mean_over_std factor? Won't there be a mismatch if we rank the other factors without a mask and rank mean_over_std with a mask? And why does it even matter if we have a mask in the pipeline screen?

I am asking because if we remove mask=universe from line 90, the results are hugely different. Please see the attached backtest. The results with the mask are somewhat similar to what Joakim had (though a little different because of the leverage fix in the attached version).

@Nadeem

rank() -> ranks the security's factor value among all traded securities in the Quantopian database

rank(mask=Q500US()) -> ranks the security's factor value only among Q500US() members

Example:

In my code I changed only the ranking in make_pipeline(), adding mask=m to all fundamentals factors, and got some improvements in metrics.
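A small illustration of the difference (the ROE factor here is arbitrary; m stands for the trading universe filter):

from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.filters import Q500US

m = Q500US()
roe = Fundamentals.roe.latest

rank_all = roe.rank()                 # ranked against every security in the database
rank_in_universe = roe.rank(mask=m)   # ranked only against Q500US members

# Even if the pipeline screen later restricts output to m, rank_all was computed
# against the full database, so mixing it with masked ranks skews a rank-sum composite.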

Thank you @Vladimir. I am still confused as to why it should matter when we already have the mask in the pipeline screen. We should still end up with only those stocks in the pipeline which are in Q500US because of the screen. Can't get my head around it. Could you please help me understand it?

Dan, very well said "A rising tide can raise all ships". I still remember one of your earlier quotes "Communities are built from collaboration, not competition".

It's an interesting strategy. However, it did not outperform the S&P 500 in the last 2 years. Do you guys think the alpha is gone?

@Vladimir - I agree with @Nadeem - mad coding skills. Glad you found a workaround to not being able to post backtests.

@Nadeem - Thanks for pursuing the issue of the mask in pipeline with @Vladimir.
Using rank(mask=universe or sub-universe) turns out to be very important; otherwise ranks from the larger universe can skew the factor weightings.

Very interesting! Thanks for posting. For measuring "quality," it would be good to see how adding (i) positive insider buying activity and (ii) positive analyst ratings affect the results.

Do you guys think the alpha is gone?

Yes. Significant drop-off in alpha from 2014 onwards. I would think these factors have been discovered and arbitraged out of the market.

"Yes. Significant drop-off in alpha from 2014 onwards. I would think these factors have been discovered and arbitraged out of the market."

Rather reminds me of Keats's "Ode on Melancholy" and the concerns of the Romantic poets regarding the temporary nature of our world: the fleetingness of life, the impermanence of the flower, and all else in our temporal world.

At a time when many major hedge funds are struggling or closing their shutters, we may do well to dwell on impermanence.

Moved make_pipeline() to initialize(), removed unnecessary masks, changed the month-end offset to 7, and made some cosmetic changes.
Got some more improvements in metrics.

The attached notebook is based on Vladimir's program version which used the optimizer for trade execution (order_optimal_portfolio).

It is hard to "force" the optimizer in the direction you want. It is a "black-box" with a mindset of its own. Nonetheless, by changing the structure and objectives of the program, one can push the strategy to higher levels. Some leverage and shorts have been used to reach that 34.1\(\%\) CAGR. However, the strategy, at that level, can afford the extra leveraging fees.

Evidently, the strategy looked for more volatility and as a consequence suffered a higher max drawdown while keeping a relatively low beta. I have not improved on the protective measures as of yet. Currently, the trend definition is still the moving average crossover thingy which will alleviate the financial crisis drawdown but will also whipsaw a lot more than desired or necessary.

A total return of 14,059\(\%\) will turn a \(\$\)10M initial cap into a \(\$\)1.4B account.

Still more work to do.

@Guy. Not trying to criticize or anything, but whatever you did with the strategy clearly didn't work. If you look closely, the highest returns were achieved in 2015 and at the end of 2019 it is still at the same level. In other words, the strategy didn't make any money after 2015 if you stayed invested. Clearly, the alpha has gone from the "objectives of the program".

In contrast, Vladimir's algo is consistently making money: an ever-increasing, upward-sloping curve.

@Nadeem, let me see. You are playing a money game and the final result is inconsequential? You like a smoother equity curve even if it is \(\$\)1.2B lower over the trading interval. Well, to each his own, as they say. And as I have said, there is still work to be done, especially in the protective-measures department.

Vladimir does have a good trading strategy and I do admire his coding skills. They are a lot higher than mine.

Maybe you would prefer the following notebook. Who knows?

@Guy, I'm curious if you've allocated any real capital to any of your strategies, and if so, what the result has been in the live market?

@Guy Fleury

To my mind,

order_optimal_portfolio(opt.TargetWeights(wt), [opt.MaxGrossExposure(LEV)])  

does not produce any weight optimization by any criteria; it just executes the requested weights more accurately and constrains the gross leverage to the target.

When writing the code of my version of the strategy, I added the LEV parameter specifically for you.
Here are the results with only two changes to the setup: LEV = 2.0, initial capital = 10,000.

It is not possible to trade this strategy with LEV > 1.0 in an IRA account.
Results do not include margin expenses.

@Guy Fleury

Guy, I am sorry to say that from my point of view you continue to make the same mistakes you have always made. I simply do not see the point of your posts if you are not willing to share your code.

Or indeed to elucidate the mysterious "pressure points" you refer to in producing the equity curves you come up with. I am well aware I have made no contribution to this thread either, but I have at least taken the trouble to look through your website trying to find out exactly what your "alpha power" is based on.

Sadly it is more of the same - many formulae, many equity charts, and much obfuscation.

It is of course entirely your prerogative to present your trading systems in this way. But to my way of thinking the exercise is entirely pointless.

I am genuinely interested in your point of view but by refusing to provide details, what could have been an interesting contribution to the obscure and arcane arts you portray is rendered entirely without meaning.

Once again may I respectfully request that you provide the code behind your alterations to this system kindly provided by and improved upon by others.

And I repeat the words "respectfully" and "request" lest we get into the same sort of dispute we have so often fallen into in the past.

@Vladimir
In various posts above, @Guy quotes leverage of 1.4 to achieve one of his more impressive equity curves. He also states that he has used the Q1500 universe rather than the Q500. I have tried the Q1500 as well as the Q3000 and the difference is underwhelming.

Here is a "pressure point". Use the Q3000 and reduce the number of stocks invested in to 10. Probably very unstable but I have not bothered to run it over different re-balance dates.

Here is another pressure point for you....and hey, I'm going to tell you what it is! Q3000 and invest in 5 stocks. See we can all do it eh? And of course you can make it even better with a little extra secret sauce....like reducing the max DD and vol.....but that is for another day. Don't want to over-excite myself.

If you take a look at the code you will see I have made a total mess of the parameters. But that is the point isn't it? If you just present pretty pictures you have absolutely no idea how they were created. And no interest either.

Hey All,

Thank you so much for the contributions to this Algo. This was what I had in mind when I shared this strategy.

Special thanks to those that made the code more efficient (Jamie, Joakim, Dan, Vladimir)

As you can see, many versions of this idea work well. On a historical basis, we can say this tweak or that tweak worked "better", but we have no way of knowing which tweak will have the best performance in the future. As such, I look for strategies that are simple, explainable, robust, and show good performance over a wide variety of parameter changes. To me, this strategy does that.

I’ll address a few comments I read in this thread.

Some have said that perhaps the Alpha has gone away because the last couple of years have had somewhat lower performance. While this is always a possibility, I certainly don't think this is the case. First of all, the factors we are using here (value, quality, momentum, trend following) have been around for decades. All of them were in use well before this backtest even started. Factors go in and out of favor (this is especially true of Value's poor performance over the last 5 years). To me, that certainly doesn't mean the Alpha is gone. Those who held that opinion in the past (such as during Value's underperformance in the late 1990s) were very mistaken.

I view these factors as rooted in human behavior (too long of an explanation to get into now). I am of the opinion that human behavior will not change.

Some have questioned the validity of the trend following filter. The original algo used a ROC over the last 6 months. Joakim’s versions used the 50 and 200-day moving averages. Both of these techniques essentially do the same thing (though at slightly different speeds). I think both are logical and will provide value in the future.

As far as the validity of this rule in general, the question becomes: do you believe in trend following (time-series momentum) or not? I certainly do. There are 100+ year backtests that prove its value (see AQR), not to mention decades of real-world practitioner results.

Over the last 9 years, we have only had equity pullbacks in the 10-20% range. In these types of shallow pullbacks, our trend-following regime filter will be a drag on performance. What happens is you get out of the market, then have to buy back in at a higher price. The question then becomes: do you think these shallow pullbacks are the new normal, or will we eventually see a 30-50% pullback, which has happened many times in history? In a 30-50% pullback, our trend-following regime filter will add a ton of value (as it did in 2008).

You can always mess with the different speeds of the lookback for trend following. Academic research has shown that 3-12 month lookbacks work. Instead of trying to pick the optimal lookback, I am in favor of diversifying amongst several lookbacks (this is not implemented in the current algo).

Anyway, those are some of my thoughts. Thank you all for checking out my algo, I am happy with the great response from the wonderful Quantopian community!

Chris Cain, CMT

Employing rotation into a mere 5 stocks monthly is somewhat more dependent on roll date - as I believed would be the case. 4 different roll dates produce CAGRs of between 19 and 29% when you don't employ any leverage. The code still needs a fix for the occasional unintended leverage.

But for what it is worth here is one such back test where I have not played fast and loose with the parameters. Pretty impressive.

@Chris, very well put. I 100% agree with your comments.

Anthony,

In your last 3 posts you probably used somebody's broken algo with bugs and added your own.
By Christopher Cain's definition this algo is long only with no leverage.
You set context.Target_securities_to_buy = 5.0.
Initial capital $100,000.
Just check the positions on the first day of your algo's trading, 5-19-2003.
It had 20 stock positions of $20,000 each, $400,000 in total, and a short bond position of -$315,000.
That is leverage of more than 7, and at times it reaches more than 13.

Did the algorithm realize your intentions?
Is it appropriate to use the results of a broken algo in an argument?

I have tested your parameter setting in my algo.

QTU = Q3000US(); MKT = symbol('SPY'); BONDS = symbols('IEF', 'TLT'); D = 8;
MOM = 126; EXCL = 10; N_Q = 50; N = 5; MA_F = 20; MA_S = 200; LEV = 1.0;

The results are not perfect, but for some they may be acceptable.
Another proof of Christopher Cain's concept.

Hi Vladimir thanks for the comment.

The first couple of tests were simply to show the futility of posting charts without the code. I deliberately mucked up the parameters.

For the third test I used Jamie Corriston's code (I believe?) and none of my own. I simply set the number of stocks to 5 in both relevant lines. Leverage is 1 most of the time with the occasional spike to 2, which I have not investigated.

I am going to look much further if I decide to trade this thing and will now download/clone your version of Jamie's code as amended.

With many thanks to you.

I was originally attracted to the idea since I had drafted a monthly momentum re-balance system here on the website a couple of years ago. Which attracted much attention until Guy Fleury started commenting and the whole thread then went off the rails.

This system is much better - I had failed to add any fundamentals filter which certainly helps with the variability over different rebalance dates.

Actually Vladimir, I don't think you posted a version of the code? Whose or which version are you using?

Employing rotation into a mere 5 stocks monthly is somewhat more dependent on roll date - as I believed would be the case. 4 different roll dates produce CAGRs of between 19 and 29%

I know you're joking around/making a point, but my recommendation (and I believe Quantopian's guidance as well) is to eliminate day-of-the-month overfit noise by putting a 20-day SMA on the signal and trading every day instead of monthly. Typically this gives your backtest a lot more data points, allowing you to hold more positions at once without diluting your alpha signal (just the hits, no deep cuts, so to speak) or increasing turnover, and it is likely to improve your Sharpe ratio (via lower volatility and slippage). Though this algo trades on such low-frequency data that it's not likely to benefit much from day-of-the-month diversification, I have used this technique to great effect in the past.

Chris, thanks for posting this. Has anyone been able to convert this algorithm to work with Alpaca? If not, can someone point me in the right direction for getting started (with converting this or any other Quantopian algorithm)?

Hey Viridian,

"I know you're joking around/making a point, but my recommendation (and I believe Quantopian's guidance as well) is to eliminate day-of-the-month overfit noise by putting a 20-day SMA on the signal and trade every day instead of monthly."

Can you expand on this? How would the logic work to implement this technique with this Algo?

@Viridian Hawk
Thank you for that. It certainly sounds an excellent idea to average the signal.

@Mike Burke
I'm not sure there would be much point averaging the fundamental factors, unless they are ratios to share price, since their frequency is so low. But you could easily average the momentum factor, which is the second leg of this algo's filter.

I'm a bit rusty with the Q API, but I believe all one has to do is add the momentum calculation as a custom factor. Custom, because the built-in momentum factor does not skip the last ten mean-reverting days.

Then add it to the pipeline, find the top x, and use it as a filter as per the existing algo.

When I get around to it I will post an example.

I'm still puzzling over the occasional spike in leverage - or rather how to correct it. I do not want to use optimise since I don't want equal weighting. Therefore you need to allocate slightly differently from the current algo: you need to allocate only a percentage of the unused capital, which is not the way it is currently done.

@Mike Burke -- Sometimes it's as easy as putting a SimpleMovingAverage on the pipeline output, but for this algorithm you'd have to refactor the execution side of it. Basically, you start by creating a dictionary of target weights (including the bond allocation) based on the current day's pipeline output and bull-market crossover. But instead of ordering those weights directly, you append them to a list on context, e.g. context.daily_weights.append(today_target_weights), and prevent overflow by popping off the oldest entry once you're over 20, e.g. if len(context.daily_weights) > 20: context.daily_weights.pop(0). Then you just combine the weights and normalize them. That gives you the 20-day average portfolio, which you can then order (most easily via the optimizer with TargetWeights(combined_weights)).
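A minimal sketch of that idea, assuming today_target_weights is a dict {asset: weight} built elsewhere from the day's pipeline output (the helper name is illustrative, not from any posted version):

import pandas as pd

def average_rolling_weights(context, today_target_weights, window=20):
    # Keep a rolling list of the last `window` daily target-weight dicts
    if not hasattr(context, 'daily_weights'):
        context.daily_weights = []
    context.daily_weights.append(today_target_weights)
    if len(context.daily_weights) > window:
        context.daily_weights.pop(0)
    # Combine: each of the retained daily portfolios contributes equally
    combined = pd.Series(dtype='float64')
    for daily in context.daily_weights:
        combined = combined.add(pd.Series(daily), fill_value=0.0)
    combined = combined / len(context.daily_weights)
    # Keep gross exposure at or below 1.0
    gross = combined.abs().sum()
    if gross > 1.0:
        combined = combined / gross
    return combined

The combined series can then be handed to opt.TargetWeights and order_optimal_portfolio exactly as described above.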

@Viridian Hawk.
Yes, the portfolio management aspect is the key. Nice solution and simple. You just add the new stocks at a weight of 1/20th and existing stocks in the portfolio at whatever percentage of equity they have reached and then normalize.

This is what you should be doing even if you do not intend to average the signal or trade daily.

I find it very difficult to analyse the output on Q. Loathe it in fact. But I suppose what is happening is that for most of the time enough big hitting stocks drop out to make room for new entrants. And then sometimes they don't and you get huge leverage because you have failed to normalize the allocations.

At least with your suggestion new entrants get a fair crack of the whip and strongly trending stocks still retain an overweight position.

I was stupidly thinking of just dividing the un-allocated capital amongst the new entrants at a roll date, but I like your solution better.

@Viridian Hawk.
Another thing I have been pondering is the running of this strategy now that you can no longer trade through Quantopian. If you were willing to take the risk of monthly allocation it would be no effort to run it manually, although whether you would take your signals from Quantopian, buy your own data, or look for online screeners I am not too sure. Perhaps Morningstar offers a free or cheap screener on the fundamentals.

If you wanted to run it daily, automation would be the better option and I suppose you could convert the algo to run on Quantconnect.

I suppose there must be other solutions but the last thing I would want is to have to run the system on my own server.

What do you do?

@Viridian Hawk.

I have become so used to designing my own systems on my own software and it is so very much better to be able to analyse a spreadsheet of your results which contains all the prices, all the signals, all the trades and so forth. You can turn them inside out and upside down and really get to the bottom of why the system is doing what it is doing.

In that respect I find Quantopian so very difficult - I can never grasp the full picture clearly enough.

I suppose logging is one option, although it is restricted. I suppose running in debug is another although so slow and tedious.

I understand the need to protect their data suppliers but for me at least it does make life difficult.

I suppose the research environment may be a better option since I think (?) you can use pipeline there now.

How best do you analyse your systems on Quantopian?

Incidentally, for those who insist on using leverage by design, it is worth considering leveraging the bond portfolio using futures rather than IEF. I did a great deal of work a while ago on the all weather portfolio concept and it might be worth looking at replacing IEF with the relevant future on US Government bonds. I have no idea yet whether Quantopian allows you to mix futures and equities within one system, but by way of example you could allocate 90% of your cash to stocks and 100% equivalent of your cash to bonds. Or whatever.

tenquant.io looks to be a promising source of high-quality free fundamental data. In theory it is faster than Morningstar, which can have up to a three-day delay; tenquant.io claims they scrape the financial data as soon as it goes public. As far as automation goes, I've been using Alpaca. I couldn't get their version of zipline to run on my computer, so I just rolled my own barebones trading framework using the REST API, which was pretty simple. So maybe somebody with more experience with Alpaca's version of zipline, which I think is called pylivetrader, can chime in on whether there's any incompatibility that would stop this algo from running, but my impression is that it shouldn't be too hard to get it to work.

Thank you, most useful.

I have by no means finished my work on this excellent algo but as an interim report I have made progress reducing leverage without using "optimise". Certain stocks were repeatedly not getting sold at or around the close so I moved the stock transactions to the open. By the time the bond trades happened at the close, in tests so far, all stock sales were getting processed. And hence the allocations more accurate.

To combat negative allocations to bonds, I simply reduced any negative allocations to zero.

Imperfect and doubtless I shall improve on it once I get to the bottom of the matter.

I'm not at all sure about trading every day and averaging the momentum signal. A similar effect could be achieved (so far as avoiding the dangers of using a single monthly re-allocation date) by trading weekly on an un-averaged signal.

My concern is that trading once a month is very convenient, if potentially risky. Trading every day or even every week would be impossible for me unless I automated.

@Zenothestoic

The thing that has always bothered me about the pipeline implementation (starting with Joakim's version, I think) is that the ranking is done against all stocks (QTradableStocksUS?). In my mind the correct way to rank the factors is to use a mask to limit the comparison to those that are in your universe, i.e. rank(mask=universe), in this case Q3000US. However, if you do it this way, the cumulative return drops to less than half. Can you make an argument (other than that it works better) for using rank() and not rank(mask=universe)?

@Steve Jost
To be honest I am very unfamiliar with the Quantopian API especially as it seems to have changed somewhat since I last visited it.

I find working with the Quantopian IDE about as difficult an experience as engaging in carpentry where you are only allowed to look through a keyhole at your hands and the workbench.

Looks like you are right, how very bizarre. I added a mask separately to each of the fundamental rankings and came up with different (and, as it happens, worse) results. Live and learn, eh?

@Zenothestoic

I'm not an expert on the Quantopian API, but to understand how rank(mask=universe) differs from rank(), it can help to construct two versions of the pipeline in a notebook (see attached). From this, it seems that rank() ranks against a larger population than Q3000US, and that using screen=universe does not change the numerical ranks.
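For anyone who wants to reproduce that comparison, a minimal research-notebook sketch along these lines should do it (the date and the choice of the roic factor are just illustrative):

from quantopian.pipeline import Pipeline
from quantopian.research import run_pipeline
from quantopian.pipeline.filters import Q3000US
from quantopian.pipeline.data.morningstar import Fundamentals as ms

universe = Q3000US()

pipe = Pipeline(
    columns={
        'rank_all':    ms.roic.latest.rank(),               # ranked against everything
        'rank_masked': ms.roic.latest.rank(mask=universe),  # ranked against Q3000US only
    },
    screen=universe,  # the screen trims the rows but does not change 'rank_all'
)

result = run_pipeline(pipe, '2019-06-03', '2019-06-03')
print(result[['rank_all', 'rank_masked']].max())  # max rank reveals the population size ranked against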

ltd_to_eq = Fundamentals.long_term_debt_equity_ratio.latest.rank(mask=universe,ascending=True)  

Yes, the wrong way round of course if you are seeking low debt to equity. As it stands, however, high debt to equity creates exceptional profits, presumably because the company's debt produces leveraged earnings......

If you are a leverage junkie then this might actually suit your purposes - a highly profitable system (at least for the test period) and no need to take on leverage in your trading account.

A sort of no-recourse borrowing where your account can not go below zero.

@ Steve Jost
No, now that you have pointed out the error, I can not put forward any argument for using rank() as opposed to rank(mask=universe).

@ Zenothestoic

It seems the prudent approach is to use rank(mask=universe) even though the return is less.
Without the mask, I worry that the exceptional return may have been a happy accident and not likely to repeat going forward.

Regarding the debt to equity ratio - I've also found that high 'financial_leverage' gives good results.
I think the two metrics are more or less equivalent.

That said, it's most likely the combination of large debt and high 'roic' that does the trick.
A company that generates a high return on capital will do well to leverage its capital at low (and historically decreasing) interest rates.

@ Steve Jost
Your logic sounds right. Lots of debt, but used profitably.

@ Steve Jost

For the private punter that sort of algo makes a lot of sense. Leverage is built in, but in such a way it can not bankrupt you. The return is high enough that you can devote a small amount of capital to it and still have it make you a decent amount of money over 5 or 10 years. And your capital employed is small enough that if it all goes horribly wrong it won't be a catastrophe.

I'm glad this sort of algo has made a return here. So much more interesting than what Big Steve wants for his Billions.

@steve

Thank you for posting the notebook. Using mask=universe is something I have been experimenting with a lot lately. I am trying to understand why choosing the mask results in different returns; it should not. For example, assume a scenario where you rank against the whole universe of 9000 securities (i.e. not using a mask). Now pick one security in that universe, X. Say X has the highest fcf and hence has a rank of 9000, so it will be on top. Now assume your trading universe is Q500US but stock X is not in Q500US; according to our screen, it will be excluded. Continuing further, say a stock Y is ranked 8999 in the whole universe and also happens to be in Q500US, so it will end up in our selection. Had you ranked with mask=universe instead, its rank would be 500 and it would still end up in our selection. Therefore, using the mask should not change the results.

But the question is why it happens in the above algo. I think the answer lies in this line: quality = (roic + ltd_to_eq + value)

Adding rank scores computed over different populations messes up the combined ranking. If one instead uses quality = (roic + ltd_to_eq + value).rank(mask=universe), the result will be exactly the same. Hence, using the mask in the final ranking does not change the result.

This is the most plausible explanation I can come up with. I might be missing something. Please let me know if this sounds logical.

@ Nadeem
I agree, if it's just one factor it doesn't matter if you rank over the universe or over the entire population.
If you combine several factors (as in this script) then it seems to me that it does matter which way you do it.
You don't want stocks outside of your universe to influence the weighting placed on the factors.

I think Dan Whitnable maybe said it best in the comments he added to his source code.
Note that he included (mask=universe) for the various factors but commented out that part of the code.
I think this was commented out only so that his back test would match the result of previous versions.

    # Get the fundamentals we are using.  
    # Rank relative to others in the base universe (not entire universe)  
    # Rank allows for convenient way to scale values with different ranges  
    cash_return = ms.cash_return.latest.rank() #(mask=universe)  
    fcf_yield = ms.fcf_yield.latest.rank() #(mask=universe)  
    roic = ms.roic.latest.rank() #(mask=universe)  
    ltd_to_eq = ms.long_term_debt_equity_ratio.latest.rank(ascending=True) #, mask=universe)  
    # Create value and quality 'scores'  
    value = (cash_return + fcf_yield).rank() #(mask=universe)  
    quality = roic + ltd_to_eq + value  
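For comparison, a fully masked variant of that block (just a sketch of what is being discussed here, not Dan's original) would rank each factor, and the composites, only within the base universe:

    cash_return = ms.cash_return.latest.rank(mask=universe)  
    fcf_yield = ms.fcf_yield.latest.rank(mask=universe)  
    roic = ms.roic.latest.rank(mask=universe)  
    ltd_to_eq = ms.long_term_debt_equity_ratio.latest.rank(ascending=True, mask=universe)  
    # Re-rank the composites within the universe as well  
    value = (cash_return + fcf_yield).rank(mask=universe)  
    quality = (roic + ltd_to_eq + value).rank(mask=universe)  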

In the spirit of co-operation I wanted to report that I am getting better results by doing away with the 10-day skipped period in the momentum calculation (which people leave out because stocks are claimed to mean revert over that time frame). Frankly, I have never been impressed by the argument. Yes, I tend to use 10 days in my mean-reversion system tests, but the benefits of NOT leaving out the past 10 days in trend-following calculations seem sound.

The other change I have made is to use a more sensitive MA crossover of 10 / 100 for the SPY permission filter.

The algo is now reaching Guy Fleury proportions.

For a really ritzy and risky shoot the lights out type system, I'm just using debt to equity as a factor - the higher the better.

Using 5 stocks produces a 38% CAGR over the 2003 to 2019 test period. There is still the occasional leverage spike to iron out - average leverage 1.06. Vol 30, max DD 37%. Universe = Q3000US(). Weekly rebalancing.
I won't bother to post the algo at this stage since I still have to deal with a few matters such as the leverage problem.

But it is certainly all beginning to look highly amusing.

This trading strategy has about the same structure as many others on Quantopian: select some stocks, rank them on some criteria, and rebalance periodically. Use some minimal (lagging) protection, i.e. a 50-200 SMA crossover for this stock-to-bond switcher. A simple technique that has been around for ages.

Would it have been reasonable in 2003 to do so? Definitely, we were just getting out of the aftermath of the Dot.com bubble. A lot of developers (and portfolio managers) had bad memories of that debacle. So, yes, going forward, they would have put some kind of protection which could have very well been some variant and serving the same purpose: to limit the impact of drawdowns and volatility. It would also appear that it is easier to sell to management an automated trading strategy with some kind of protection than without.

This trading strategy is very hard to play with high stakes and a limited number of stocks. However, for a smaller account, it should do quite fine, as long as it stays limited.

The strategy has built-in scalability by design (up to a limit).

Playing 5 stocks on a \(\$\)10M account is not that reasonable. It starts with a \(\$\)2M bet size. I do not think that many here are ready for that whatever the results of some backtests. However, as you increase the number of stocks you see a decline in overall performance. This is quite reasonable too. The strategy tries to pick stocks that are already performing above market averages, and therefore, should provide on average an above-average performance. The thing is that as you add new stocks, they have a lower-ranked expectancy than those already selected. And this will tend to lower bet size and overall performance.

The question becomes: is this still acceptable over the long term?

It is all a question of confidence. In which scenario would you put your money on the table for some 16+ years? A backtest can give you an indication of what could have been. It does not give you what will be. However, based on the behavior of a particular trading strategy, you can surmise that going forward it would behave much as it did in the past. You will not have the exact numbers, but you can still make some reasonable approximations. Sometimes relatively accurate ones, considering that otherwise you would not even have a clue as to what is coming your way.

It is only Quantopian who is looking to place £10m into a strategy. I will be placing a mere £10k.

So yes, I entirely agree. And yes, as you expand to 20 stocks and beyond the returns decrease but that is what you would have to do as the capital grew.
Trend following is as old as the hills. The only difference with this strategy is the accidental discovery of the "wrong" use of the debt to equity ratio.

The big advantage of this strategy is that for a small amount of capital, large gains may be possible for some period without the use of leverage in the trading account.

The leverage is applied by the corporates themselves and is therefore not a direct risk to your trading account ~ it is non-recourse borrowing as regards the trader.

Incidentally, and not surprisingly, you will find returns are also pretty high using 10 and 20 stocks, so for the smaller account as the equity grows, you could employ the greater capital in this way.

All in all, this is a strategy for the small player looking for large returns from a leveraged play without the risks of taking on borrowings on his own balance sheet.

And agreed as to the future. Quo vadis. As in life, so in the markets.

Now, Guy – I have shared an exact strategy with the community. How about you share the code to one of your adaptations of this strategy so that we can see how you achieve your outsize returns?

Alpha Decay Compensation

The following chart is based on Dan Whitnable's version of the program at the top of this thread. All the tests were done using \(\$\)10M as initial capital. The only thing I wanted to demonstrate was that the structure of the program itself will dictate some of its long-term behavior. And as such, one thing we could do was make an estimate as to the number of trades that will be taken based on how many stocks will be traded.

There are 17 consecutive tests presented on that chart. Each test had the number of stocks incremented as the BT number increased. The Q3000US universe was used instead of the Q500US; there was a 2\(\%\) CAGR advantage in doing so. No leverage was used. Nonetheless, the strategy did use some at times (up to 1.6) for short periods, mainly due to the slippage factor. On average the leverage was at 1.0 some 95+\(\%\) of the time.

An analysis of the data can help better understand the overall behavior of the trading strategy and plan for what you would like to see or might prefer as an initial setting. I have not made any change to the logic of Whitnable's version, except from BT # 24 onward, where I commented out the no-slippage line of code. I moved the rebalance function to the beginning of the month and the beginning of the day to allow more trades to be executed. On the last test there were 240 stocks, yet a rebalance generated 16,486 transactions in one day. That is about 3 hours for the average trade execution. So, yes, there is a lot of slippage.

I have not put in any of my stuff in there either. No enhancers, no compensation measures, no additional protection. Nothing to force the strategy to do more than what it was initially designed for.

The strategy, as you increase the number of stocks, does generate a lot of slippage and there is a cost to that. I prefer having a picture net of costs. Therefore, the Quantopian default settings for commissions and slippage were in force.

Some observations on the above chart.

It takes about 30+ stocks to have a diversified portfolio. The more stocks you add, the more the portfolio's average price movement will start to resemble the market average indices. It is only if a trading strategy can generate some positive alpha that it can exceed market averages.

As the number of stocks increases, we see the total return increase up to BT # 25 with its 17.8\(\%\) CAGR. After that, we have the total CAGR decrease as the number of stocks increases.

As explained in my previous post, this is expected since as we increase the number of stocks we are adding lower-ranked stocks having lower CAGR expectancies. The result is reducing the overall portfolio performance.

The more you want to make this a diversified portfolio (having it trade more stocks), the more we have a reduction in the overall performance. It is still positive, it is still above market averages and it does generate some positive alpha.

What I find interesting is the actual vs. estimated number of trades columns. The actual number of trades comes from the tearsheets, whereas the estimate is just that: an estimate based on the behavior of this type of portfolio rebalancing strategy. There is a direct relationship between the number of trades and the number of stocks to be traded, as the following chart illustrates.

Participation Prize

A subject that is not discussed very often here. There is a participation prize for playing the game. Since the rebalance is on a monthly basis, as the number of stocks grows, the average holding duration will too, jumping by close to multiples of a month. Some attribute the gains to their alpha generation when part of it is simply the market-average background.

The estimated free x_bar is a measure of what the market offers just for holding some positions over some time interval. If you hold SPY for 20 years, you should expect to get SPY's CAGR over that period. The same goes for holding stocks months at a time. And over the past 10 years, in an up market, just participating would have generated a profit on the condition that you made a reasonable stock selection.

One cannot call the estimated free x_bar alpha generation. It is a source of profit for sure, but it is not alpha per se. Alpha is what is above the market average, the stuff that exceeds the benchmark's average total return. As the number of stocks increases, we can see the proportion of the estimated free x_bar increase in percentage over the actual x_bar. It is understandable: the average net profit per trade (actual x_bar) is decreasing while the average duration increases and the turnover rate decreases (x_bar is the average net profit per trade; refer to the long equation in a previous post).

Having the total return decrease after BT # 25 can also be interpreted as alpha decay. This is due to the very structure of the program itself. No compensation is applied for this return degradation, and it should continue simply by adding more stocks to the portfolio, or adding more time. And this becomes a rather limiting factor. The more you want to scale this thing up by adding more stocks or more time, the more the alpha will disintegrate. All that is needed is to compensate for the phenomenon.

I have this free and old 2014 paper, which is still valid today, that deals with how to compensate for this return decay (see https://alphapowertrading.com/index.php/publications/papers/263-fix-fraction-2). It should help anyone solve that problem, and thereby, achieve higher returns. The solution does not require much. However, the first step is to understand the problem, and then apply the solution. The paper does provide the explanations and equations needed to address the return decay problem.

Thank You, Zenothestoic for your continued work on this. I look forward to seeing your final version.

@Nadeem Ahmed
Thank you for your kind words!

@Guy Fleury
"I have not put in any of my stuff in there either. No enhancers, no compensation measures, no additional protection. Nothing to force the strategy to do more than what it was initially designed for."

Then just for once Guy, why don't you do so? It would be most interesting. I am sure everyone would enjoy your adaptation of this system.

Here is Quality Companies in an Uptrend (the Dan Whitnable version with fixed bond weights) and some other improvements.

# Quality companies in an uptrend (Dan Whitnable version with bond weights fixed by Vladimir)  
import quantopian.algorithm as algo

# import things need to run pipeline  
from quantopian.pipeline import Pipeline

# import any built-in factors and filters being used  
from quantopian.pipeline.filters import Q500US, Q1500US, Q3000US, QTradableStocksUS, StaticAssets  
from quantopian.pipeline.factors import SimpleMovingAverage as SMA  
from quantopian.pipeline.factors import CustomFactor, Returns

# import any needed datasets  
from quantopian.pipeline.data.builtin import USEquityPricing  
from quantopian.pipeline.data.morningstar import Fundamentals as ms

# import optimize for trade execution  
import quantopian.optimize as opt  
# import numpy and pandas because they rock  
import numpy as np  
import pandas as pd


def initialize(context):  
    # Set algo 'constants'...  
    # List of bond ETFs when market is down. Can be more than one.  
    context.BONDS = [symbol('IEF'), symbol('TLT')]

    # Set target number of securities to hold and top ROE qty to filter  
    context.TARGET_SECURITIES = 20  
    context.TOP_ROE_QTY = 50 #First sort by ROE

    # This is for the trend following filter  
    context.SPY = symbol('SPY')  
    context.TF_LOOKBACK = 200  
    context.TF_CURRENT_LOOKBACK = 20

    # This is for the determining momentum  
    context.MOMENTUM_LOOKBACK_DAYS = 126 #Momentum lookback  
    context.MOMENTUM_SKIP_DAYS = 10  
    # Initialize any other variables before being used  
    context.stock_weights = pd.Series()  
    context.bond_weights = pd.Series()

    # Should probably comment out the slippage and using the default  
    # set_slippage(slippage.FixedSlippage(spread = 0.0))  
    # Create and attach pipeline for fetching all data  
    algo.attach_pipeline(make_pipeline(context), 'pipeline')  
    # Schedule functions  
    # Separate the stock selection from the execution for flexibility  
    schedule_function(  
        select_stocks_and_set_weights,  
        date_rules.month_end(days_offset = 7),  
        time_rules.market_close(minutes = 30)  
    )  
    schedule_function(  
        trade,  
        date_rules.month_end(days_offset = 7),  
        time_rules.market_close(minutes = 30)  
    )  
    schedule_function(record_vars, date_rules.every_day(), time_rules.market_close())  

def make_pipeline(context):  
    universe = Q500US()  
    spy_ma50_slice = SMA(inputs=[USEquityPricing.close],  
                         window_length=context.TF_CURRENT_LOOKBACK)[context.SPY]  
    spy_ma200_slice = SMA(inputs=[USEquityPricing.close],  
                          window_length=context.TF_LOOKBACK)[context.SPY]  
    spy_ma_fast = SMA(inputs=[spy_ma50_slice], window_length=1)  
    spy_ma_slow = SMA(inputs=[spy_ma200_slice], window_length=1)  
    trend_up = spy_ma_fast > spy_ma_slow

    cash_return = ms.cash_return.latest.rank(mask=universe) #(mask=universe)  
    fcf_yield = ms.fcf_yield.latest.rank(mask=universe) #(mask=universe)  
    roic = ms.roic.latest.rank(mask=universe) #(mask=universe)  
    ltd_to_eq = ms.long_term_debt_equity_ratio.latest.rank(mask=universe) #, mask=universe)  
    value = (cash_return + fcf_yield).rank() #(mask=universe)  
    quality = roic + ltd_to_eq + value  
    # Create a 'momentum' factor. Could also have been done with a custom factor.  
    returns_overall = Returns(window_length=context.MOMENTUM_LOOKBACK_DAYS+context.MOMENTUM_SKIP_DAYS)  
    returns_recent = Returns(window_length=context.MOMENTUM_SKIP_DAYS)  
    ### momentum = returns_overall.log1p() - returns_recent.log1p()  
    momentum = returns_overall - returns_recent  
    # Filters for top quality and momentum to use in our selection criteria  
    top_quality = quality.top(context.TOP_ROE_QTY, mask=universe)  
    top_quality_momentum = momentum.top(context.TARGET_SECURITIES, mask=top_quality)  
    # Only return values we will use in our selection criteria  
    pipe = Pipeline(columns={  
                        'trend_up': trend_up,  
                        'top_quality_momentum': top_quality_momentum,  
                        },  
                    screen=top_quality_momentum  
                   )  
    return pipe

def select_stocks_and_set_weights(context, data):  
    """  
    Select the stocks to hold based upon data fetched in pipeline.  
    Then determine weight for stocks.  
    Finally, set bond weight to 1-total stock weight to keep portfolio fully invested  
    Sets context.stock_weights and context.bond_weights used in trade function  
    """  
    # Get pipeline output and select stocks  
    df = algo.pipeline_output('pipeline')  
    current_holdings = context.portfolio.positions  
    # Define our rule to open/hold positions  
    # top momentum and don't open in a downturn but, if held, then keep  
    rule = 'top_quality_momentum & (trend_up or (not trend_up & index in @current_holdings))'  
    stocks_to_hold = df.query(rule).index  
    # Set desired stock weights  
    # Equally weight  
    stock_weight = 1.0 / context.TARGET_SECURITIES  
    context.stock_weights = pd.Series(index=stocks_to_hold, data=stock_weight)  
    # Set desired bond weight  
    # Open bond position to fill unused portfolio balance  
    # But always have at least 1 'share' of bonds  
    ### bond_weight = max(1.0 - context.stock_weights.sum(), stock_weight) / len(context.BONDS)  
    bond_weight = max(1.0 - context.stock_weights.sum(), 0) / len(context.BONDS)  
    context.bond_weights = pd.Series(index=context.BONDS, data=bond_weight)  
def trade(context, data):  
    """  
    Execute trades using optimize.  
    Expects securities (stocks and bonds) with weights to be in context.weights  
    """  
    # Create a single series from our stock and bond weights  
    total_weights = pd.concat([context.stock_weights, context.bond_weights])

    # Create a TargetWeights objective  
    target_weights = opt.TargetWeights(total_weights) 

    # Execute the order_optimal_portfolio method with above objective and any constraint  
    order_optimal_portfolio(  
        objective = target_weights,  
        constraints = []  
        )  
    # Record our weights for insight into stock/bond mix and impact of trend following  
    # record(stocks=context.stock_weights.sum(), bonds=context.bond_weights.sum())  
def record_vars(context, data):  
    record(leverage = context.account.leverage)  
    longs = shorts = 0  
    for position in context.portfolio.positions.itervalues():  
        if position.amount > 0: longs += 1  
        elif position.amount < 0: shorts += 1  
    record(long_count = longs, short_count = shorts)  

And here is its performance with the parameters of my code.
You may compare it to the results of my 37-line code.

@Vladimir
Wonderful, thank you for that, I will look closely tomorrow. I too have been working on putting all the ranking and masking into Pipeline, but so far it does not seem to relate at all to my former versions. I shall look to correct the errors over the next few days and in the meantime will look with great interest at your code.

@Vladimir
By the way, I quite like the idea of NOT equal-weighting each month, but instead summing the percentages of the existing holdings and the new 1/20th weightings and then normalizing if they come to over 1, as suggested by @Viridian Hawk. So as to let profits run.

Dear All,

I have been following this thread since the start and thank you all for contributing.

I am a web developer wanting to learn Python and algo trading.

I am new to Quantopian and want to learn as much as possible.

I have backtested the Dan Whitnable version with fixed bond weights posted by Vladimir.

Here are the results.

Thanks
Ashish

I'm finding a huge difference between using the momentum calculation within pipeline:

ltd_to_eq_rank = Fundamentals.long_term_debt_equity_ratio.latest  
indebted = ltd_to_eq_rank.top(50,mask=universe)  
mom =Returns(inputs=[USEquityPricing.close],window_length=126,mask=indebted)  

and Pandas used on the output of the pipeline:

quality_momentum = prices[:-context.momentum_skip_days].pct_change(context.relative_momentum_lookback).iloc[-1]  

I suppose it could be the fill method and the limit applied to the fill method; I guess I need to look at Zipline to see exactly what is going on. And it has nothing to do with the deduction of the 10 mean-reversion days... I have accounted for that.

Then again, I could be overlooking something simple and obvious.

I will revert to the research notebook to compare the outputs and understand the differences.

The source code tells me that Returns uses:

window_safe = True  

Whereas the pandas calculation looks at the adjusted close, I believe I am right in saying the Returns factor looks at some sort of normalized prices, which I confess puzzles me. Hey ho. More work to be done. And then of course there is the necessity to re-code and move to Quantconnect or get my own data.
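For anyone who wants to line the two calculations up directly, a rough research-notebook sketch would be something like this (the symbol, dates, and lookbacks are my own illustrative choices):

from quantopian.research import run_pipeline
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data import USEquityPricing
from quantopian.pipeline.factors import Returns
from quantopian.pipeline.filters import StaticAssets

LOOKBACK, SKIP, DAY = 126, 10, '2019-06-03'
asset = symbols('AAPL')

# Pipeline version, as in the algo: overall return minus the recent (skipped) return
pipe = Pipeline(
    columns={'mom_pipeline': Returns(inputs=[USEquityPricing.close],
                                     window_length=LOOKBACK + SKIP)
                             - Returns(inputs=[USEquityPricing.close],
                                       window_length=SKIP)},
    screen=StaticAssets([asset]),
)
mom_pipeline = run_pipeline(pipe, DAY, DAY)['mom_pipeline'].iloc[0]

# Pandas version, as done outside the pipeline: drop the last SKIP days entirely
prices = get_pricing(asset, start_date='2018-06-01', end_date=DAY, fields='price')
mom_pandas = prices[:-SKIP].pct_change(LOOKBACK).iloc[-1]

print(mom_pipeline, mom_pandas)

Note that, quite apart from any price-adjustment question, the two are not computing quite the same thing: the pipeline factor is the overall return minus the recent return, while the pandas line is the lookback return ending SKIP days ago.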

A trading strategy is just that: a trading strategy. It can easily be expressed as a payoff matrix:
$$\mathsf{E}[\hat F(T)] = F_0 + \sum_1^n (\mathbf{H} \cdot \Delta \mathbf{P}) = F_0 + n \cdot \bar x = F_0 \cdot (1 + r_m(t) + \alpha_t(t) - exp_t(t))^t$$ where \(g(t) = r_m(t) + \alpha_t(t) - exp_t(t)\).

If you intend to play small, you could look at the equation this way: $$\mathsf{E}[\hat F(T)] = F_0 + n \cdot \bar x = F_0 \cdot (1 + g(t))^t$$ where the initial capital \(F_0\) can play a major role.

As long as a trading strategy is scalable, sustainable, and marketable, it does not care so much about how much you put on the table (\(F_0\)). However, because it is a compounding return game that can be made to last, \(F_0\) will matter a lot.

Using a small stake, even if the strategy is profitable, is really wasting the strategy's potential. Say you get a 20\(\%\) long-term CAGR; you might get something like this depending on the initial capital:

\(\mathsf{E}[\hat F(T)] = 10,000 \cdot (1 + 0.20)^{20} = 383,376\)

\(\mathsf{E}[\hat F(T)] = 100,000 \cdot (1 + 0.20)^{20} = 3,833,760\)

\(\mathsf{E}[\hat F(T)] = 1,000,000 \cdot (1 + 0.20)^{20} = 38,337,600\)

\(\mathsf{E}[\hat F(T)] = 10,000,000 \cdot (1 + 0.20)^{20} = 383,375,999\)
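(For anyone who wants to verify those figures, they are just straight compounding at the assumed 20\(\%\) CAGR:)

# Straight compounding check of the figures above: F0 * (1 + 0.20) ** 20
for F0 in [10000, 100000, 1000000, 10000000]:
    print(F0 * (1 + 0.20) ** 20)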

Putting up more initial capital does not require any trading skills, none at all.

However, it does require finding ways to either raise the cash or have it allocated to the strategy in some way. Nonetheless, 20 years wasted can also be considered as non-productive time, and that time has no reset button.

If a trading strategy has good long-term prospects, why waste its potential by constraining it to a small initial capital? Doesn't your strategy deserve better? And don't you?

@Guy Fleury
I give up. I just give up.
For heaven's sake, either produce some code or just let it be.
No offence meant, but what is SO difficult about coming up with some code like the rest of us do?

@Anthony, there is no need to post the algo. Like I said in my previous post: I used Dan Whitnable's version that is already posted. Changed the Q500US to Q3000US. Anybody can do that. Changed the scheduled rebalancing to month_start and market_open. For each test context.TARGET_SECURITIES and context.TOP_ROE_QTY were incremented as per the # of stock column in the above chart. Almost forgot, commented out the set_slippage line starting with BT # 24 due to high slippage. That's it. No other changes.

Here is Vladimir's latest version with the same changes.

@Guy Fleury

"I have not put in any of my stuff in there either. No enhancers, no compensation measures, no additional protection. Nothing to force the strategy to do more than what it was initially designed for."

So what is the point in simply re-posting someone else's code unaltered?

I'm really not trying to be difficult Guy but you talk of enhancers and yet we never see your code for any enhancements. So perhaps you could post one of your 90% CAGR tests with code.

Unless the only enhancement is leverage over and above the few changes you refer to above? Perhaps you do not have any further enhancements to the code other than upping the leverage?

Somehow there is a disconnect here. Perhaps I am simply misunderstanding you.

But if you DO have further enhancements to Dan's code(or Vladimir's) perhaps you could post them?

If you do NOT have any further enhancements to contribute and your 90% CAGR was simply based on leverage, then would you be so kind as to say so?

Skip_days.

The argument is that stocks are mean-reverting over 10 days, hence the following code:

 context.momentum_skip_days = 10  
 prices = data.history(df.index,"close", 180, "1d")  
 #Calculate the momentum of our top ROE stocks  
 quality_momentum = prices[:-context.momentum_skip_days].pct_change(context.relative_momentum_lookback).iloc[-1]  

The argument does not seem to stand up in a few of the tests I have been running.

Using 10 skip_days and 5 stocks gives a 2261% total return.
Using 2 skip_days and 5 stocks gives a 25000% return.

So the return increases as you reduce the number of excluded days. In other words, in my tests using my parameters, it is best NOT to exclude a period in the momentum calculation. Here is an example using only 2 skipped days.

Of course my tests have used 5 stocks only.

But using 20 stocks with the attached code provides similar evidence.
10 skipped days = total return of 3845%
1 skipped day = total return of 5121%

So what's with the much talked of mean reversion period of 10 days?

Is it nonsense or am I missing something?

You will notice that leverage has not yet been sorted out in this algo. It averages just above 1 but sometimes goes up to 1.2-ish. I will get around to normalizing the weights at some stage, which will presumably reduce returns.
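If anyone wants to see how sensitive the raw momentum number is to the skip window before running full backtests, a quick research sketch like this will do it (symbol, dates, and lookback are illustrative only):

# How the skip window changes the momentum figure for a single name
lookback = 126
prices = get_pricing(symbols('AAPL'), start_date='2018-01-02',
                     end_date='2019-06-03', fields='price')
for skip in [1, 2, 5, 10]:
    mom = prices[:-skip].pct_change(lookback).iloc[-1]
    print(skip, mom)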

Anthony,

The latest algo you have used in your research still has an uncontrolled leverage problem.
Here are the results of your latest setup in my version of the algo:

----------------------------------------------------------------------------

QTU = Q3000US(); MKT = symbol('SPY'); BONDS = symbols('IEF');
MOM = 126; EXCL = 2; N_Q = 50; N = 5; MA_F = 10; MA_S = 100; LEV = 1.0;

----------------------------------------------------------------------------

with the line # set_slippage(slippage.FixedSlippage(spread = 0.0)) left commented out.
Why cheat yourself?

Hi old & new Q friends. For me, coming back after 6 months away, it is nice to see people sharing & contributing to this algo. Thanks Chris for starting the thread. The idea of "Quality Stocks in an Uptrend" is exactly what I like to try to achieve in my own very small-time personal investing. Now I ask whether we can add a useful additional dimension besides "Quality" and "Uptrend" for individual stocks. Good quality stocks will KEEP going up if the demand for them continues from the overall investment community, and this is related in part to what the investing "fashion" is at the time. Sometimes it is big caps, sometimes small caps, sometimes high-earnings-growth stocks, sometimes value such as low-PE stocks or low-debt stocks, and sometimes particular industries or market segments or some other specific factors. Now, anyone who has looked at Markov chains & transition probabilities for different market regimes or different types of stocks will have noticed that generally there is a high degree of persistence in market behavior. If we identify what is currently the dominant "investment fashion" in the market at any given time, we can observe that this usually has a tendency to persist, at least for a while. As the saying goes, "a rising tide tends to lift all boats", so I now propose adding an additional component or dimension to the mix when seeking the best investment opportunity, based on THREE legs: 1) Quality, 2) Uptrend (as you already have), and now also 3) the leading "investment fashion" group at any given time.

As I see it, "Investment fashion" could potentially be any of the following:
a) Any of the "Investing Styles" as defined in Q, namely: Momentum, Size, Value, Short Term Reversal, Volatility.
b) Any of the 11 different Economic Sectors as defined & used in Q.
c) Other possibilities such as correlation with interest rates or with those commodities showing the greatest price appreciation, etc.

So, without going to code just yet (my Python skills are not great), I envisage using what we already have in this thread, and then defining & ranking the currently leading "Investment Fashion" as per a), b), c) above and adding this in, to further enhance the selection criteria for the best potential returns.

Comments, please. Anyone care to try coding up this additional component or dimension of "Investment Fashion" (in an adaptive way but with some lag), to improve the mix even further?

Vladimir

Many thanks for that. There is no specific setting for "leverage" in your code, is there? To be honest I have spent most of my time trying to work out the Q API, and I intended to fix leverage later by normalizing the weights.

I need to work through your code to understand it.

I can't match your results using your code. Could you post the full version? I get a total return of around 7000% using those settings on your code.

I am nervous about using Optimise, to be honest: I would prefer to see exactly how it is done. But I daresay it's harmless enough.

Yes, quite agree re slippage.

Is your MOM calculation being done within Pipeline as per the code you posted above, or outside it? I can not achieve those results calculating MOM within pipeline. My results are achieved by calculating MOM outside pipeline.

Also have you adjusted your code to re-balance weekly as I did or are you rebalancing "date_rules.month_end(days_offset = 7)" as per your code above?

Anthony,

Happy backtesting.

Anthony,

I am nervous about using Optimise to be honest...

order_optimal_portfolio(opt.TargetWeights(wt), [opt.MaxGrossExposure(LEV)])  

does almost the same as this:

for sec, weight in wt.items(): order_target_percent(sec, weight)  

Vladimir
Very generous and thank you
A

This is how I would approach the over-leverage/shorting issue. I've also simplified the code, while (hopefully) keeping the original logic intact. (It does briefly over-leverage, probably due to a halted stock, but it's not bad.) I'm not sure why the performance lags Zenothestoic's 7301% return version so much, since I tried to match the parameters from that one. Can anyone spot the difference?

Anyway, the reason I did the above was to illustrate that if you approach portfolio construction as the creation of a todays_weights dictionary, it becomes very easy to take it one step further and control the holding period via rolling portfolios. Here I show how to do 20 rolling portfolios that are each held for 20 days, thereby maintaining the same turnover rate as a monthly rebalance while diversifying away from day-of-the-month overfit noise risk.

As I suspected, in the case of this strategy, it doesn't make much difference because the quarterly data is super low frequency and turnover was already ridiculously low. However, this will create more uniform turnover, instead of monthly spikes. It also allows you to hold more positions at once without resorting to holding positions with weaker alpha signals. Generally, this should lower volatility (via diversification and lower signal-name risk) without sacrificing alpha.

This technique is super useful if you are dealing with an alpha signal that fluctuates faster than the ideal holding period. It also gives you much more granularity than rebalance_weekly and rebalance_monthly. You can do 10-day or 30-day holding periods.

The only drawback is that it doesn't keep track of a position's gains/losses and always rebalances back to its original target weight. So on the long side it'll work against you on momentum stocks but to your advantage if the positions tend to mean revert.

The following is based on @Anthony's latest version, where I changed the number of stocks to 240 and moved the rebalancing to week_start and market_open to make it the same as in @Vladimir's version. I also commented out the no-slippage line.

The surprise is the smoothness of the equity curve and how it passed the financial crisis with barely a dip, considering that period's market turmoil.

For some reason, the strategy did not trade the 240 stocks as requested but only about 50-53. Nonetheless, for a long only strategy, it is a remarkable equity curve with low volatility and low drawdowns.

Now, the problem becomes keeping the smoothness of the equity curve and raising its outcome. But first, there is a need to know why it did not trade the 240 stocks.

Guy, I believe it's not trading 240 stocks but only 50 because of this constraint in the code: indebted = ltd_to_eq_rank.top(50, mask=universe)

Changed N_Q to 60.

QTU = Q3000US(); MKT = symbol('SPY'); BONDS = symbols('IEF', 'TLT');
MOM = 126; EXCL = 2; N = 5; N_Q = 60; MA_F = 10; MA_S = 100; LEV = 1.0;

@Vladimir

For my taste, I would not invest in the US long bond. The 7-to-10-year is already ritzy enough and may or may not offer protection in a stock market crash. In my testing, the long bond has sometimes been 90% correlated to stocks and had a VAST drawdown in the early 1980s Volcker interest-rate-rise regime.

In fact I might even downgrade to IEI.

But who am I to say....

Super return anyway!

Could someone please help me get this code right? I am trying to weight the portfolio according to the strength of the rank rather than equally weighting the positions. I am trying to use the line wt_stk = output.quality / output.quality.sum() instead of the original wt_stk = LEV/len(stocks). But it gives me a runtime error: TargetWeights() expected a value with dtype 'float64' or 'int64' for argument 'weights', but got 'object' instead.

@Nadeem -- I think you need to do something more like wt_stk = output.quality[s] / output.quality.sum() -- note the [s].
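Casting the pipeline column to float before normalizing should also clear the dtype error. A sketch only; output and stocks here stand in for whatever names your version uses:

# Rank-proportional weights, cast to float so opt.TargetWeights accepts them
quality = output['quality'].astype(float)
wt_stk = quality / quality.sum()   # weights sum to 1.0 across the selection

The resulting series, indexed by security, can be passed to opt.TargetWeights directly or restricted to your selected stocks first.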

@Nadeem Ahmed

I dealt with the weighting as follows. This allows stocks to run but also allows new stocks to come in.

    # Start each selected stock and the bond at a weight of zero  
    context.stock_weights = pd.Series(index=top_n_by_momentum.index , data=0.0)  
    context.bond_weights = pd.Series(index=[context.bonds], data=0.0)  

    # Held stocks still on the momentum list keep their current share of equity;  
    # stocks that have dropped off the list are closed  
    for x in context.portfolio.positions:  
        if x in top_n_by_momentum and (x.sid != context.bonds):  
            a=context.portfolio.positions[x].amount  
            b=context.portfolio.positions[x].last_sale_price  
            c=context.portfolio.portfolio_value  
            s_w=(a*b)/c  
            context.stock_weights.set_value(x,s_w)  
        if (x not in top_n_by_momentum) and (x.sid != context.bonds):  
            order_target_percent(x, 0)  
    # New entrants come in at 1/N, but only if the trend filter passes  
    for x in top_n_by_momentum.index:  
        if x not in context.portfolio.positions and context.TF_filter==True:  
            context.stock_weights.set_value(x,1.0 / context.Target_securities_to_buy)

    # If the total stock weight exceeds 1, normalize and drop the bond;  
    # otherwise the bond takes whatever is left over  
    if context.stock_weights.sum()>1:  
        stocks_norm=(1.00/context.stock_weights.sum())  
        context.stock_weights=context.stock_weights*stocks_norm  
        context.bond_weights.set_value(context.bonds,0.0)  
    else:  
        context.bond_weights.set_value(context.bonds,1-context.stock_weights.sum())  
    total_weights = pd.concat([context.stock_weights, context.bond_weights])

    for index, value in total_weights.iteritems():  
        order_target_percent(index, value)  

@Viridian Hawk

This is how I would approach the over-leverage/shorting issue. I've also simplified the code, while (hopefully) keeping the original logic intact. (It does briefly over-leverage, probably due to a halted stock, but it's not bad.) I'm not sure why the performance lags Zenothestoic's 7301% return version so much, since I tried to match the parameters from that one. Can anyone spot the difference?

It is because I calculate momentum OUTSIDE the pipeline. This whole muck-up about window_safe, as mentioned in one of my posts above, is what makes the difference.

Prices are adjusted for splits/consolidations/dividends anyway, so I don't understand the point of this window_safe junket. Also, if Returns() gets the window_safe treatment, I don't understand why Q does NOT do the same for a moving average.

Anyway life is too short to worry about it. I always base my systems on adjusted prices and don't muck about any further. I have no idea what this additional "normalisation" is all about and I mistrust what I can not see. Hence my dislike of optimise also.

But perhaps I am just an ignorant Luddite.

Since a) I can't be arsed to place weekly trades manually and b) there is no live trading on Quantopian, I'm off to try and replicate this on Quantconnect.

Anyone got any other ideas vis a vis the simplest way to automate?

I absolutely do NOT want to have to muck about obtaining stock and fundamental data and then load it all in the cloud. And then work out how to link it all up to a broker.

I have a habit of disappearing down rabbit holes for months, and I have a feeling if I tried to go it alone, I would never re-emerge.

I understand this may be a novice question, but I am wondering about the quality measure used. Is there some “look forward” bias due to back testing vs. live trading? The quality factor used for the algo gathers the “latest” measure (roe, ltd_to_eq, etc.). Assuming most measures are based on quarterly data the Morningstar database reports the measure for the end of the quarter. Then, the back testing algo uses that data.

For example, when back testing the algo on January 31, 2019, it would pull the measures for December 31, 2018 (the latest end of quarter). However, in live trading we would most likely not know the December 31, 2018 measure on January 31, 2019. The company had probably not reported earnings by January 31. In real time if we screened the stocks on January 31, we would most likely get rankings based on the end of the 3rd quarter 2018.

Again, I am new to the platform, and perhaps the Morningstar database only reports figures that coincide with reporting dates? But that would seem odd. I have very limited Python skills but am wondering what the results would look like if we lagged the fundamental measures by a quarter. If my description of reporting date vs. database date is correct, I believe the way to accurately reflect live trading is to use the "known" measures. In some cases that would be the "latest" as used in the algo, but in other cases the correct fundamental measures would need to be lagged one quarter.
Your feedback is welcome. Thank you.

@John Sawyer, Quantopian timestamps data according to when it was actually available. On historical data that predates Quantopian's real-time collecting, I believe they add a conservative delay to ensure there is no look ahead bias. There is no need to delay data by one quarter.

Part of the reason why the original version over-leverages is this logic:

    for x in top_n_by_momentum.index:  
        if x not in context.portfolio.positions and context.TF_filter==True:  
            order_target_percent(x, (1.0 / context.Target_securities_to_buy))  
            print('GETTING IN',x)  

New positions are given a 5% weight, but existing positions are allowed to grow above their original 5% allocation. Since most positions are expected to generate a return (gain weight), the sum of the 20 position weights will be greater than 100%.

The version I posted solves this problem by adjusting all positions back to equal-weight at each rebalance.
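
For anyone following along, here is a minimal sketch of that equal-weight reset (assuming top_n_by_momentum, context.TF_filter and context.Target_securities_to_buy are defined as in the snippet quoted above; this is only the re-weighting idea, not the full rebalance logic):

    weight = 1.0 / context.Target_securities_to_buy

    # Close anything that has dropped off the selection list.
    for stock in context.portfolio.positions:
        if stock not in top_n_by_momentum.index:
            order_target_percent(stock, 0.0)

    # Re-target every selected stock back to its equal-weight slot,
    # so that winners cannot drift the book above 100% exposure.
    if context.TF_filter == True:
        for stock in top_n_by_momentum.index:
            order_target_percent(stock, weight)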

@Zeno, I figured out the discrepancy between my version and yours. Looks like momentum works just as fine inside of pipeline as out, and there is no window safe issue. What I missed was the change from sma50/200 for the trend filter to 10/100. That change contributes significant improvement! If in addition I allow over-leveraging as I described in my previous post, my results are starting to look pretty close to yours.

While I'll generally agree that a bear market filter is a good idea, I'm concerned that as soon as you start tweaking those settings, you're drifting into curve-fitting territory. Finding the ideal historical SMAs probably isn't going to be any more predictive than choosing one at random. Intuitively there are so many variables -- VIX, interest rates, volume, P/E, sentiment -- that simply looking at SMA crossovers is more like looking at the symptom than the cause. The reality is that it's going to behave erratically. Perhaps there is a more fundamental metric we can use to determine when high lt debt-to-equity companies start to underperform.

Weird. I must check the inside-pipeline version again. Totally agree on curve fitting. I will trade this but it WILL be very erratic. Pity we can't see what happened in the tech crash of 2000.

As you can see, I solved the over-leverage issue in a different way. I placed my code in this thread a few entries above. My version was more of a trend-following version. No equal weighting.

@Viridian, there are still leverage spikes, even in your version. They occur when exiting from bonds. As if, at rebalancing, not all the bonds are sold. Note, that I tested with the no-slippage line commented out to give it a more realistic outcome. Also, initial capital was at $10M, and the number of stocks at 100.

@Guy -- At $10m book w/slippage enabled it's not surprising that not all orders are getting filled, especially as the account grows. If it's limited liquidity on the bond ETF that's causing problems, you could distribute your bond position between more liquid ETFs like TLT. Also, I'm aware my version over-leverages occasionally due to halted stocks as well, but I figure those aren't worth worrying about, since they aren't inflating the backtest returns in the same way over-leveraging on non-halted stocks does. Another consideration, many companies in the Russell 3000 aren't liquid enough for a $10m-$1t account, so I would do some market-cap weighting if you want to trade such a large book, though of course that will hurt returns. I wouldn't trade $10m with this strategy, but if you were to do so, it would be wise to put in some extra execution logic in order to minimize slippage. Alpha decay is very low, so you can take your time legging in and out of positions if need be.

Moved trade execution to market_open().
QTU = Q3000US(); MKT = symbol('SPY'); BONDS = symbols('TLT', 'IEF'); MIN = 1;
MOM = 126; EXCL = 2; N = 5; N_Q = 60; MA_F = 10; MA_S = 100; LEV = 1.0;

EXCL = 2;

I'm finding Zeno is right, best to eliminate the momentum exclusion altogether.

Moved trade execution to market_open()

Is the idea that in live trading you would use OPG/MOO orders? Otherwise spreads are too wide at open and Quantopian's fill model is going to be much too generous, especially on the small caps.

TBH, yes. I am intending to do just that. I'm currently mucking about trying to get Quantconnect up and running. It's not laziness but I know I would loathe having to put the orders in manually. I also want to work on a few other filters (other than indebtedness) to see if there is something else I can complement this with. Momentum again, but hopefully with a different filter to ensure I end up with different stocks.

We've been juicing this algorithm's results without a hold-out. I'm going to put it on my calendar to check back in 6-months to a year and see whether any of what we did improved the OOS performance.

A possible alternative is to go back and year by year work out parameters which produce "average results". In a sense that would be OOS.

Probably as good as looking at the optimal parameters in six months time.

In other words if you want to trade 5 stocks, keep that as a constant and fiddle around to find the average best days to re-balance and so on. Don't pick the worst or the best but somewhere in the middle.

We know well, of course, that the future will only vaguely resemble the past, and in that regard no amount of hold-out guarantees future results. Nor does the use of average parameters, of course, but it may still prove a useful exercise.

So you might end up trading parameters which have produced results away from the very bottom and the very top, somewhere in the middle. You may have noticed that with some of the parameters, gradual changes occasioned quite smooth, correlated moves in the equity curve.

For instance, reducing the momentum exclusion from 10 incrementally down to 0 produced a steadily increasing equity curve. So it's probably as worthwhile fiddling around with this as waiting six months.

@vladimir - A quick question. Are those results without default slippage and transaction costs? I used the same variables with a weekly rebalance and commented out the zero-slippage line. I am getting only half of your return, 15,923% to be precise.

@Nadeem,

Are those results without default slippage and transaction cost?
With commented # set_slippage(slippage.FixedSlippage(spread = 0.0)). Why cheat yourself?
I did not change anything in trade().
What is your definition of quality in pipeline?

Wow, so I'm getting only half of your result. I must be missing something crucial. I am using the algo from Anthony. It uses only the following:

indebted = ltd_to_eq_rank.top(60,mask=universe)

and then choosing top 5 momentum from those 60.

amazing to find signal in this sort of noise.

Mr V Hawk
Would you be kind enough to interpret that chart for me? I am not clear what it is purporting to show?

:-)

Blue dots are forward 60-day gains, red dots are forward 60-day losses. The size of a dot is the size of the gain/loss. The x-axis is the raw debt-to-equity value, and the y-axis is the 125-day return (which this algo has been using as "momentum"). The first chart shows how noisy this data is. The second chart zooms into the region the algo acts on, and you can kind of see more blue in the top quadrant. I was just curious whether viewing the data this way would give any insights. I don't think so, but mostly it's just pretty to look at.

Many thanks. Back to square one then I guess and either taking a punt or waiting through an OOS period. No guarantees either way of course!

@Viridian, your two charts are what you should have expected to find. The same kind of results as was found in the late '60s and thereafter, that we still have to contend with today.

However, those two charts do say a lot. The short-term forward mean return is close to 0.0. However, even from a visual inspection, we can see a slight upward edge from the mostly bell-shaped distribution. There is a signal in there, but it is faint. Nonetheless, it does carry the long-term market average upward drift.

The chart also says how difficult it is to capture the positive side of those dots since a lot of randomness will have to be addressed. And by the very nature of the distribution, high hit rates will be difficult to achieve.

Discriminating those dots becomes a statistical problem in a tumultuous and quasi-unpredictable sea of variances.

Why choose a 60 day return? Might not 20 days be more appropriate for monthly trading?

@Viridian, in reference to your post, yes, agree. I had the same observations.

I am testing the limits and uses of this type of trading strategy. Currently, my interest is two-fold. How does it behave when you scale it up? And, a much more interesting question: how much of it can you anticipate?

This strategy can be viewed as fixed-fraction-of-equity position sizing with equal-weighted, monthly scheduled rebalancing, like many on this site. Its move to safety is a switch to bonds on a SPY 50-200 SMA crossover. On small quantities of stocks (5 to 20) and relatively small initial capital (10k to 100k) it appears to be doing fine. However, most of the simulations here are on the same 5 or 20 stocks since they use the same stock selection process. Nonetheless, the bet sizes start relatively small (2k to 20k for the 5-stock scenario and from 5k to 20k for the 100k scenario). The bet size varies from 5% to 20% of equity which, in general, would tend to make the proposition riskier. It could be viewed as high portfolio concentration, especially at the 20% bet sizing level.

The nature of the problem changes as you increase the initial capital further. And if you want to play at higher levels, you will need your trading strategy to adapt to such an environment.

The ability of a trading strategy to scale up should almost be a prerequisite. You want to know if it can scale or not since it is designed to grow, and at an exponential rate at that. Also, since the strategy's stock selection process always results in the same ranking for the same stocks, you want to know how it will handle more stocks and/or a different selection in order to spread market risks to more than just 5 to 20 stocks.

The projected number of trades using the top 100-ranked stocks, as per the chart in a prior post, was 11,320. The backtest using your version of the program came in with 10,517 trades. Almost within its anticipated range of 10,613 to 11,497 over the trading interval (that is within 1% of the lower range's reach). Likewise, the projected CAGR was from 15.10% to 15.94%. The backtest came in with 15.03%.

What is remarkable here is that we can make such a projection (I have formulas and equations for that), not for a week or a month ahead, but for, in this case, a 16.7-year interval. And, these projections fell pretty close to the actual simulation result (look at the cited post above).

Putting these projections at play for big long-term targeted return funds opens up an interesting door for this type of strategy. As a matter of fact, any trading strategy where you could project that far into the future with a reasonable approximation could greatly benefit from those structural scaling capabilities.

Hello, I would like to give my little contribution to this by applying the stock-picking logic (first selection by ROE, then further selection by momentum) inside each single sector; the logic of the sector selection is basically this one.
It does not make huge money, but I like the idea of sector diversification, and I also like comparing fundamentals within each sector separately because I think it gives more robustness to the algo, and maybe someone could improve this.
Hope this is not too much “off topic” with regard to the original algo.

Sorry in advance for a newbie question, but I cannot seem to find the answer anywhere. How do I get the current stock symbols for this model? Do I have to write some code to output the current symbols?

Thanks.

@Andy, probably the easiest is to run a "full backtest" through the most recent date and navigate to "Activity" -> "Positions" and you'll see the positions organized by date.

Sorry for some more newbie questions.. When I ran the backtest on the original Algo I noticed:

  1. It is a bit confusing to see that the cash becomes negative (e.g. -$92,363.61 on 2019-11-18). Why is that?
  2. There are a couple of leverage spikes of about 2x (e.g. on 04/04/10). Why does that happen? Does that mean we need to borrow (e.g. on margin) $100,000 for those days?

EDIT: 3. Oh, and one more thing. In actual trading, since the backtest only gives the previous day's data, we need to do the actual rebalancing the next day with our broker. I wonder if the backtest results of the algo would change if this were incorporated in the algo? For example: do the evaluation at the close on the last day of the month, and then do the actual rebalancing the following day (e.g. at 12 PM, if the backtest data would be available by then).
I don't know enough of Quantopian/Python yet to test if this would make a difference in performance.

(p.s. I am used to Tradestation and in strategies with daily data, I would evaluate after closing and place potential orders for the opening at the next day).

I have noticed that if you comment out the market cap filter in pipeline the strategy performs better.
What are the main risks of doing that during live trading?

@Chris, @Joakim

According to Bernstein, Style Investing, 1995 (book), the risk/return characteristics of quality measures in stocks favor low quality.
In their example they used the S&P quality rating, A+ to D, where A+ is the highest quality.

From 1986 to 1994:
A+ achieved a mean return of 9.57% (yearly) with a std. dev. of 14.83%.
C/D got a mean return of 19.28% with a std. dev. of 27.66%.
The other ranks behave accordingly (B: 13.91%).
(Source: Merrill Lynch Quantitative Analysis)

This explains why the changes from Joakim boosted the performance.
It might be a good idea to combine low quality with momentum.

@ carsten

Generally speaking, "combine low quality with momentum" is wrong.

It is widely known (research by Novy-Marx, AQR, ...) that the quality factor (long/short) is a source of equity outperformance, and it might work long-only as well. As far as I know, low D/E (leverage) is not the favored choice for the quality factor, and if it is used, then only as part of a larger quality composite. Norges Bank has a study showing that. Most often, profitability ratios are used for quality.

By the way, high D/E in combination with small size and value is used by Rasmussen to replicate a cheaper, public version of private equity investing.

@rogoman

yes, this is what I read a lot.

BUT, interestingly, other publications state the opposite.
The book from Richard Bernstein was quite interesting on that point.
It's a bit old, but they show that low quality can be a source of alpha, as long as borrowing costs are low.
It always depends on which market cycle we are in.
(The author was the Head of Quantitative Equity at Merrill Lynch.)

As an example, Tesla is not making any money at the moment, but the return for its shareholders since 2010 has not been bad.

I got the book because I wanted to understand why my small/value/momentum strategy did not work
(it performed great in backtests some years ago but has not worked in recent years). So far I have liked the book; unfortunately it is only available in analog format, which is difficult to read... I like to read during my commute.
https://www.amazon.com/Style-Investing-Unique-Insight-Management/dp/047103570X

@ carsten

Factor strategies don't work all the time; they work on average. That might be a reason why they still work. If they had worked all the time, they would have been arbitraged away and we would not be talking about these factors/styles (whatever you want to call them) anymore.

Also: I don't know what your small/value/momentum strategy looks like, but if you bought a couple of stocks with these characteristics, don't be surprised if they don't work at all. Again, this is statistics: this stuff works on average, not all the time and not for every stock or every couple of stocks.

I would recommend this book as a starter : https://www.amazon.com/Your-Complete-Guide-Factor-Based-Investing/dp/0692783652/ref=sr_1_1?crid=3OPUCZY5UJ5JQ&keywords=factorbased+investing&qid=1577557634&sprefix=factor+based+%2Caps%2C236&sr=8-1

I'm intrigued by this strategy and started messing around with it a bit in my spare time. Nothing to really improve on, but in the version I am testing, using the Q1500US universe performed better over the long run. Still can't get close to @Vladimir's 40.1% annual return, but closer than it was. (It still seems to have a leverage issue, as cash drifts to $-65,718.98 on 2018-08-22 at one point even though leverage is at 1 [context.portfolio.cash]; still looking into that... I think it is because some orders failed to fill, but I'm not sure.)

2018-07-20 13:00 WARN Your order for -6189 shares of OEC has been partially filled. 4165 shares were successfully sold. 2024 shares were not filled by the end of day and were canceled.

But I would think order_optimal_portfolio should realize the sale didn't go through and not over-order... anyone have insight into this?

EDIT: The negative cash is for sure from the unfilled orders - if I use set_slippage(slippage.FixedSlippage(spread = 0.0)) then it only goes to about $-300 starting with $10,000 which is definitely a weak point of only buying into 5 stocks - once you have significant capital you'll have trouble filling the orders in real life.

I'm interested in looking at optimizing the code with something like this: https://ntguardian.wordpress.com/2017/06/12/getting-started-with-backtrader/ (look at the Optimization section). Does anyone know if that is possible in Quantopian? It would be great to test all the variables and come up with the best outcome (instead of manually changing the variables and backtesting).

Here's a notebook based on @Vladimir notebook for some more stats

@Vladimir.
Sorry for being a novice about this, but did you publish the modified Algorithm to generate the notebook with "Annual return 40.1%"?

Highest I can get so far is 18,915.42% - here's the code for anyone else wanting to tinker with it.

Well, I got there...32,051.054%
I think in a different way than @Vladimir, but it's still an interesting algorithm. It still has an issue with leverage; at one point cash went to -$16,016, but given that the portfolio value was at $2.4 million by that point (starting with $10k in 2003) I don't think it affected things too much, though I could be wrong. The trading day really does seem to have a large impact, as @Peter Harrington showed.

So while I'm sure this isn't real life it still is a very interesting algorithm, especially because of how simple it is. Thanks for posting it @Chris Cain and for all those who worked on it as well!

@Nathan,
Would you mind sharing your code for the 32,051.054% return?
TIA!

One more way to evaluate a company's quality: the Piotroski Score. https://www.investopedia.com/terms/p/piotroski-score.asp

My latest article: Financing Your Stock Trading Strategy is about the trading strategy discussed here. I built it based on Dan Whitnable's version as presented above.

I view the 12 simulations presented in my article as an exploration phase of the limits, strengths, and weaknesses of that particular trading strategy. The principles and trading methods used could be applied to many other strategies.

It is not a one solution does it all. I always look at trading strategies as a matter of choices, trading methods, preferences, and risk averseness. But, that does not mean that no one can design a trading strategy that can outperform market averages. Trading is like any other business, there is always a cost associated with it and there are always risks to be taken.

I wanted to share this new article, not only for what it does, but for what it conveys as well. There is math behind a strategy's structure. It is expressed concisely in the progression of the payoff matrix equation presented which served as backdrop for these simulations.

There are some screenshots to decorate the article. Here are some of its headers:

  • The Basic Portfolio Equation
  • Reengineering For More
  • Financing Your Trading Strategy
  • Some Fundamentals Might Not Do What You Think
  • Scalable Strategy
  • Overriding The Ranking System
  • Increasing Leverage
  • Testing Methods
  • An Extended Strategy Payoff Matrix
  • Stopping Times

Hope it can help some by presenting a different perspective.

Article link: https://alphapowertrading.com/index.php/2-uncategorised/354-financing-your-stock-trading-strategy

Awesome result, @Nathan. Thank you for sharing, and please continue to motivate members to push the limits of the algo. I think you have achieved the highest and most interesting result so far. The Sharpe ratio of 1.48 is great!!!

Regarding the selection of companies based upon high debt and high ROIC: I guess the reason why this strategy shows such strong returns, especially in the last few years, is the influx of cheap money caused by the rate cuts. My opinion is that this strategy will probably only work this well in the current type of market environment. The backtest hints at this, since the big returns only started to come in after around 2009. However, this is probably one of the better strategies for exploiting the availability of cheap money without taking excess risk. I'm really intrigued by this approach!

@Kristof, a lot of the profit generated in that strategy comes from simply holding, on average, for a duration of about 5 months as if it was more like a participation prize. You get the profit because the portfolio had full exposure and the average 5-month position was positive in a generally up market.

There is some downside protection built in the selection process and the structure of the trading procedures on top of its declared move to safety bond switcher.

It becomes part of the reason why the strategy can benefit over the long term and why applying some leverage can benefit the bottom line as long as it is all paid for by the trading strategy itself which was demonstrated in my versions of this program in my previous posts.

This implies that most of the benefits of this trading strategy come from its structure since all the trades are made by the periodic rebalancing which comes on your preset schedule and not necessarily on the market's high or low price points. In fact, the program does not know the state of the market when rebalancing occurs. Its binary state is determined by an arbitrary and self-defined notion of a trend.

Nonetheless, I see alpha generation that can easily outpace many other trading methods. The 12 simulations presented in my last post were done using a slightly different set of stocks for each one of those tests. And still, the strategy successively outperformed in each of those simulations. One of my next steps will be to change the stock selection process altogether and see how the structure behaves.

For these reasons, I have a different perspective on the following:

“The backtest hints at this, since the big returns only started to come in after around 2009.”

The excess return was available almost from the start and for the duration of these 16.7-year simulations. In all those simulations, most of the equity came from the last few years simply because it is in a compounding return kind of game, and all it shows is this power of compounding over time.

However, the fundamentals on which the stock selection is made could benefit from better criteria. In my simulations, I downgraded the rankings of all fundamentals except one (which I will be attacking soon), and still obtained impressive results.

By assigning True to the Do_More option, the performance increases considerably. That option is part of the trading methodology. It has nothing to do with fundamentals, but everything to do with how you intend to play the game, or how aggressively you accept taking on incremental risks. There, as in any other strategy, it becomes a matter of choice and preferences.

I am currently exploring how far this strategy can go. I've increased the number of stocks to trade to 300 over the same 16.7-year trading interval. The purpose is to reduce the impact of each bet. Each stock, while in the portfolio, will account for 1/300 of equity (0.33%). If a stock ever goes bankrupt, all the damage it could do will not exceed 0.33% of the portfolio. Due to the stock selection method, this trading strategy, even limited to the top 300 ranked stocks at any one period, will nonetheless trade over 2,400 different stocks.

In its current state, the strategy makes over 100,000 trades. I could double that easily by setting another program option (not shown) to True. The number of trades, like the number of stocks traded, is high enough to show, on average, statistical significance even after paying leveraging fees, which are not negligible but are still part of the expense of doing more business.

I didn't change much in the algorithm, except that I added another quality filter, ev_to_ebitda (this reflects the fair market value of a company and allows comparison to other companies, as it is capital-structure neutral), which I remember seeing in another algorithm. This combines the factors of Value and Quality together (somewhat).

I think a regime switcher (Hidden Markov Model) that cycles between different styles (i.e. Low Risk, Value, Volatility, Momentum, Quality, and Small Cap) would be interesting, or even combining them with each other (i.e. Quality-Momentum (pretty much this thread), Value-Momentum (https://www.quantopian.com/posts/value-momentum-and-trend)).

Hi @Viridian,

illustrate how if you approach portfolio construction as the creation
of a todays_weights dictionary, then it becomes very easy to take it
one step further and control the holding period via rolling
portfolios. Here I show how to do 20 rolling portfolios that are each
held for 20-days, thereby maintaining the same turnover rate as a
monthly-rebalance while diversifying away from day-of-the-month
overfit noise risk.

Thanks for sharing, very useful!

One question, when you moved the stock selection into Pipeline with progressive masks and used the Returns Factor, why the .log1p()?

Thanks in advance.

why the .log1p()?

Because you can't simply add up percentage changes. If something, for example, goes up 60% ($100 + ($100 * 0.6) = $160) and then down 40% ($160 - ($160 * 0.4) = $96), it is not the same as being up 20% ($100 + ($100 * 0.2) = $120) overall.

I think converting to log returns solves this problem, because log returns are additive across periods.
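
A quick numeric illustration of why log returns can be summed where simple returns cannot (plain numpy, nothing algorithm-specific):

    import numpy as np

    prices = np.array([100.0, 160.0, 96.0])       # +60% then -40%
    simple = prices[1:] / prices[:-1] - 1         # [ 0.6, -0.4]
    log_r = np.log1p(simple)                      # log(1 + r) for each period

    print(simple.sum())                           # 0.2  -> wrongly suggests +20% overall
    print(np.expm1(log_r.sum()))                  # -0.04 -> the true -4% overall return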

I have been trying to modify this algorithm to have a dynamic TARGET_SECURITIES based on the context.portfolio.portfolio_value. For example:

TARGET_SECURITIES = math.floor(context.portfolio.portfolio_value / 100000.00)

I have no idea if this is a good or bad thing to do but I can't seem to make it work because TARGET_SECURITIES is used in the Pipeline code, which appears to only be run once? Anyone know of a way to code this?
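
One possible way around that, sketched under the assumption that the pipeline exposes a 'momentum' column for the quality-screened stocks: leave the number of holdings out of the pipeline entirely and apply it at rebalance time, where the portfolio value is known. (order_target_percent and the algo alias for quantopian.algorithm are the same ones used elsewhere in this thread.)

    import math
    import quantopian.algorithm as algo

    def rebalance(context, data):
        # Quality-screened stocks with a 'momentum' column (assumed column name).
        df = algo.pipeline_output('pipeline')

        # Scale the number of holdings with account size, e.g. one slot
        # per $100k of equity, never fewer than 5.
        n = max(5, int(math.floor(context.portfolio.portfolio_value / 100000.0)))

        picks = df.sort_values('momentum', ascending=False).head(n).index
        for stock in picks:
            order_target_percent(stock, 1.0 / n)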

Hi @Vladimir (and other contributors that have been playing with this great contribution from @Chris),

I've been playing with your optimized version of Dan Whitnable code. I've used that one as in terms of code structure is super clear.

I'm far from achieving anything close to your results, and I would just like to know whether I'm missing something big or it's just a matter of parameter optimization.
I have a doubt about a couple of parameters you mentioned; it might be related to that (among other things, I guess):

I'm using:
date_rules.month_end(days_offset = 7)
time_rules.market_open()
QTU = Q3000US()
MKT = symbol('SPY')
BONDS = symbols('TLT', 'IEF')
MOM = 126; Momentum lookback
N = 5; Number of stocks to finally trade
N_Q = 60; Number of stocks filtered by the quality factor
MA_F = 10; Fast moving average
MA_S = 100; Slow moving average
LEV = 1.0; Leverage passed as constrain to the optimizer

Regarding my doubts, what do these parameters mean?
MIN = 1; Is this related to bonds? How?
EXCL = 2; Is this related to the momentum skip days? If I use 2 instead of 10 performance drops significantly.

I'm attaching the backtest with the code and the notebook.

Thanks in advance!

And here is the notebook with my results...

Thomas Wiecki recently posted about how linearly combining factors produces no additional alpha over the individual factors, which got me to thinking about the hypothesis behind this strategy. Here the authors have used progressive filtering/masking to implement a factor combination hypothesis, and it appears to work -- momentum (which is mostly useless on its own) does seem to significantly improve the quality factor.

I realize that there was no intention here of satisfying the Quantopian allocation criteria, but I started to wonder how one would go about implementing a combination hypothesis such as "quality companies in an uptrend" in a fashion that ranks the entire QTU universe of stocks. As Thomas explains, quality.zscore + momentum.zscore obviously does not do it (that only gives you the average of the two factors, not the additive quality of them working in symbiosis).

Any ideas?

(My hunch is that in this particular case there is too much noise once you stray from the extreme factor values, so it won't work.)

Hi @Viridian,

Definitely don't have an answer to your question, but I could add that in this book, to combine 2 factors and test a strategy using 5 quintiles, what the author does is:
1- It ranks all companies in the backtest universe by "Main Factor", from lowest to highest, in case we want the lowest values in the top quintile.
2- It selects the top 20% of this ranked list, i.e. the 20% of companies with the lowest "Main Factor" values. If we start with 2,000 companies, this step should select about 400 companies.
3- It then ranks the 400 companies that passed the "Main Factor" test by "Secondary Factor", again from lowest to highest, supposing lowest is better.
4- It selects the top 20% of this ranked list, those with the lowest "Secondary Factor" values. If we started with 400 companies in step 2, we should end up with about 80 companies at the end of step 4.
5- Steps 1 to 4 are repeated until we have formed portfolios for the top quintile for each month (or whatever time frame we are interested in) to be tested.

This emphasizes the "Main Factor". Somehow it reminded me of what is being done in this strategy, the progressive filtering you mentioned (a quick pandas sketch of those steps follows below).
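
For what it's worth, a minimal pandas sketch of those five steps (the column names main_factor and secondary_factor are placeholders, and lower is assumed to be better for both):

    import pandas as pd

    def two_stage_top_quintile(df):
        # Steps 1-2: keep the 20% of the universe with the lowest main factor.
        cut_1 = df['main_factor'].quantile(0.20)
        stage_1 = df[df['main_factor'] <= cut_1]

        # Steps 3-4: within those, keep the 20% with the lowest secondary factor.
        cut_2 = stage_1['secondary_factor'].quantile(0.20)
        return stage_1[stage_1['secondary_factor'] <= cut_2]

    # Step 5 would simply call this once per rebalance period.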

Hey All,

This is a fantastic discussion and a good topic brought up by Viridian.

What we are discussing here, in essence, is what is sometimes called “sequential” vs “non-sequential” ranking/filtering methodologies when it comes to quant factors.

Non-sequential basically means you use all stocks in a universe and rank them (using some methodology) in all the factors you want to use. An example would be taking the Q3000 universe then ranking each stock by a quality factor and a momentum factor then taking the top N based on the combined rankings.

Sequential is what we are doing here, and what the original algo did as well. Sequential means you rank by one or more factors first, filter the universe using that, then filter it again using the next factor.

In the original algo, we first filtered by quality then by momentum.

Some interesting things to note about this method. First of all, the first factor you filter by will have a larger impact on the portfolio. As such, I view the "Quality Companies in an Uptrend" strategy as mostly a quality strategy. It then uses momentum as a secondary factor, which helps performance. Time-series momentum (trend following) is also applied in our trend-following regime rule; this is mostly to manage risk.

In my research, I have had much better success with sequential factor strategies as opposed to non-sequential.

Sequential strategies are also useful when you are trying to create a portfolio of strategies to take advantage of the diversification this offers. Again, keep in mind that the first factor you filter by in a sequential strategy will have the most impact on the strategy.

For example, maybe you have one sequential strategy that uses Value as the first factor and another sequential strategy that uses Quality as the first factor. It stands to reason that these should have a lower correlation to each other than two sequential strategies that both use Quality as the first factor. My research has shown that to be true. Combining them into a portfolio can then lower risk and lead to better risk-adjusted returns (Sharpe).

Great discussion here, thanks again to all that have investigated this algo, made changes, cleaned up code and provided thought leadership.

Christopher Cain, CMT
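
To make the distinction concrete, here is a rough pipeline-style sketch of the two approaches, reusing the quality and momentum factor names from earlier code in this thread:

    # Sequential: filter by quality first, then rank the survivors by momentum.
    top_quality = quality.top(60, mask=universe)        # quality drives the result
    sequential = momentum.top(5, mask=top_quality)      # momentum refines it

    # Non-sequential: combine the ranks and take the best composite scores.
    combined = quality.rank(mask=universe) + momentum.rank(mask=universe)
    non_sequential = combined.top(5, mask=universe)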

@Marc,

Try running your code with these parameters:

QTU = Q3000US();  
MKT = symbol('SPY');  
BONDS = symbols('TLT', 'IEF');

MOM = 126;       # Momentum lookback  
EXCL = 2;        # Momentum skip days  
N_Q = 60;        # Number quality stocks  
N = 5;           # Number to buy  
MA_F = 10;       # Fast Moving Average  
MA_S = 100;      # Slow Moving Average  
LEV = 1.0;       # Target Leverage  
MIN = 1;         # Minute to start trading

Hi @Vladimir,

Thanks for the clarification. The difference is impressive, especially when using just long-term debt to equity as the quality factor.

Very interesting conversation. And just want to reiterate what @Zenothestoic said a while back:

"Incidentally it is good to see some [more] ideas coming through which
do not follow the stifling criteria for the Quantopian competitions.
It makes for a much more interesting forum. I was getting very [fed] up
with the 'neutral everything' approach."

Haha. It is a bummer that Quantopian has shut off all live trading and even paper trading. I've tried porting this to QuantConnect, but have had little success in mirroring the results. If anyone is interested in exploring that, shoot me a message.

Hi @Nathan,

Have you checked pylivetrader? It's a port of Zipline for Alpaca. I was messing around with it a while ago for paper trading. However, as far as I know, there were some core version upgrades to the Alpaca API and I'm not sure if the port is still supported.

If you are a US citizen you can use Alpaca for proper trading too.

@Marc Thanks, that is interesting. I had seen it but hadn't looked too in depth. I'll try and see if I can come up with something that works.

@Marc Looks like there is no access to the MorningStar Fundamentals?
https://github.com/alpacahq/user-docs/blob/master/content/alpaca-works-with/quantopian-to-pipeline-live.md

The Quantopian platform allows you to retrieve various proprietary
data sources through pipeline, including Morningstar fundamentals.
Previously, IEX was used by pipeline-live to supply equivalents to
these, but recent changes to the IEX API have made this less possible
for most use cases. The alternative at the moment is the Polygon
dataset, which is available to users with funded Alpaca brokerage
accounts and direct subscribers of Polygon's data feed. If you want to
get started with Polygon fundamentals, please see the repository's
readme file for more info on what Polygon information is currently
available through pipeline-live.

Did you find any way around that?

Hi @Nathan, I knew there were several changes since I used it but was not aware of this one.
Apparently with Alpaca API v2 you can access fundamental data but haven't checked it.

Durability And Scalability

Two of the most important traits of any stock trading strategy should be: durability and scalability. The first so that the strategy does not blow up in your face during the entire trading interval, and the second so that a portfolio can grow big.

A stock trading strategy should operate in a compounding return environment. The objective is to obtain the highest possible long-term CAGR within your own set of constraints.

The portfolio's payoff matrix equation is quite simple:

\(F(t) = F_0 + \sum (H \cdot \Delta P) = E[F_0 \cdot (1 + r_m)^t]\)

where \(r_m\) is the average expected market return and where the final outcome will be shaped by \(F_0\) and \(t\). One variable says what you started with while the other how long you managed it. It does not say what strategy you took to get there, only that you needed one. It could be about anything as long as you participated in the game (H ≠ 0).

This is a crazy concept: you can win, IF you play. You have no control over the price matrix P, but you do have total control over H, the trading strategy itself. You can buy with your available funds any tradable stock at any time at almost any price for whatever reason in almost any quantity you want short of buying the whole company.

You already know that as time increases (20+ years), \(r_m\) will tend to be positive with an asymptotic probability approaching 1.00. The perfect argument for: you play, you win, and in all probability, you get the expected average market return over the period just for your full participation in the game.

The Problem Comes When You Want To Have More!

You might need to reengineer your trading strategy. For example, the same strategy as I last illustrated in this thread was put to the test with the following initial state: $50 million as initial capital, same time interval (16.7 years), and 400 stocks.

The first thing I want to see when doing a backtest analysis is the output of the round_trips=True section which tells the number of trades and the average net profit per trade. The reason is simple: the payoff matrix equation also has for expression:

\(F(t) = F_0 + \sum (H \cdot \Delta P) = F_0 + n \cdot \bar{x} = F_0 \cdot (1 + r_m)^t\)

where \(n\) is the number of trades and \(\bar{x}\) is the average net profit per trade. Therefore, these numbers become the most important numbers of a trading strategy. Whatever you can do to increase those two numbers will have an impact on your overall performance as long as \(\bar{x}\) is positive. Because \(n\) cannot be negative, a positive \(\bar{x}\) is a sufficient condition to have a positive rate of return.
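
For clarity, rearranging the same identity gives the realized growth rate directly in terms of those two numbers:

\(r_m = \left( \dfrac{F_0 + n \cdot \bar{x}}{F_0} \right)^{1/t} - 1\)

so with \(\bar{x} > 0\) the base of the exponent exceeds 1 and the compounded rate is positive for any \(n > 0\).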

The following chart is part of the round_trips=True option of the backtest analysis.

Backtest Section: round_trips=True

The numbers are impressive. For one, they show that 400 stocks can indeed generate a lot of trades over the years. The average net profit per trade increased with time, ending a lot higher than where it started. That is understandable: the average net profit per trade is on an exponential curve due to the inherent structure of the trading strategy, with the last few years having the most impact.

The number of stocks to trade might have been limited to 400 at any one time (or 0.25% of equity), but due to the nature of the selection process, some 2,698 different stocks were traded over those 16.7 years with an average holding period of 126 days (about 5.7 months' time).

Instead of using bonds, I made the portfolio go short in periods of market turmoil. The strategy still managed, on average, to generate profits on those shorts.

The gross leverage came in at 1.50. However, the net liquidation value which is a rough estimate net of leveraging costs was more than enough of a reward to warrant going for it.

Why can such results be achievable? The reason is simple: compounding.

Every dollar of profit made is being compounded repeatedly, again and again. With the skills you brought to the game, you changed the above equation into:

\(F(t) = F_0 + \sum (H_a \cdot \Delta P) = F_0 + n \cdot \bar{x} = F_0 \cdot (1 + r_m + \alpha_a)^t\)

And it is the alpha you added to the game that is making such a difference, especially since it is also compounding over the entire time interval.

If you do not push your trading strategy, how will you ever know its limits? Those limits are the ones you do not want to exceed. And most of those limits might be due to your preferences and your aversion to losses. If you want to do more than the other guy, then you will have to do more (a recurring mantra in my books).

Even if the numbers are big, it is still not the limit. For instance, increasing gross leverage by 5% in the above strategy would result in a gross leverage of 1.57.

Evidently, there would be progressively higher leveraging fees to pay. However, it would result in higher overall profits (adding about 6B more to net profits compared to the 0.5B more in leveraging fees). Max drawdown increased from -29.05% to -30.21%, while volatility went from 0.27 to 0.28. On the Sharpe ratio front, it stayed the same at 1.46 while the beta went from 0.44 to 0.46. The point being that if you could tolerate a drawdown of -29.05% why would you not stand for a -30.21% drawdown when you have no means to determine with any accuracy how far down the next drawdown will be?

The above numbers analyzing the state of the portfolio were marginal incremental moves except for the added overall profits. All of it is extracted by the same trading strategy where it was “requested” to do more. It incurred higher expenses, for sure, but it also delivered much higher profits (12 times more than the added costs).

Now, the strategy does face some problems in need of solutions. One is when going short: it does so on the stocks that the strategy considers the best prospects for profit. That should be changed to a better short-selection process. Another problem is the bet size. It is on an exponential curve and at some point will trade huge numbers of shares, even though each stock only gets 0.25% of equity.

Therefore, as the strategy progresses in time, there will be a need for a procedure for scaling in and out of positions. I would prefer one of the sigmoid type. This has not yet been designed into the strategy, but I do see it will be needed; not so much in the beginning, but it will get there. So, I should plan for that too and have the strategy take care of those two potential problems.

However, there is more than ample time (years) to solve the second one.

Understandably, no one should be surprised if I am not providing the code.

@Guy,

Just a friendly reminder of the below post from Jamie McCorriston. I believe the purpose of this post was for collaboration - not for someone to write a long monologue on how they improved the strategy without actually sharing the code.

@Guy Fleury: Multiple participants in this thread have expressed
frustration with the sharing of screenshots instead of attaching a
backtest. Please refrain from sharing screenshots built on top of the
shared work in this thread. You are entitled to keep your work
private, so if you don't want to share, that's fine. But please don't
share screenshots in this thread as it seems the intent of the thread
is to collaborate on improving the algorithm.

@Joakim (and @Guy),

I'm no mathematician, so the stuff @Guy is posting doesn't make much sense to me. But I do believe the essence of what he is saying is that this strategy can be scaled to use more than 5 stocks and still give amazing returns if someone is willing to use leverage. And while he is talking above me (and maybe most on here), that was interesting information. Liquidity in 5 stocks is going to be an issue in real life, so by adding leverage of 1.5 with 50 stocks, you can still get good returns if, for example, you start with $5k instead of $10k (and keep the other $5k in the bank to cover yourself against a margin call). I'm not really familiar with using leverage (so I might not be understanding all this correctly). And that way, with 50 stocks at 1.5 leverage, I can backtest a return of 30,000%, turning $5k into $1.5mil (2003-2019), versus about a 5,000% return on $10k with 50 stocks, resulting in a portfolio of $500k.
That is interesting.

Also, the point about dealing with bear markets in a more intelligent way is valid - that would increase the return for this algorithm. So even though he didn't provide code, he did provide an idea that we could add to the strategy and benefit from.

I personally don't like leverage, but maybe others do. So I wouldn't say @Guy hasn't contributed. True, no code (except slightly modified from others - but really that's all I posted as well), but he does have ideas.

Some think that this strategy is operating on ranked-fundamentals. Well, not so much. It is mostly playing market noise.

The stock selection process is just there to pick 400 stocks. Changing the ranking method will give quite different results, as was illustrated in my last article. That is understandable, especially in the scheduled periodic rebalance procedures where a weight's 7th decimal can be the deciding factor. When one of the weights changes, it prompts all the other weights to “readjust”, as if in a domino effect. The rebalance occurs not because the fundamentals changed (their values change only about every 3 months), but because the 7th decimal of one of the weights changed. And that becomes playing on market noise a lot more than playing on ranked fundamentals.

When designing a trading strategy we should look at where it is ultimately going, especially in a compounding environment where time might turn out to be the most important ingredient of all. It is your trading strategy H that is running the show, so do make the most of it.

Such a trading strategy is designed to accommodate large institutional sized players and the very rich to make them even richer. Nonetheless, looking at the equations presented and my latest articles, the strategy can be scaled down just as it was easily scaled up as is illustrated in the last shown equation in my article.

You want to find out more like equations, explanations and charts, follow the link below:

https://alphapowertrading.com/index.php/2-uncategorised/355-durability-and-scalability

Okay I am pulling my hair out, can someone tell me the difference between the following:

def make_pipeline():  
    top_quality = quality.top(N_Q, mask=universe)  
    top_quality_momentum = momentum.top(N, mask=top_quality)  
    pipe = Pipeline(  
        columns={  
            'trend_up': trend_up,  
            'screen': top_quality_momentum  
        },  
        screen=top_quality_momentum  
    )  
    return pipe

def rebalance(context, data):  
    df = algo.pipeline_output('pipeline')  
    current_holdings = context.portfolio.positions  
    rule = 'screen & (trend_up or (not trend_up & index in @current_holdings))'  
    stocks_to_hold = df.query(rule).index  

VS

def make_pipeline():  
    top_quality = quality.top(N_Q, mask=universe)  
    pipe = Pipeline(  
        columns={  
            'trend_up': trend_up,  
            'score': momentum  
        },  
        screen=top_quality  
    )  
    return pipe

def rebalance(context, data):  
    df = algo.pipeline_output('pipeline')  
    current_holdings = context.portfolio.positions  
    rule = 'trend_up or (not trend_up & index in @current_holdings)'  
    stocks_to_hold = df.sort_values('score', ascending=False).head(N).query(rule).index  

I have been trying to move the number of stocks to choose N into rebalance so that it can be calculated (scaled) on the fly, and I can't do this in make_pipeline. I would think the two are equivalent, and when I test multiple dates in Notebook they are (IE same 5 stocks, but different order). However, during backtest this makes a huge difference, and I can't figure out why.

@Jacob Champlin -- I can't spot what you're doing wrong. It looks correct. If you scroll up in this thread you can see a working example where I did precisely what you're trying to do (move all the filtering logic into pipeline), and it worked correctly.

@Viridian and Everyone

I am almost ashamed to admit this. When I copied the code into a new algorithm, I forgot to change the Initial Capital to 10k. I can't tell you how long I tried to debug the difference. Sorry if I wasted anyone's time.

Hi @Marc and @Vladimir,

Thanks so much for your contributions to this algo, and to @Chris Cain for originally posting!

I have been backtesting the code from @Marc's post with the parameters @Vladimir posted and I can't get it to perform the way you guys did. Is it something in the parameters, or am I missing something else?

I have only modified the quality factor to include just "long-term debt to equity":

#quality = roic + ltd_to_eq + value  
quality = ltd_to_eq  

Any help would be much appreciated

@Donald, have a look

If you use MaximizeAlpha for ordering you can push it a little higher.

@Nathan

I find it strange that target_weights are passed into MaximizeAlpha; aren't these just equal-weighted?

stock_weight = 1.0 / context.TARGET_SECURITIES  

I would think that the momentum score would be the alpha? Not sure how this helps even though it looks like it does.

Because if you use MaximizeAlpha without any position concentration constraints, you end up holding only one stock at a time.

@Viridian

I get that you want position concentration constraints. My issue is that I have never seen MaximizeAlpha used like this. I have always seen it used like:

opt.MaximizeAlpha(df["momentum"])  

The documentation states: "Ideally, alphas should contain coefficients such that alphas[asset] is proportional to the expected return of asset for the time horizon over which the target portfolio will be held." Equal weighting doesn't give you any information about which stocks have a higher/lower expected return. I would expect TargetWeights and MaximizeAlpha to be equivalent in this case. I guess I don't entirely know what MaximizeAlpha is doing under the hood; where is it finding alpha?

I would avoid using MaximizeAlpha. I believe Quantopian's own conclusion is that the feature was ill conceived.

TargetWeights tries to order whatever weights you feed it.

MaximizeAlpha doesn't find alpha -- you feed it the alpha. It concentrates as much weight as the constraints allow in the positions with the highest alpha signal values. Equal weighting is just a consequence of the constraints. For example, if you feed it 1000 ranked stocks and use a 10% position concentration limit, it will construct a portfolio from the 10 highest-ranked of those 1000. Because the portfolio consists of only the 10 best instead of all 1000 stocks, the returns will be higher, at the sacrifice of diversification.
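
As a rough illustration of the difference (a sketch, not the exact code used above; it assumes momentum_scores is a pandas Series of alpha values indexed by asset, and the two calls are alternatives rather than meant to run together):

    import quantopian.algorithm as algo
    import quantopian.optimize as opt

    # TargetWeights: you decide the weights; the optimizer just tries to hit them.
    equal_weights = momentum_scores * 0.0 + 1.0 / 20
    algo.order_optimal_portfolio(
        objective=opt.TargetWeights(equal_weights),
        constraints=[],
    )

    # MaximizeAlpha: you supply relative attractiveness; the optimizer piles weight
    # onto the highest values until the constraints stop it.
    algo.order_optimal_portfolio(
        objective=opt.MaximizeAlpha(momentum_scores),
        constraints=[
            opt.MaxGrossExposure(1.0),
            opt.PositionConcentration.with_equal_bounds(min=0.0, max=0.05),  # 5% cap -> ~20 names
        ],
    )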

Need help on understanding the pipeline filters. Looking latest version of the code:

    ltd_to_eq = ms.long_term_debt_equity_ratio.latest.rank(mask=universe) #, mask=universe)  
...
    quality = ltd_to_eq  
...
    top_quality = quality.top(context.TOP_ROE_QTY, mask=universe)  

So the algo is picking companies with the largest long_term_debt_equity_ratio values; doesn't that mean the most highly leveraged companies? Those are the riskiest, not the highest-quality, companies. Am I missing something obvious?

That is right, Lei. The algo is selecting companies with high leverage. One reason for the performance could be that the leverage filter is combined with the momentum filter. Although there is high leverage, the momentum filter makes sure only companies in an uptrend are selected, and as you know, leverage multiplies returns. Hence only those companies are selected which are able to apply leverage profitably.

Another reason could be (and this is just my theory, I could be completely wrong) that money managers who cannot take on leverage because they are limited by their IPS (Investment Policy Statement) tend to take leverage indirectly, i.e. by investing in companies with high leverage. When big money flows into a stock, it tends to increase in value.

Lastly, the highest performance comes during the past few years. If you add the above two factors in a bull market, you end up with something like this algo.

Again, these are just my reasoning and I could be wrong.

Thank you for the insight, Nadeem, that makes sense! So the reason for this algo's success is that it effectively increases leverage during SPY uptrends while moving to TLT/IEF during SPY downtrends.

@Lei

This algo has certainly morphed since I first posted it.

The original design was to buy companies with high quality - commonly known as the quality factor. The quality factor typically encompasses metrics such as high profitability (ROE, ROIC, Gross Profitability), LOW leverage and stability of earnings.

As an example, I simply used ROE to measure quality in the original algo.

This algo was NOT intended to buy highly leveraged companies.

In fact, in many ways that is the complete opposite of the original design.

Chris
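
For anyone who wants to steer the later variants back toward that original intent, one small change (a sketch only, assuming the Morningstar fields and the ms alias used in the code quoted above; the roe field name is an assumption) is to rank leverage in descending order so that low-debt names score highest:

    # Low leverage should score high: rank long-term debt/equity descending,
    # so the least-indebted companies get the highest rank numbers.
    low_leverage = ms.long_term_debt_equity_ratio.latest.rank(ascending=False, mask=universe)
    profitability = ms.roe.latest.rank(mask=universe)    # 'roe' field name assumed

    quality = low_leverage + profitability                # simple composite, higher = better
    top_quality = quality.top(context.TOP_ROE_QTY, mask=universe)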

Curious metamorphosis... and interesting discussion.

@Joakim, you added ltd_to_eq for the first time but in ascending order, which makes more sense to me if we think about plain quality. What do you think about the mutation and the mentioned reasonings?

@Vladimir, you changed the ascending order of the ltd_to_eq from the @Dan Whitnable version and removed the other quality factors. I'm curious, is the reasoning that has been explained above the one you had in mind?

Thanks in advance.

@Chris, in your first post, you requested:

“We’d love to see what you guys come up with.”

Well, the participants in this thread did bring a diverse set of modifications to your trading script and showed different outcomes.

It is understandable: changing the stock selection process, the timing of trades, the scheduled rebalancing time, the ranking method, the weighting mechanism, or the trading method, all within the set limits of available resources, would have its impact on the strategy's payoff matrix: \(F(t) = F_0 + \sum (H \cdot \Delta P)\).

Moreover, any of those changes would take a different opportunity set. Each day, the market would present a slightly different list of stocks based on the selected criteria. And some of those could be picked only if there was available cash to make the trades at that time. Rebalancing each month would make that set of selectable stocks that much more different.

The strategy made sure that whatever stock entered the portfolio, its weight was already determined to be the same as every other trade taken. Your program made the bet size 5% of ongoing equity. Each stock entering the portfolio would be starting on the same basis. You had a lot more trade opportunities available than the few that were dropping off the list of selectables.

The slightest change would cascade through the strategy, technically, making it a slightly (or widely) different set of trades. And hence, quite different results as illustrated by the number of variations displayed.

So, yes, the strategy was transformed in many ways. Either enhancing it or even redesigning it to do more than the original setup. There is nothing wrong with that. Nonetheless, it did answer your request.

Can we, as a community, transform your trading strategy to do more? Well, the answer is: YES. And from the shown algos, you can now modify your own program to include whatever modifications you liked and thereby further enhance the outcome of your trading script H_a.

Hi Everyone,

I have recently started learning Python, and so far I have completed an online course/assessed the introductory content on Quantopian. I am using this algo as a case study to learn from, as I am a big believer of 'learning by doing'. At present I have copied the source code into a word document, and I am making comments/annotations to try and figure everything out (and get my bearings).

This is quite a big ask, but I would find it invaluable if someone would be kind enough to take an hour out of their week (possibly a Sunday) to video call me and run me through it (I am also happy to pay you for the help!). Or, if you are based in London, I would happily also buy you a coffee.

If anyone is able to help with this, I would really appreciate it. You can reach me on whatsapp via +44(0)7305 318 323.

Thanks in advance :)

For anyone interested in the more exotic versions of this algorithm, promising extravagant returns by pursuing highly counter intuitive fundamental filters, I highly recommend back testing the system outside of the Quantopian IDE.

I have spent some weeks learning Quantconnect, which offers similar data and an online IDE based in Python and various other programming languages. To say that the learning curve has been uphill would be an absurd understatement, but I followed Sisyphus both up the hill and down the other side.

The results to be obtained bear little resemblance to the fantastic returns promised on this thread.

Or not thus far in any event.

Perhaps by further torturing the data or the software I will be able to improve the theoretical backtested results, but this may be an occasion where the data and the software simply lie, where backtesting can produce the dangerous illusion of fabulous performance that exists only in the mind of the programmer.

The difference lies mainly in the universe. Here we have the Quantopian 3000 but little indication (?) as to how that is arrived at. In Quantconnect the universe selection is more or less DIY and selection of the more louche stocks cheers up the performance.

It has been a very valuable experience indeed.

Mr V Hawk
Thank you. This one puzzles me: "capped at 30% of equities allocated to any single sector". Not sure I see that one. In any event, I have ratcheted up the CAGR on Quantconnect to very reasonable levels but so far not to the dizzy heights we both reached here.

Frankly, good enough though and I think curve fitting played its part here - in my case particularly as regards the date of re-allocation.

Hello everyone,
I am new to this world of algorithms, and this work has aroused a lot of interest.
Is it possible to adapt the algorithm to rebalance quarterly, every four months, semi-annually or annually, instead of monthly?
Thank you so much.
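
Not from the original algo, but as one possible answer: a minimal sketch of how the monthly schedule could be stretched to every N months in the Quantopian API, using schedule_function plus a simple month counter (my_rebalance here is a hypothetical name standing in for whatever monthly rebalancing function your copy of the algo already has).

REBALANCE_EVERY_N_MONTHS = 3  # 3 = quarterly, 4 = every four months, 6 = semi-annual, 12 = annual

def initialize(context):
    context.months_elapsed = 0
    schedule_function(maybe_rebalance,
                      date_rules.month_end(),
                      time_rules.market_close(minutes=30))

def maybe_rebalance(context, data):
    # Count months and only rebalance on every Nth one.
    context.months_elapsed += 1
    if context.months_elapsed % REBALANCE_EVERY_N_MONTHS != 0:
        return
    my_rebalance(context, data)  # hypothetical: the existing monthly rebalance logic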

@Zenothestoic Have you checked the quantconnect QC500 universe. The source code is below
Github QC500 Universe Selection Model

@Josh Genao
Yes, I have been through the model in detail and adapted it to create my own Russell 3000 lookalike. But thank you anyway. In an email just sent to a fellow user I expressed my opinion that one should not bother to try and replicate systems on the two Qs exactly. If the Quantopian 3000 picks different stocks than the QC 3000 and the system still works, then who cares. It shows it is robust.

Or rather "has been robust"!

I no longer use this site as I can't live trade or even paper trade (I have also moved to QuantConnect), so I can't share any code, but I have also backtested some versions of this algorithm out of curiosity and wanted to share something back: I really recommend experimenting with the Piotroski F-score as the value filter. I think there are implementations of it on this site as well.

https://en.wikipedia.org/wiki/Piotroski_F-Score
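
In case it saves someone a search, here is a rough sketch of the 9-point score (the field names are placeholders, not Quantopian/Morningstar identifiers - map them to whatever data source you use):

def piotroski_f_score(cur, prev):
    """cur / prev: dicts of fundamentals for the current and prior fiscal year."""
    score = 0
    # Profitability
    score += cur['net_income'] > 0
    score += cur['cfo'] > 0                                   # operating cash flow
    score += cur['roa'] > prev['roa']
    score += cur['cfo'] > cur['net_income']                   # accruals check
    # Leverage, liquidity and dilution
    score += cur['long_term_debt'] < prev['long_term_debt']
    score += cur['current_ratio'] > prev['current_ratio']
    score += cur['shares_outstanding'] <= prev['shares_outstanding']
    # Operating efficiency
    score += cur['gross_margin'] > prev['gross_margin']
    score += cur['asset_turnover'] > prev['asset_turnover']
    return int(score)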

Mikko M
What sort of CAGR have you achieved since 2003?

I have spent many weeks mastering Quantconnect and I have to say it is no great joy. It is unbelievably slow even for the simplest code using daily data and you are unable to run more than one test at a time. Sadly Quantopian is unlikely to reverse its recent decisions on paper trading or live trading but it would certainly make life easier if it would.

I have also found dramatic differences in results between similar systems on each platform but with online IDEs with limited logging output, the task of nailing the differences is a tough one.

I am tempted to download Lean and obtain the data for myself but I do not really relish running a server in the cloud or going through the hoops of connecting to Interactive brokers.

Swings and roundabouts but it is not ideal to rely on a third party provider for one's trading - they are all too capable of pulling the rug unexpectedly.

@Zenothestoic

NOTE: the results below are WITH QuantConnect's Alpha Streams slippage and commissions, with $100k initial capital.
I used pure 6-month momentum (not discarding the last days), as my personal view is that including more recent data is beneficial.

Leverage is 1.

Testing universe is basically "QC500" ie 500 most traded stocks with fundamental data.

With ROE filter, 20 equities and 6 month momentum total return since 1/2003 was 639% ( 12.393% CAGR, sharpe 0.727)
With f-score >= 6, 20 equities and 6 month momentum total return since 1/2003 was 1441% ( 17.322% CAGR, sharpe 0.838)

Mikko M

Thank you. Pretty healthy returns. I'm not yet clear how to apply the slippage models and I am currently using the IB Brokerage Model.

I have drafted models 1) using the alpha framework and 2) using the much simpler classic mode.

For my own purposes I can see no advantages whatsoever in using the fantastically complex alpha framework but of course this is necessary if you want to rent out your algos.

Incidentally, unlike here, there seems to be little discussion between traders on Quantconnect. Perhaps that all takes place on the Slack channel.....

Yes, I agree there is no mileage whatsoever in delaying the signal. Those who promote it seem to be misguided and or simply wrong.

Using the high debt model (on an ersatz Russell 3000 universe) with 5 stocks I get a maximum of 9000% return since 2003 (30% cagr) as opposed to 30,000% (38% cagr) here but that may be as a result of the Interactive Brokers slippage model. I have not yet investigated other quantities or other filters since it has taken me a few weeks to learn the software.

Also no leverage.

@Zenothestoic

maybe I missed it, but how do you actually set up your live trading workflow?
I was thinking of using Backtrader.
Unfortunately the Quantopian platform looks the most advanced, and second-placed Zipline (which I'm using now) doesn't allow for live trading.
It seems that Backtrader has quite a large community as well.
I did not look at QuantConnect in detail.
What would be your recommendation?

Best
Carsten

I am going to start live trading using Quantconnect. Using Quantconnect data and Quantconnect servers + Quantconnect's software to access the IB API. I really do not want to have to re-invent the wheel.

Having done that I may download the Quantconnect software locally and subscribe to fundamental and price data so that I can submit orders manually if I have to, and can work on connecting up to the IB API on my own via a cloud based server.

All of the latter steps will be a grind but as with Quantopian, who knows when Quantconnect may abandon live trading?

I had subscribed to a nice platform with good data and a very helpful guy who set up everything. Unfortunately he had to close the platform as his data provider doubled the cost and he did not have enough subscribers... all the work I spent developing my strategies was gone.

Now I don't want to use a platform any more.
You're right, the Quantconnect offer is nice - but.... I don't like to re-invent the wheel either; instead I just prefer to pay the appropriate fee/cost.

Actually I'm building my own tools. I spent quite some time building a database and the programs to fill it and ingest the data. (I searched on the internet but did not find anything fitting my requirements, and reaching out on Freelancer did not help either.)

Next I'm building a tool to find alpha. I believe it takes a little longer to start, but in the end it's faster to do it more systematically.
I found the Alphalens tool quite promising for seeing whether a single (and later a combined) factor produces statistically significant gains above the market over time, and especially how that changes over time.

Then I would plug it into a full algo and backtest it.
Then start paper trading.
Then go live.
Later, semi- or fully automated on a rented cloud server.

The problem I have with Zipline: price data only and no trading.
The advantage is the carry-over of factors/strategies from Alphalens.

Would it be a lot of work to carry over the strategies from Quantopian to QuantConnect?
Did you already try to set up their software on your PC?
With Zipline it's a medium nightmare to get it running. Backtrader was super easy - like installing an app on the iPhone :)

Best
Carsten

I tried downloading Zipline locally 5 or 6 years ago and briefly got it running. Then it gave up on me. You are right - it was a nightmare. Perhaps they have improved the mechanics now but I have not tried lately.

Likewise I downloaded Lean a few years ago but at that stage they did not offer Python and I could not be bothered with C# which I have no familiarity with.

Quantconnect has taken me quite a few weeks to learn and I am still finding the necessity from time to time to dig into C#. But having cracked it, further systems will not take much effort. Or not nearly so much effort.

If I were you I would try and download Lean and see how it goes. I hope to get around to that this week.

I am not sure if you have heard of or even tried IBridgePy.com?

@Carsten
Are you sure that there is no live trading with Zipline?

I have had some answers back from the folk at Quantconnect.

My algorithm as drafted will not run on their $20 pcm 512MB server, leaving me with the option of cutting back on the number of stocks or opting for the tier with 2 x free 1024MB live servers at $250 pcm.

If you want to run with a Russell 3000 type universe you are going to need to spend $250 pcm. Possibly if you cut the number of stocks down to 2000 you might make do with the $20 server. But it would be a close call. 500 stocks and you probably have no problems.

Net result, many weeks spent learning Lean and some difficult choices to be faced. With sufficient confidence in the algo $250 a month would be perfectly reasonable depending on what capital you trade, but given the great disparity in backtested CAGR between identical systems on Quantopian and Quantconnect, the decision becomes more difficult.

Happily (in a sense at least) there proved to be nothing wrong with my coding.

Hi Zenothestoic

thanks for sharing.
That narrows the decision for me to either a fork of Zipline which allows live trading, or Backtrader.... Unfortunately Backtrader does not have anything like Alphalens. So for research one can use Zipline and for trading Backtrader. A big bonus with Backtrader is that it allows you to use fundamental data.

Hi Carsten
It is just possible that I may be able to slim down the logic in my Quantconnect algo but looking back at what they offered in the past they have already cut down their offering once and so may presumably do so again. The other disadvantage (as with Quantopian) is that using an online IDE is like trying to code while looking through a microscope.

Also, TBH, Quantconnect is very slow and they do not allow multiple concurrent back tests.

"Allows you to use fundamental data" is one thing, subscribing to it as well as pricing data and mapping and matching up the two is a different kettle of fish.

Would you use daily data or minute data? If daily, the whole subscription would be quite cheap since fundamental data is offered quite cheaply by Quandl, and CSI Data have a well priced daily price offering.

All of this still leaves you to load everything on a server (presumably in the cloud) and connecting to the IB API.

And then of course some months or years after that you will discover whether or not the algo is a mere mirage destroyed by slippage in real trading.

Not trying to be negative, merely pondering.

I think the route I will take initially is to try and slim down my algo further so as to run on the cheaper version of Quantconnect in the full realization that this may not provide a long term solution.

Hi Zenothestoic

so far, with a one-month Quandl subscription for $100, I got daily price and fundamental data covering 20 years.
Actually I'm trying to build 3 algos to run together in a portfolio.
I installed Zipline and use Alphalens, which runs fine (after spending quite some time getting it properly installed). But now it's fine.
I saw two options for live trading, but both are forks.
In parallel I will try Backtrader; the concept is quite similar. Backtrader was easy to install and get a first demo algo running - everything within half an hour.
Backtrader has an interface to IB Broker and allows for fundamental data out of the box.
It's too bad that this is not possible with Zipline. It's a very good testing and research environment. If one works a lot with it, one would probably once in a while submit some algos to them. But if they only want people to participate in their contests and do not permit them to trade privately, a lot of people will go somewhere else...

@Carsten,
What data did you get off of Quandl? I use Sharadar's bundle.
I have written loaders for Sharadar, Zacks, and CRSP (not from Quandl). They are available on my Github alpha-compiler repo.
If you are using Sharadar's data it does not include ETFs and the algos in this thread use SPY and IEF, but you can download that data for free.
I also have code to load and use fundamental data, and sector codes. The documentation is not great but others have gotten it to work.

Just thought I'd let you know that code is out there.

Backtrader has an interface to IB Broker and allows for fundamental out of the box.

Good news indeed, I will take a look. Sounds better to ditch Zipline if it does not have this. I have not downloaded Lean - I wonder whether Lean has the connection built in?

Peter, thanks for that I will take a look. Magic - Sharadar includes price data as well.

The market is so disjointed and disparate for people like us. I can't help wondering whether a group should not try and set up some joint effort. There are so many people doing essentially the same thing and it is simply not realistic to rely in the longer term on third parties whose agenda keeps changing.

The big question is, as ever, monetisation.

The choices are:

  1. expect to make money trading - quite a challenge with ever changing markets but some form of momentum trading on stocks seems to have withstood the test of time.
  2. flog something to someone.

On 2, I wonder how the likes of Q and Q are actually making out? I can understand selling trading products to retail (fraught with regulatory difficulties as it is) but I wonder just how easy these people are finding it selling to hedge funds, most of whom presumably have teams of in-house nerds?

It's an odd place the internet - teeming with people but as to how many of them actually make a brass farthing is another matter.

Hi Peter,

I'm using the Sharadar bundle as well. It's the combined price and fundamental data.
But it does not include ETFs, and for that reason it also lacks SPY and so on.
The index data I load from Yahoo.

I spent quite a while building a security master database with several tables.
I also built the code to load data from Quandl and Yahoo and store it in that database, as well as some code to ingest it into Zipline.

If you want you can have the code, or maybe I'll host it on GitHub. I searched quite a while for a database and did not find anything for my needs. This way other people could use it and would not need to waste their time - and maybe someone will build loaders for other vendors' data for the database, which I could use in the future :)

Are you using Zipline as well?

Yes, I use Zipline. I like Sharadar as it has OHLCV, fundamentals, and sector codes, and great coverage of the three.

The strategies I trade rebalance 1x/month so I don't maintain a webserver. You can just run a 1 day backtest in Zipline and write your positions to a text file then use some other code to place your trades or do it by hand. I like to take a look at every trade just as a double check, and to see if there is something like a short selling restriction that would prevent the orders from filling.
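
In case it helps anyone copying this workflow, here is a minimal sketch of the "write positions to a text file" step (my own helper, assuming a standard Zipline context object; call it at the end of the one-day backtest):

import csv

def dump_positions(context, filename='positions.csv'):
    # Write the current holdings to a CSV so the orders can be placed by hand.
    with open(filename, 'w') as f:
        writer = csv.writer(f)
        writer.writerow(['symbol', 'shares'])
        for asset, position in context.portfolio.positions.items():
            writer.writerow([asset.symbol, position.amount])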

@ Peter, Zenothestoic

What about starting a joint/open public GitHub repo?
I don't have the time to maintain something like this, but I'm interested in contributing, as I believe exactly what you (Zenothestoic) said. A lot of people are doing the same stuff, and if you have to build on something from an organization it's tricky, as their interests keep changing. It's better to trust a community.

For me, at the moment Zipline looks quite advanced and I like all the small display tools, especially Alphalens. It's a lot of work to build something like this.

Actually I'm trying to run Zipline with Pipeline, very similar to the online version.
This gives me the advantage of first using Alphalens tear sheets to see if the alpha is significant over time.
It took me a while to get the Alphalens tear sheet running offline and then use a copy-paste solution to integrate it into Zipline.

That's why I believe it would be a great idea to contribute all this stuff to a central place.
If we don't get enough traffic to that place it will not grow.

Some ideas?

I'm definitely up for something but I would prefer to see a definite profit potential.

I modified to take the top 3 companies and also a few other minor tweaks.

@Robert Curzon

Are you aware that leverage is around 4.6x?

Putting that aside, I came independently to the same original algorithm a month ago (haven't been on Quantopian for a few months) and my Backtrader implementation also works better with 3-5 assets. In order to avoid influencing results by picking a "convenient" rebalance date, I stick to the Monday in the 2nd week of the month. By doing so, every month the rebalancing takes place on a different date.
I'm working on implementing the suggestion somebody mentioned here of using the Piotroski score to improve results. I also believe that selecting the stocks with the best momentum first and THEN checking for quality could deliver better results.

Crazy! :-)

I am glad that people are implementing the algo on different platforms using different data and different universes.

It confirms the basic premise but it does wean you away from the idea that any particular parameters are magical.

Here is an alternate Pipeline version of the trend_up signal. When I ported the above Pipeline code to Zipline it had some problems, and I wrote this version. Functionally it is the same, but it is cleaner as you can take out the TrendUp logic and hide it elsewhere.

class TrendUp(CustomFilter):  
    inputs = [USEquityPricing.close]  
    window_length = TF_LOOKBACK  
    def __init__(self):  
        self.spy_sid = symbol('SPY').sid  
    def compute(self, today, assets, out, close):  
        spy_data = close[:, assets.get_loc(self.spy_sid)] # get only SPY data  
        ma_slow = spy_data.mean()  
        ma_fast = spy_data[-TF_CURRENT_LOOKBACK:].mean()  
        out[:] = ma_fast > ma_slow  

You can utilize this in pipeline by:

    pipe = Pipeline(columns={  
                        'trend_up': TrendUp(),  
                        'top_quality_momentum': top_quality_momentum,  
                        },  
                    screen=top_quality_momentum  
                   )  

What about, instead of an SMA crossover, using something more mainstream like a negative GDP growth rate or a yield curve inversion as recession indicators/predictors?

@Robert Curzon

Just by removing the 0% brokerage and slippage settings, the performance drops to less than half.

Looks like there is either a bug in this algorithm or your strategy will make your broker richer than yourself. On top of that you still have to pay back the high amount of leverage you are taking.

@Viridian that won't work. Negative GDP can show up 3 months after the stock market has dropped 10-30%. The stock market is the leading indicator of a recession.

@Shaun Murphy
You are absolutely right, slippage is a problem, however I do not see leverage as a problem for the versions I used.

There are many versions including the original that do not use leverage.

I tested two versions of this algorithm with Zipline on my laptop using the SHARADAR data, and the alpha-compiler code linked above.
The versions I tested were Chris Cain's original version and Marc's high balance-sheet-leverage, weekly-rebalancing version (internally I call this Marc's turnt-up version). Both algorithms maintain a gross leverage of 1.0. I used 10 years of data, so I ran backtests from 2010-07-01 to 2020-02-18. (I needed a little buffer for the momentum lookback data.)

The Chris Cain version backtested similarly on my laptop to how it did on Quantopian: on Quantopian it had a CAGR of 16.4%, and on my setup it had a CAGR of 16.2%.

Marc's Turnt up version was not so accurate. Using debt to equity (hold on) I was able to get 27% CAGR compared to the 43% CAGR with the Quantopian data. The SHARADAR fundamental data only has 91 fields and long_term_debt_to_equity is not one of them (that is the field from Morningstar that Marc used for leverage). I substituted debt/equity for this to get the 27%. Here is where I need some help. Do you think I can calculate something similar to long_term_debt_to_equity using the SHARADAR fundamental data?

  1. The Morningstar long_term_debt_to_equity is defined as long_term_debt/shareholders_equity. I don't have a field for preferred stock in the SHARADAR data, so I cannot calculate shareholders_equity from common equity. Am I missing something here?

  2. Long_term_debt is defined by MS as debt with maturities longer than one year, minus the portion that is going to be paid off within the year. I don't see anything like that from SHARADAR, only non-current debt, which appears to be debt with maturities longer than one year. Am I missing something here?

Thank you for the help.
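
Not an answer to the Morningstar definition question, but one rough workaround I have considered (hedged heavily - the Sharadar field names below, debtnc for non-current debt and equity for total shareholders' equity, are assumptions that should be checked against the SF1 documentation):

def approx_lt_debt_to_equity(record):
    # record: one row of Sharadar SF1 fundamentals (assumed field names).
    # Treats non-current debt as long-term debt and total equity as
    # shareholders' equity, so it is only a proxy for Morningstar's ratio.
    equity = record['equity']
    if not equity:
        return float('nan')
    return record['debtnc'] / equity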

There is definitely a bug in Robert Curzon's code:

context.top_n_relative_momentum_to_buy = 10 # Number to buy  
context.Target_securities_to_buy = 3.0



    for x in top_n_by_momentum.index:  
        if x not in context.portfolio.positions and context.TF_filter==True:  
            order_target_percent(x, (1.0 / context.Target_securities_to_buy))  

It is buying 10 stocks with a weight of 33% each,
which creates an artificial leverage of 3.33.
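
For what it's worth, a minimal sketch of one way to keep the sizing consistent (an illustration only, not Robert's code): size each new position by the number of names actually bought, so gross exposure stays near 1.0.

context.top_n_relative_momentum_to_buy = 10            # number to buy
context.Target_securities_to_buy = 10.0                # keep these two in sync

for x in top_n_by_momentum.index:
    if x not in context.portfolio.positions and context.TF_filter == True:
        order_target_percent(x, 1.0 / context.Target_securities_to_buy)  # 10% each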

Hi

I'm trying to compute the momentum using the slope and R2 idea from Andreas Clenow.
He bases it on:

def my_rebalance(context, data):  
.....
hist = data.history(context.security_list, "close", hist_window, "1d")  
momentum = hist.apply(slope)  

I would like to use it inside make_pipeline:

def make_pipeline(universe, context):  
.....
momentum = Slope(inputs=[USEquityPricing.close],  
                        window_length=200,  
                        mask=universe)

return Pipeline(columns={  
            'momentum' : momentum.top(10)  
                             }, screen=(universe) )      

I defined a class:

class Slope(CustomFactor):  
    """  
    Input:  Price time series.  
    Output: Annualized exponential regression slope,  
            multiplied by the R2  
    """  
    inputs = [USEquityPricing.close]  
    window_safe = True

    def compute(self, today, assets, out, close):  
        x = np.arange(len(close))  
        log_ts = np.log(close)  
        # Calculate regression values  
        slope, intercept, r_value, p_value, std_err = stats.linregress(x, log_ts)  
        # Annualize percent  
        annualized_slope = (np.power(np.exp(slope), 252) - 1) * 100  
        #Adjust for fitness  
        out[:] = annualized_slope * (r_value ** 2)  

But it's not working at all. It looks like I have to somehow change close to a DataFrame....

Could someone help me with this?

Thankx

@Carsten

  1. Add window_length=200 to your class definition. In a CustomFactor the close price is a NumPy 2-D array, unless you have window_length=1. Now close will have shape (200, number_of_assets).

  2. scipy.stats.linregress builds one model at a time. You are trying to build a model for each asset, so you want to do a for loop (not the best in terms of speed).

This does what you are trying to do, but it is not the fastest thing in the world. Merry Christmas.

# numpy and scipy are needed here; the pipeline imports (CustomFactor,
# USEquityPricing) are assumed to already be present in the algo.
import numpy as np
from scipy import stats

class Slope(CustomFactor):  
    """  
    Input: Price time series.  
    Output: Annualized exponential regression slope,  
    multiplied by the R2  
    """  
    inputs = [USEquityPricing.close]  
    window_safe = True  
    window_length=200  
    def compute(self, today, assets, out, close):  
        out_pre = np.zeros(close.shape[1])  
        x = np.arange(close.shape[0])  
        for i in range(close.shape[1]):  # loop over all the assets  
            log_ts = np.log(close[:, i])   # 200x1  
            # # Calculate regression values  
            slope, _, r_value, _, _ = stats.linregress(x, log_ts)  
            # # Annualize percent  
            annualized_slope = (np.power(np.exp(slope), 252) - 1) * 100  
            # #Adjust for fitness  
            out_pre[i] = annualized_slope * (r_value ** 2)  
        out[:] = out_pre  

@Peter Harrington.

Firstly, I was referring to Robert's code, not yours.

Also, I noticed that Yahoo Financials has long-term debt and total assets on its balance sheets.

To approximate long-term debt to equity from those two items, you could calculate:

long-term debt / (total assets - long-term debt)

using (total assets - long-term debt) as a rough proxy for equity. These items are commonly available from the balance sheets and can be obtained for free by just getting financials data rather than fundamentals.

The more I research fundamentals, the more I find most of them not useful at all. They use share prices or the number of shares as a way to compare companies far too often. You would not compare companies based on the number of shares they have issued or their share price, so why do fundamentals include these as metrics to compare companies?

If it's long-term debt per share and total assets per share, then it can make sense, as they are using a constant rather than a variable. But something like sales per share is calculated as Sales / Average Diluted Shares Outstanding, and not all companies have the same average diluted shares outstanding, right?

A better metric would be Sales/EV. For the same reason you cannot compare companies based on EBITDA unless you use EV/EBITDA.

It seems as though through an obsession to find ways to calculate fundamentals not commonly used by other share trading firms, they have only found ways to outsmart themselves.
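
For illustration, a minimal sketch of the EV-based ratios mentioned above (my own helpers, with EV approximated as market cap plus total debt minus cash):

def enterprise_value(market_cap, total_debt, cash):
    # Common approximation: EV = market cap + total debt - cash & equivalents.
    return market_cap + total_debt - cash

def sales_to_ev(revenue, market_cap, total_debt, cash):
    return revenue / enterprise_value(market_cap, total_debt, cash)

def ev_to_ebitda(ebitda, market_cap, total_debt, cash):
    return enterprise_value(market_cap, total_debt, cash) / ebitda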

@Peter

I just went to your GitHub repo to find out how to include the fundamental data in Zipline, but I do not understand how it is done.
It looks like it was not ingested; instead you read it during the Zipline run?
Could it be included during the ingestion process?
I'm using a SQL database to store everything.
Would it be possible to ingest some fundamentals together with the price data?
I just stored the files for the SQL database and the importing and ingesting code in my GitHub repo, https://github.com/carstenf/Security-Master
Just to let you know, in case you are interested.

I also found Basic_Pipeline_Usage.ipynb on your GitHub.
I'm using something similar to run Alphalens with make_pipeline on my MacBook (also in my repo), which is working, but way more cluttered.
The Zipline library is missing get_pricing and the possibility to load individual bundles.
To get it running I had to build some workarounds, which make the tear sheet a bit cluttered.
You are using a very elegant way to store these things in a library.
How do I install that library in my Zipline setup? I could do a similar thing for my tear sheet.

And the last question: is there a way to run Python/Zipline with more cores? I just have a MacBook Pro with 2, but it would already double the speed.
(I also find it strange that Python only uses around 30% CPU, which is only 60% of one core.)

Thank you
Carsten

@Carsten, I will send you a PM and we can keep this thread to the discussion of quality companies in an uptrend.

@Peter and Carsten,

I would be happy to help out. I already have zipline working and have started looking into Backtrader.

@Shaun

thankx, are you using Zipline with fundamentals?

@Shaun Murphy, do you have the fundamentals and sector codes working in Zipline?
If not a good place to discuss this would be the Zipline Google Group, or you can just PM me.

Once you get that working, I could use some help figuring out why the higher-return versions of this algo perform worse on the Sharadar data than with the Q data. Right now, I get 23% CAGR with my Zipline setup, but on Quantopian the CAGR is over 40%. I implemented a filter similar to the Q3000US and that hurt performance; specifically, what hurt performance was removing trading on the OTC market. (This is certainly good to know, as I would avoid OTC positions.) It could be that the Sharadar data is just missing some tickers. That is what I'm looking into now.
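
For reference, the kind of OTC filter being discussed amounts to something like the sketch below (it assumes the Sharadar tickers metadata exposes an 'exchange' column - an assumption worth verifying against the docs):

import pandas as pd

def drop_otc(tickers_df):
    # Keep only exchange-listed names; drop anything flagged as OTC.
    return tickers_df[~tickers_df['exchange'].astype(str).str.contains('OTC', na=False)]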

I'm not sure where the best place is to discuss a specific implementation of this algorithm. Is it here, or on the Zipline Google Group? I feel neither of those is perfect.

@Peter

Are you aware that some companies publish reports in two different ways? One is the standard accounting way, and then they use their own version.
The more expensive data providers give you both; Sharadar uses the standard one.
You might compare the raw inputs from Morningstar and Sharadar to see whether the factor values are the same.
(For EPS you will for sure find different numbers. I was trying to replicate the William O'Neil strategy (I use the IBD service) with Matlab and got a different ranking for TEAM and CYBR, while the one for AAPL was spot on.)

Hi

I got the momentum based on slope and R2 running. Unfortunately, with loops it's way too slow (as expected) to optimize anything.
From my first try-outs, it looks like it produces a less volatile momentum curve, but with a bit less return as well.

To speed up the function, I tried to parallelize the code in this way, but it looks like there is an issue with the linregress function.

def mom(x,close):  
    log_ts = np.log(close)   # 200x1  
    # # Calculate regression values  
    slope, _, r_value, _, _ = stats.linregress(x, log_ts)  
    # # Annualize percent  
    annualized_slope = (np.power(np.exp(slope), 252) - 1) * 100  
    # #Adjust for fitness  
    return annualized_slope * (r_value ** 2) 

class Slope(CustomFactor):  
    """  
    Input: Price time series.  
    Output: Annualized exponential regression slope,  
    multiplied by the R2  
    """  
    inputs = [USEquityPricing.close]  
    window_safe = True  
    window_length=200  
    def compute(self, today, assets, out, close):  
        out_pre = np.zeros(close.shape[1])  
        x = np.zeros(close.shape[:])  
        for i in range(close.shape[1]):  
            x[:,i] = np.arange(close.shape[0])  
        out[:] = mom(x,close)

you can call the function from

def make_pipeline(universe):

    momentum = Slope(inputs=[USEquityPricing.close], window_length=200, mask=mom_filter)  
    return Pipeline(columns={  
                              'momentum': momentum,  
                          }, screen=(universe) )

Does someone have an idea where to look?

Apart from that, does someone have another idea for a momentum function which produces less volatile outcomes?
(The classical return function sometimes catches stocks with sudden gap-ups which then later very probably crash.)

Thanks

@Peter and Carsten,

Today I was working on fixing a scikit-learn/numpy version issue in my zipline/backtrader environment, which is all up and running now.

I do not have fundamental data yet, but I do like the idea of the Quandl Sharadar SFA bundle for $99 a month - even if I only sign up for a month to ingest and save all the data and then cancel until I am ready to use current data.

I have started looking into adding the fundamental and sector data to zipline from the zipline Google group, as well as working on stock selection criteria similar to the Q3000US. These are the things I can start to work on now.

It looks like you need EventVestor mergers and acquisitions data to mimic the Q3000US. This looks expensive, as you have to enquire; they don't provide their prices online. I wonder if there is a way to search StockTwits for companies with announced mergers and acquisitions and remove them from trading if they are affected.

I agree that this is not the best place to chat about this. Maybe we could start a group chat somewhere, so we can take this off Quantopian. We could start our own Google group or something similar.

@Shaun
You get the corporate action data with the Sharadar bundle. It includes adjustments for mergers, acquisitions and splits.

@Carsten Check out this post for how to build a faster beta. You may need to modify it a little as you want to have a static left side and this uses a static right side.

LMK if you need help putting it in a custom factor.
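
For illustration, here is a sketch of what a loop-free version of the Slope factor might look like (it assumes the same pipeline imports as the looped version above and computes the regression in closed form with NumPy instead of calling stats.linregress per asset):

import numpy as np

class FastSlope(CustomFactor):
    """
    Vectorized sketch of the Slope factor: annualized exponential regression
    slope of log prices vs. time, scaled by R^2, for all assets at once.
    """
    inputs = [USEquityPricing.close]
    window_safe = True
    window_length = 200

    def compute(self, today, assets, out, close):
        n = close.shape[0]
        x = np.arange(n, dtype=float)
        y = np.log(close)                       # shape (n, n_assets)
        x_dev = x - x.mean()
        y_dev = y - y.mean(axis=0)
        sxx = (x_dev ** 2).sum()
        sxy = (x_dev[:, None] * y_dev).sum(axis=0)
        syy = (y_dev ** 2).sum(axis=0)
        slope = sxy / sxx
        r2 = (sxy ** 2) / (sxx * syy)           # squared correlation with time
        annualized_slope = (np.exp(slope) ** 252 - 1) * 100
        out[:] = annualized_slope * r2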

I looked at the raw fundamental values from the two data providers and they are not even close. I looked at eight values of debt to equity and two of them even had the wrong sign.

@Peter, thanks, just sent you a PM

Regarding the raw fundamentals:
I guess all providers have some wrong numbers; it's just a question of how much they affect the outcome.
If you created an Alphalens tear sheet and compared the raw factors, you should see how much each of them deviates...
I just hope it does not explain the difference between 23% and 40%, otherwise we can stop here and think about ideas for casinos.... :)

My suspicion would be that 40% is an illusion for a number of reasons, some of which have been stated.

For anyone interested in the more exotic versions of this algorithm, promising extravagant returns by pursuing highly counter intuitive fundamental filters, I highly recommend back testing the system outside of the Quantopian IDE.

The difference lies mainly in the universe. Here we have the Quantopian 3000 but little indication (?) as to how that is arrived at. In Quantconnect the universe selection is more or less DIY and selection of the more louche stocks cheers up the performance.

This is interesting. Therefore, I guess the safest algorithms for retail investors to use are the ones with the entire stock market as the universe, letting the algo sort the universe on its own? This is perhaps achievable by writing a custom Filter for the universe selection.

How would I use this to live trade?

Hello,
I am relatively new to Quantopian and have been trying to make a simpler form of this code. I'm currently getting a runtime error on line 26 with security_list. Any chance someone can help me out?

The route I am taking is to secure my own fundamental and price data and to back test the system for myself on my own server using basic Python/Pandas/Numpy.

The limitations of these online IDEs are severe- very severe indeed. You need to sort the data yourself, make your own different and unbiased universes. You need to be able to inspect every aspect of every line of data. You need to be able to see the portfolio and its progress through time. You need to be able to look at volumes, dividends; everything. Using an online IDE may be fine for an initial exploration - and indeed useful. I am gratified to have come across this simple idea and would not have done so other than through this thread.

But to test whether the idea has legs, it is ludicrous in my view to rely on one on-line back tester where you can not download and analyse the data.

I have downloaded zipline successfully but the frustrations are enormous right from the go. Outstanding issues make it pointless unless you are prepared to spend months if not years sorting them out.

The route I have taken is to start from scratch and use my reasonable competence with Python and its libraries to back test for myself from scratch. I do not need the frustrations of somebody else's software, on or off line.

The Python community is so huge that every question will have been answered on StackExchange or elsewhere. Not so for something like Zipline and its myriad of outstanding issues.

Just a personal view of course but having spent some weeks on both Quantopian and then the Quantconnect IDE with vastly differing results, for me the only sensible solution is to go it alone.

I believe the basic premise of this system "works". I believe that every source of data (whether price or fundamental) will produce different or very different results. The same goes for each back tester. But basically, the two-stage sort (fundamental and momentum) can produce above-market returns. On an absolute basis anyway.

Call me obsessive and anal but I believe the best course of action is to return to basics and do it all yourself. And given the trade frequency, there really is no need to automate the orders, especially given the facility at IB to upload a CSV for the rebalancing.

For anyone familiar enough with Python and its dependencies, to back test one simple system such as this from scratch is a walk in the park and a satisfying and instructive one at that.

Regarding my progress on this, I tested the original version of this algorithm and the results were similar (comparing my own setup and Quantopian's) over the last 10 years.

I was testing one of the higher return versions, and I realized the Sharadar fundamental data did not have the information needed. Namely, there is no way to calculate common equity. I will need to find an alternate source. It would be ideal to get Morningstar's data and I am waiting to hear back from them. Zacks also has a fundamental dataset which could be used. It is $1200/year on Quandl and you get 10 years' worth of data.

If anyone has any other ideas on where to get fundamental data, I am all ears. I know some have suggested pulling it from Morningstar's website for free, and that is well documented, but it is limited to five quarters.

@Peter

I have not been able to test the fundamental data so far; I'm still trying to find a way to use it with Zipline... but if I run into the same problem as you, and I guess I will, I would look for some historically top-notch data. Let's say 20 years of top-notch data, 1998 to 2018, would be great.
It's just a question of where to get it. I would look to a university... most of them get data from the top-notch data vendors for research purposes.
Maybe you know somebody there. I was already playing with the idea of paying the 100€ enrollment fee for one semester (at least in Europe it's inexpensive) and getting the data....

Even if you have the top data, you still need to clean it.
I have read a lot that the cleaning process is quite an important and overlooked issue.
You would need to store the original and then generate the improved version.

If the cleaned historical data performs as you expect, you could test it against cheaper data vendors' data.
Creating a kind of benchmark would be great (including an algo).
In the end, I don't think my algo will justify paying something like $100k/year for data (at least that was the price I understand you need to pay for the top-notch vendors).

Hi @Peter,

I'm building a Dockerfile to set up the environment and will upload it to GitHub if I succeed. You mentioned you use Sharadar for fundamentals, but where are you getting the base universe and price data?

Thanks in advance.

@Zenothestoic

I just tried to build a backtester of my own in Matlab. Interesting experience, you learn a lot...
Before you put a lot of effort and time into that endeavor, just ask yourself:
Why do you believe that your homemade one does not have built-in mistakes and performs better than Zipline?
I answered that question for myself, changed language, and have now moved to Zipline.
The biggest advantage of open code is that it is reviewed and tested by a huge number of people for free.

Did you get Zipline running so far? For me it was a bigger pain than a root canal treatment, but now it's fine, like the root canal :)

Yes, I got zipline working on my laptop. But I did not, have not and do not intend to integrate it with fundamental data.

I have done much backtesting using Python Pandas and so on and I get a great deal more satisfaction from doing that than fiddling around trying to understand other peoples software.

In particular I feel burnt and pissed off having spent weeks learning Lean only to discover Quantconnect could not handle my system at a price I am comfortable to pay.

I do not wish to mess around with bundles, ingestion or any of the rest of the shenanigans when I have done such work so often with all sorts of data (including the complexities of options) . Using simple dataframes. I have done the work before, using CSI data, of replicating the Russell 3000 for instance.

Python and Pandas are open sourced code and used by a far larger number of people than zipline. I have been coding systems for 20 years and simply don't need anything other than the basic tools.

But of course it is a personal choice.

Actually, in any event, I am in contact with somebody who has set up the entire zipline thing in the cloud with the connection to IB and who is also setting up fundamental data. That person is somebody who has been in software as a business all his life and yet it has taken him (I think he said) years to get the setup to this stage.

If I wish to trade automatically I shall do so through his set up.

As an irrelevant and irreverent aside, my suspicion is that these algos will probably disappoint in time, particularly the very high performing ones. It is all based on prediction from past events. Which is not so very foolproof as countless failed hedge fund managers have managed to prove so admirably over the years.

My suspicion is that the most successful hedge funds may be the very short term operators who rely on things like trade flow. I am assuming (but may of course be entirely wrong) that funds such as Renaissance / Medallion use rather more reliable methods of "forecasting" than most of us have been using here. I am assuming that "Flash Boys" probably paints a fairly accurate picture as to why and how those people make so much money. But who knows.

It is particularly noteworthy that their equity fund has met with limited success by comparison to Medallion (their HFT effort).

Prediction over the very short term may be a better bet than forecasting (by way of example) that a small clutch of very highly indebted companies will shoot the lights out over the next twenty years as they (appear!) to have done over the past twenty.

Anyway please excuse my cynicism.

Incidentally on Sharadar one does indeed need to be careful to make sure one is comparing apples to apples. I have not really done too much work on it so far, but I was a little alarmed that JPMorgan's DER was said to be over 11 in the sample data provided for 2018.

Here is Morningstar's definition:

Debt-to-Equity Ratio: The debt/equity ratio is calculated by dividing a company's long-term debt by total shareholders' equity. It measures how much of a company is financed by its debtholders compared with its owners. A company with a lot of debt will have a very high debt/equity ratio, while one with little debt will have a low debt/equity ratio. Assuming everything else is identical, companies with lower debt/equity ratios are less risky than those with higher such ratios.

Here is Sharadar's definition:

Measures the ratio between [Liabilities] and [Equity].

Not the same thing at all. Whether Sharadar contains the same definition of "long term debt" as Morningstar I have not yet troubled to ascertain. Nor have I looked at "Equity".

This is no grumble at either party - merely an observation as to my surprise.

I believe the "correct" figure for JPM's DER in 2018 was more like 1.1.
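
To see why the two definitions land an order of magnitude apart for a bank, a back-of-the-envelope illustration with made-up round numbers (in $bn, roughly the right shape for a large bank, not JPM's actual figures):

equity            = 250.0     # total shareholders' equity
long_term_debt    = 280.0
total_liabilities = 2400.0    # dominated by deposits, payables, etc.

morningstar_style_der = long_term_debt / equity      # ~1.1
sharadar_style_der    = total_liabilities / equity   # ~9.6, i.e. order of 10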

Anyway, just saying......

@Zenothestoic

No worries mate :)

Interesting comment in your last paragraph, the one before the cynicism.... could be.
I was thinking that mean reversion, momentum and factors should always work; it's just a matter of understanding which regime we are in at the moment.
And if you have less money you can jump ship much faster, which is the big advantage we have.
If you have a lot, you need the latest trick, and that probably fades away faster.

I'm quite new to Python, but I'm starting to like it..
As I have limited time, I would like to invest it where it matters.
By the way, do you think it would pay off to use Quantpedia to get building blocks?
I'm looking towards several diversified strategies, around 10 different ones.
I'm shooting for 30% with low volatility.

The person you are in contact with did not publish that on a blog or on GitHub?

"Just a matter to understand in which regime we are at the moment."

Many have tried. Few have succeeded in the long term.

In particular look at the amusing history of JW Henry and Bill Dunn. The wonders of commodity trend following in the 1980s became a very different game in recent decades.

The person you are in contact with did not publish that on a blog or on GitHub?

I think he will make an announcement but I would not wish to comment at this stage. I think it is still a bit of a work in progress. Or at least (as I understand it) not wholly ready to offer out as reliable "Software as a Service".

@Zenothestoic

OK, so that will then be another monthly paid service...
I would prefer publicly shared software because of the peer-review advantage (even if I need to pay something; but if it has no public review then I don't trust it... it's an oxymoron).

Regarding the other comment, what kind of strategies are you using? Or are these totally creative, off-the-beaten-track new ones?
Then you would need a lot of time for development.

I'm not using any strategies currently. The last serious effort was trend following on futures. I successfully played the Vix a year or so ago.

To make decent money (imho) you either need stable assets under management or a game where the dice are heavily loaded in your favour.

I have been lucky enough to benefit from the latter on occasions in the past but am currently bereft of ideas.

I signed up for the Quandl Sharadar SFA bundle, and I have a serious issue with the SEP data. When I download the entire CSV table, I only get 3 months of data, dating back to 2019/11/29. If I call the API I hit a limit of 10,000 rows of data. With the 16,000 stocks Sharadar provides data for, I cannot even get a single day's worth of data without hitting this limit.

Of course zipline ingest does not work for the Sharadar SEP data.

The only thing I can do is look at individual ticker data to get periods dating back 20 years and export the .csv files individually; however, this would require me to source my own universe of stocks before I even export the data. Then I would have to find a way to bundle and ingest this data, which will create a lot of work.

@Carsten and @Peter, when you both got the Sharadar SEP data did these limits apply?

The alternative is to sign up for SEP separately here https://www.quandl.com/databases/SEP/pricing/plans for $399 for a year. But there is no guarantee that I would not hit these same limits.

I have contacted Quandl to request access to the data I purchased. I'll wait to see what they say.

@Shaun Murphy

yes, happened to me in the first run as well...
I use different calls for different amount of data, just see my repo https://github.com/carstenf/Security-Master
just check quandl_to_db.py

Here is one more version of the algo that goes long or short depending on the trend. It has some of @Vladimir's tuning but keeps @Chris's fundamentals regarding quality and momentum; I find those easier to explain (even if the results are less impressive).
In this version, when the market is not in an uptrend, I short bad-quality companies with no momentum.

Hi Marc,

For your bottom strategy, did you realize that you are selecting the bottom of the top 60 ROE stocks, not the bottom ROE stocks of the Q3000US? Also, they are still being selected from a pool of stocks that meet the momentum criteria. You would need to create a bottom-momentum condition (SMA 200 > SMA 50) and a context for the bottom 60 ROE stocks for your bottom strategy.

Trying to short good quality companies does not make much sense.

Hi @Shaun,

For your bottom strategy, did you realize that you are selecting the bottom of the top 60 ROE stocks

Looking at the code, I would say I'm getting the 60 lowest-quality stocks of my universe:

    context.TARGET_SECURITIES = 5  
    context.TOP_ROE_QTY = 60  
    universe = Q3000US()  
    bottom_quality = quality.bottom(context.TOP_ROE_QTY, mask=universe)  
    bottom_quality_momentum = momentum.bottom(context.TARGET_SECURITIES, mask=bottom_quality)  

Also, they are still being selected from a pool of stocks that meet the momentum criteria.

trend_up is set based on SPY. I'm using it to decide whether to go long or short (unless I'm doing something wrong). To choose the stocks, I filter the 5 with the worst momentum from the 60 lowest-quality stocks chosen before:

    context.TARGET_SECURITIES = 5  
    context.TOP_ROE_QTY = 60  
    universe = Q3000US()  
    bottom_quality = quality.bottom(context.TOP_ROE_QTY, mask=universe)  
    bottom_quality_momentum = momentum.bottom(context.TARGET_SECURITIES, mask=bottom_quality)  
    if context.trend_up:  
        total_weights = context.stock_long_weights  
    else:  
        total_weights = context.stock_short_weights  

Trying to short good quality companies does not make much sense.

Agreed. My code might be buggy but that is not what I'm trying to do.

Thanks!

@Zenothestoic

I have looked at https://blueshift.quantinsti.com/ (it currently supports Alpaca) and they are working on adding Interactive Brokers as well. However, they lack fundamental data at the moment.

I have spent quite a few weeks/months on the uphill curve with QC/Lean, but the backtests seem very slow, especially after adding more stocks to the 'universe'.

I am exploring setting up pipeline_live with Polygon fundamental data, but would really prefer using a server.

Hi,

I'm new to Quantopian and trying to learn and understand. Awesome thread, thanks everyone. I have a basic question: how can we replace the bonds with gold? I tried gold ETF and gold spot symbols, which are giving me an error. I searched the forum; one gold symbol used there seems to be deprecated.

Thanks beforehand.

Use symbol('GLD'). Keep in mind it will not work on a backtest starting prior to Dec 2004.
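
A minimal sketch of the swap (the variable name context.BONDS is a guess at how your copy of the algo stores the defensive asset - adjust it to whatever name your version actually uses):

# Somewhere in initialize(), where the defensive asset is defined:
context.BONDS = symbol('GLD')   # instead of symbol('IEF'); note GLD data only starts Nov 2004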

Thanks Viridian. Bond results were better than gold when I ran the comparison.

Is this sustainable in a live environment?
If not, what needs to be improved for it to be 'live' ready.

@Matthew, can you please clarify what you mean by "sustainable" in this question?

Is this sustainable in a live environment?

If you mean, can this be traded live, absolutely.
If you mean are the returns shown in this backtest likely to be repeated going forward, that is another question.

@Peter Harrington

Hey Peter,

Sorry for the confusion. I guess by sustainable I mean reliable. I am looking to throw some money into it ($1000) and let it run for 5+ years, but am scared by the level of returns. The leverage spikes to 2 a few times and I would not like to be trading on margin. I have seen many great algorithms here, but I lack some knowledge to decipher which algos would be practical and risk limited enough to take to a live environment.

Also, if anyone has experience with TD Ameritrade TOS or the Ameritrade developer tools, has taken any algorithms live, and can help me through that process, that would be great.

Thanks for the help!

@Matthew There are a number of risks. Some of them are discussed above in this thread.

People worked on this strategy without employing a hold-out period. This makes it at particular risk of being "overfit." Especially if you consider that many of the variations of this strategy posted above only hold 5-10 stocks at a time and turnover is very low, it's not very many datapoints. In other words, it could be latching on to a spurious correlation, which ultimately might not be predictive. The prudent thing to do would be to wait 6 months to a year and see how things are holding up.

Another risk is that market regime changes and what once worked stops working.

Both of these issues affect particularly the trend filter, which has been tuned to past events. We know every stock market crash/recession is different. So there's no reason to believe that what worked in 2008 will be effective in 2020.

Regarding the leverage spiking to 2, I believe I fixed that issue in the versions that I posted. I believe that was an artifact of the original algo's ordering logic.

Regarding executing this algorithm live, I don't see any reason to automate it. It holds few positions, turnover is low, and it exhibits negligible alpha decay. Just run a backtest through yesterday and check what the positions are at the end of the backtest and order them manually. Rebalance monthly -- it'll take you 15 minutes.

@Viridian

Thanks for the insights! I read the above the posts, but appreciate you making sense of it.

Hi @Viridian, thanks for the inputs. Just to add: like Matthew, I'm new to quant, but not so new to stock markets. I'm also looking at integrating an IB account with algo trading; I can understand code but haven't coded much myself - still learning..

I'm wanting to learn or be guided by anyone above who can spend some time - say a 10-20 min talk - to modify some ideas, help in integrating the account and start trading... my gmail is ajmal.doc @, kindly contact me if you are interested; I'm willing to pay a reasonable amount for the services.

@Viridian.. As you have said, the leverage issue is fixed in your code. Can you advise on the points below?

  1. Why can't we have a holdout period integrated - or rather a cash-out rule with an if condition (cash out or stay invested)? For example, in a year when the trend is positive we take out a fixed percentage, like 10% of profits, or the average SPY market return of that positive year, at a fixed time, or even at an optimized time within six months, something like that...

  2. As brokers do - giving higher leverage in positive times and stopping it in negative periods - can we not use higher leverage during uptrends and reduce it during downturns?

  3. From the discussion it looks like you don't believe much in stop losses; looking at the data and backtesting, they seem to help little. That is strange for me, coming from a technical analysis background, but a stop loss to avoid giving back profits, combined with cashing out regularly, would be a good addition.

  4. Spare my lesser knowledge, but can't the optimize function be used to optimize the lookback time frame for the trend, the crossover, the stock weighting or the number of stocks? Each would in fact require a different set.

  5. What you said is true, that what worked earlier might not work today, but the best we can do is learn from the past and also keep a hands-on approach to the algo, intervening or overriding when needed.

  6. Why does the code need to be invested in a bond at any time - is it an alternative to holding cash, or what's the logic behind that in the code?

  7. Going long and short simultaneously should generate more profits, but it doesn't - has anyone tested it with the above code?

Mohamed Ajmal
You can increase the sample size by trading more frequently. I have tested it weekly and daily rather than monthly. Mr V Hawk quite rightly pointed out the dangers of trading 5 stocks. To be frank that probably amounts to quite a high risk gamble. 20 to 50 would be safer from the point of view of single stock risk.

The factor you use can also be an issue. Given the pandemic, companies with high debt may be considered a particular risk - how can they pay their interest charges if their shops and factories are closed and they have no income coming in?

The particular 5 stock High Debt algo I am considering is in the middle of a 40% drawdown thanks to this virus. The market is only down 20% from its recent high.

As to the canary in the cage - take a look at 1987. I was a stockbroker in Hong Kong during that crash and it was so swift and dramatic that few if any indicator would have saved you. Luckily it was a V shaped drawdown with a swift recovery.

If you are not prepared for high risk then look to trade a combination of good fundamental factors not a single high risk factor with this algo and look to hold 20 stocks upwards. Frankly your risk then becomes more or less market risk with a momentum slant.

The only surefire way to make huge profits is either to manage money at least competently, or to find an unorthodox and possibly immoral edge. Read Flash Boys by Michael Lewis, for example. Watch Wolf of Wall Street and Wall Street.

Nothing is secure or for sure. Few people will make big money over the very long term by trading predictive algorithms which rely on the past to predict the future.

Look at the Turtles.

Impressive returns. I would say the strategy probably suffers from a large dose of outcome bias and/or (controversially) lookahead bias. The returns are impressive, but it's a well-known fact that quality/momentum worked well in the US over that specific sample period. A guess - the returns will be less impressive if tested on a Japanese universe, where those particular factors worked less well. The strategy might be taking unintended long bets on healthcare stocks and short bets on financial/oil stocks. This type of factor risk exposure should be avoided to build really robust multifactor strategies. Try employing the same stock selection process one MSCI sector at a time, match sector beta (optimisation algo) and aggregate into the final portfolio rebalanced monthly. My guess - IR drops materially. It's a bit like building a portfolio that's long the tech/healthcare ‘factor’ and short the banks/oil/industrial ‘factor’. Such a portfolio will do similarly well over that period but might be meaningless going forward.
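
For anyone who wants to experiment with that sector-by-sector idea, here is a minimal pandas sketch of my own (assuming a DataFrame with 'sector', 'roe' and 'momentum' columns; it is not code from any of the algos above):

import pandas as pd

def sector_balanced_selection(df, top_roe_per_sector=10, top_mom_per_sector=2):
    # Rank quality within each sector first, then momentum within that subset,
    # so no single sector can dominate the final book.
    top_quality = (df.sort_values('roe', ascending=False)
                     .groupby('sector', group_keys=False)
                     .head(top_roe_per_sector))
    return (top_quality.sort_values('momentum', ascending=False)
                       .groupby('sector', group_keys=False)
                       .head(top_mom_per_sector))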

I do not understand how this strategy could have "lookahead" bias. I agree that it may not work going forward if that is what you mean but the same could be said about each and every back tested strategy. Quality can never be a bad thing. Any sensible long term investor would do well to invest in quality stocks while realizing that in some periods he will be outperformed by the latest fads. In the long term however he is likely to do well.

Momentum may be a different matter. Traditional CTAs have certainly struggled with trend following this last decade. Who knows, perhaps the momentum effect may end.

There is no shorting in this strategy so the system is not taking short positions in finance, oil or any other stocks.

Perhaps for those who are really seeking what you describe as a "multifactor" strategy, the investor should simply stick to widely based general index trackers. There is certainly no shame in that and almost by definition each and every factor available in the market will be included.

I'm afraid we could talk ad infinitum but it all boils down to useless prediction and talk. And speculation. Quality is NOT a factor - it is a measure of financial prudence. But I can understand why you seek to define it as such.

To be perfectly honest, much of the talk on trading forums is simply "mathturbation" as Seykota would put it. As I ventured above, in the very long term perhaps the only likely chance of success is very wide diversification. Unless of course you have some built in advantage as do those who deal on inside information or have information on order flow.

Looking back to my time in Hong Kong in the 1980s, I would not have gone too far wrong with wide global diversification. Japan included. You will never diversify away rotten eggs so you need to dilute them.

As I droned on about elsewhere, Imperial Russia, the Kaiser's Germany and Argentina were the hot markets in the 1900s. Wide diversification would have been a wise policy in those days; it remains so.

Lookahead bias was probably stretching the definition too far. What I meant was that we all build strategies that we expect will work based on our own experience and what has worked in the past. The sample data usually starts at some point in the distant past (2002?) and at least some of our experience was gained during the sample period. Thus there is enormous information leakage into a lot of trading algorithms.

You say there is no shorting in the strategy, but it is measured against SPY, so of course there are active overweight and underweight (effectively short) exposures. There is no way the strategy screens for high ROE and ends up underweight (high-ROE) healthcare and overweight (low-ROE) banks.

Quality is definitely a well recognised factor in the industry. FTSE : "Following Asness et al. (2013), we consider quality as the consistent ability to generate strong future cash flows. We assess quality from several perspectives: profitability, operating efficiency, earnings quality (accruals) and leverage. "

@Zenothestoic

I do see there is "lookahead" bias. Very simply: there is a function before_trading_start(context, data), yet all the entry evaluations happen after market open.

I think one of the important purposes of before_trading_start(context, data) is to avoid "lookahead" bias. Maybe one could say this "lookahead" bias is not so bad since the strategy trades weekly or monthly.

Thomas
All calculations are done on the previous night's close. In my algo anyway. There is therefore no lookahead involved.

Cyril
Totally agree on the dangers of backtesting. I agree that the weightings in this strategy will differ from an index such as the S&P although of course you could correct for that if you so chose. Nice quote from Asness. The word "Factor" always strikes me as somewhat akin to market timing. Now is the right time for this or that factor. I think my real point is that "quality" is not some fad, in vogue today out tomorrow. Quality is a consistent and lasting way of measuring companies which should last for the long term. But I am quibbling. People are of course free to use whatever terms they like.

@Zenothestoic

All calculations are done on the previous night's close.
Are you sure?

I see in your code the following:
... spy_ma50 = data.history(context.spy , "close", 50, "1d").mean()
...

Have you ever tried to put this code in the before_trading_start(context, data) and see if there is difference?

Thomas Chang

There are probably greater problems to worry about. In particular those raised by Cyril Bosch. Back testing is not the greatest tool for forward looking prediction.

I don't think there is any way around this.

@Zeno

I agree. Quality companies that have a sustainable competitive edge, pricing power, strong free cash flow and sensible management will perform well over the medium to long term.

I have to clarify my view on backtesting. I'm an avid believer in a robustly backtested, systematic investment approach. I do believe however that computer scientists with little/no actual domain knowledge might be easily fooled by spurious correlations and build strategies that overfit to sample data and do not generalise well in the real world. Building an algorithm that beats the market consistently (after ALL real-world costs and frictions) is a problem far more difficult than Level V autonomous driving! An unlevered IR of 1 is the holy grail. The best I've ever achieved (1990-2020, rolling 3yr, unlevered) is around 0.8.

Cyril Bosch

I guess I have been back testing for nigh on 20 years and frankly it has led me astray. Many times. I totally agree on autonomous driving - the markets are far less "bounded".

However I do agree on systematic investing. Which is why stock indices are so attractive. But of course even there they are not predictive. So long as economies thrive, so will stock markets and to a lesser extent commodities.

However the current crisis is a stark reminder that although the world has come through many crises pretty well over the period of the enlightenment there is no guarantee that that will continue.

I think all of this brings me back to the very same rather dull conclusion: the only systematic approach which has any validity in the very long term is wide diversification and regular re-balancing. Globally and over asset classes.

@Marc @Vladimir

Hey guys, thanks for your input and thanks @Chris for the original post. I took what you guys had and added some tweaks. I used TMF which is a 3x leverage bond ETF as the safety asset. I also added a stop loss and a trailing stop loss to try to limit downside risk while also maximizing profits. I am happy with the results and would like to try paper trading with it. I'm relatively new here but I have found that, unfortunately, we can't paper trade through Quantopian. So I was looking around and thought that pylivetrader would work well. However, I hit a roadblock when realizing that Optimize API is not currently supported by pylivetrader. I was trying to re-code to get around using the Optimize API but was having trouble. I'm unsure if there is a live trading platform that supports Optimize API or if Optimize is only something within Quantopian. Any help would be much appreciated! Thanks in advance!

@Austin
You don't need the optimize api for what you are doing.
Simply replace lines 157-162 with this:

    # Order each security in total_weights to its target weight  
    for sec in total_weights.index:  
        order_target_percent(sec, total_weights.loc[sec])  
    # Close any currently held position that is no longer in total_weights  
    for sec in context.portfolio.positions:  
        if sec not in total_weights:  
            order_target_percent(sec, 0)  

I am astonished that anybody could refer to a triple-leveraged 20-year bond as a refuge asset. In the early 1980s Fed Chairman Volcker raised interest rates sharply over a 6-month period. The long bond lost 40% of its value. Go figure, as they say.

@Zenothestoic when I saw the approach, negative skew and the LTCM came to my mind...

@Zenothestoic Well pointed out! I didn't see that the safe asset was changed to the leveraged TMF ETF. I would honestly like to re-elaborate this one, adding some volatility-based logic for the asset switch and a non-leveraged safe asset.

Incidentally, with my son on lockdown with us, we have been talking much about high debt to equity. Me in relation to this model and the "hidden" leverage it supplies, my son in relation to the private equity sector. My accidental discovery here (thanks to one of the participants sorting the debt equity ratio in the wrong direction) is that investing in the highest debt to equity ratios means that you are investing in highly leveraged companies which will or rather can produce outsized returns.

If that leverage is used well - bingo, you will get great performance. It provides the investor with a leveraged return on a non recourse basis. In other words the investor is not taking the risk of margin finance and thus can not lose more than his invested capital.

My son works in M&A and apparently this technique is usually used by private equity firms to boost the return on their equity. They take a small but controlling equity stake in a company but finance the leveraged buyout mostly by debt, usually preference shares on which they get a hefty rate of return.

Typically returns on PE are 20 to 30% and much of this is achieved by the leveraged effect on their relatively small equity participation.

There is definitely a place for leverage but it must be done in an intelligent fashion and based on a real understanding of the fundamental realities behind markets and economy.

It is naive in the extreme to rely on back testing alone - especially since the data available through Quantopian and the ETF market does not cover all market situations. 1980, 1929 to 1933 for instance.

Incidentally, the high debt equity model I put together and intended to trade is holding up reasonably well so far. I'm glad I did not start trading it before the downturn but I still intend to trade it in moderation if I can find the right platform to automate it on.

In the meantime I am whiling away my days investigating Tim Sykes and Penny Stocks.

Just a thought...
Would pulling historical ROE and linearly regressing the data give us a new metric of how well an asset is improving its financial performance? This additional layer could be used as a long-term exit strategy.
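A minimal Pipeline sketch of that idea, assuming Fundamentals.roe and a one-year daily window (both choices are illustrative, not from the original post): the factor fits a straight line through the trailing point-in-time ROE values and outputs the slope, i.e. how quickly reported ROE is improving.

    import numpy as np
    from quantopian.pipeline import CustomFactor
    from quantopian.pipeline.data import Fundamentals

    class ROETrend(CustomFactor):
        # slope of a degree-1 fit through the trailing daily ROE values
        inputs = [Fundamentals.roe]
        window_length = 252  # roughly one year of point-in-time daily values

        def compute(self, today, assets, out, roe):
            x = np.arange(self.window_length)
            # np.polyfit accepts a 2D y (window x assets) and returns one slope per column;
            # NaNs are not handled here, so mask or fill them in a real implementation
            out[:] = np.polyfit(x, roe, 1)[0]

A positive slope would flag companies whose profitability is trending up; ranking or screening on it could then be combined with the existing quality and momentum filters.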

I applied the ROE idea to a mean reversion strategy. It screens for stocks that have a sudden drop from the long-term moving average (200 days). It then buys the stocks with the best ROE from the screened list. Each stock is held for a week.

It is quite successful at identifying stocks primed for a dead cat bounce. Drawdowns are less than impressive but clearly it's picking more winners than losers. It could very much benefit from a stop loss order but I don't have much experience with those types of orders.

I would like to find a way to buy a stock every day and hold it for a week. Can portfolio.positions tell you how long you have been holding your stock? Either that or I copy and paste this code 5 times to trade each day of the week.

Wow, interesting metrics. I don't know if portfolio.positions keeps track of duration, but you can add a dictionary to keep track of it. Probably easier to work with than copying the code multiple times.
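A hypothetical sketch of that dictionary approach, assuming a five-trading-day holding rule and a context.days_held dict (the names and the rule are illustrative only, not part of the original algorithm):

    def initialize(context):
        context.days_held = {}  # asset -> number of trading days held
        schedule_function(manage_holds, date_rules.every_day(), time_rules.market_open(minutes=30))

    def manage_holds(context, data):
        # age every open position by one day; new positions start at 1
        for asset in context.portfolio.positions:
            context.days_held[asset] = context.days_held.get(asset, 0) + 1
        # drop book-keeping for positions that have been closed
        for asset in list(context.days_held):
            if asset not in context.portfolio.positions:
                del context.days_held[asset]
        # exit anything held for a full trading week (5 days)
        for asset, days in context.days_held.items():
            if days >= 5:
                order_target_percent(asset, 0)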

This is completely separate from the strategy I just posted.

This is a clone of the initial strategy posted except I replaced the bond module with my VIX hedging module and the returns are absurd.

Can anyone tell me if the VIX strategy is realistic? I don't think there is a way to get a straight short sell on TVIX or UVXY IRL (TD says the market is illiquid).

I'm currently trying to implement the strategy on my live fund but I can't find an effective way to short the VIX.

Hit me up with any suggestions.

My apologies, I didn't realize my previous post used 2x leverage.

Here is the back test and code that fixed leverage to 1

You could short the VIX future instead. I don't think it's possible to simulate this on Q though (they also stopped supporting futures at the end of 2017, I believe). I'd be super careful with it as well, and only paper trade to start with at least.

Although certainly not ideal, I would recommend using a selection of short-Vix ETF's or ETN's.

@Mike Belliveau: That strategy is exactly what I meant with my comment above! I was implementing it using (as Kristof said) vix ETFs

@Mike, note that in your first iteration (with 2x leverage), TVIX all by itself represented 97.79% of all generated profits. That is almost like playing on a single position even if the strategy traded 408 different stocks which accounted for the remaining 2.21% of total profits.

The RETURN is somewhat 'shaky'. If you change the stock number to 5, or to 15 or 20, the RETURN will vary quite a lot. And the leverage will increase to 2.

Thank you all for the feedback !

@Joakim @Kristof I have come to accept that futures and inverse ETFs look like my only two options. I have run backtests using ZIV and SVXY but the results were poor. The next most inversely correlated ETF to the VIX is TQQQ, which I have been using as the hedge security for my live trading (I'll attach the pure TQQQ/VIX backtest). I'm averse to VIX futures because once VIX spikes and it's time to short it, there is so much implied volatility that the high premiums make it not worth the risk. I have previously lost a lot of money experimenting with short-term VIX options, and capital constraints prevent me from purchasing longer-term contracts, again due to the high implied volatility.

@ Simone It really makes sense. I had the VIX strategy in my pocket and hadn't considered tacking it onto another strategy until I saw the ETF backtest from Austin.

@Guy Essentially the strategy functions as a tail-risk hedge. Although the VIX strategy generated 97% of the profits, it accounts for less than 1/3 of the leverage. The VIX strategy on its own is extremely volatile but has significant jumps (up 1000% due to the corona sell-off). Allocating the majority of leverage to a solid base strategy such as Quality Companies in an Uptrend allows you to limit the drawdowns from the VIX strategy while also priming the portfolio to spike in the case of a sell-off.

@Thomas Of course the degree of diversification will have a major impact on returns. My initial mean reversion post only bought 2 stocks and held them for a week (now that's 'shaky'). The fewer stocks you hold, the more risk and the more reward. 10 seems like a reasonable number of stocks to hold for the base strategy. The reason leverage increases is due to the code in the initial strategy posted. There are two variables you need to change if you wish to change the number of stocks you hold, otherwise the leverage will be multiplied by a factor of however many fewer stocks you use than 20.
These are the two variables to change in order to adjust # of stocks to hold without affecting leverage.
context.Target_securities_to_buy = 10.0
context.top_n_relative_momentum_to_buy = 10

I apologize for taking this off course from the initial strategy, but in the spirit of developing composite strategies, I've meshed together the mean reversion from my initial post and the VIX hedging strategy.

This will be the last back test I post for a while as I don't think the returns can get much better without >1 leverage. Shout out to Connors Research for the amazing initial strategy. I learned much from it.

@Mike, you must have done the following test!

@Guy This is the best I can do in making the VIX strategy work without shorting the VIX(Long TQQQ instead). I have not found a way to prevent that October drawdown but I'm pretty sure it's the date that TVIX first started trading (twas extremely volatile early on) . Lowering the long tvix portfolio weight to .5 will make it back test better.

This strategy works best when set up as a tail-risk hedge for a more consistent strategy like Quality Companies in an Uptrend. The TQQQ non-VIX-shorting version can be implemented live but I'm currently re-evaluating its implementation. I'm going to learn more about VIX futures since it is clear the strategy is only effective when you can short the VIX. If anyone can suggest any other assets that inversely correlate with the VIX, that could be helpful. Does anyone have experience with buying UVXY puts? If so, please reach out! (extremely helpful)

Here is the video where the author talks about UVXY
https://www.youtube.com/watch?v=H4AUJj9WsVQ&list=PLrnSgovOmBKZkn02B2J7tpta_jUxehSPs&index=39&t=0s
He uses an EMA to time UVXY, from what I understand.

If using options, I think the best way to short the VIX is to buy VIX CALL options, but I am not sure if there is data available to backtest them.

@Vladim Thank you for the video! Did you mean sell VIX CALL options? It doesn't make sense to buy calls when we're trying to short the asset.

Does this article line up with what you were trying to say @Vadim? https://www.investopedia.com/articles/optioninvestor/06/newvix.asp Specifically this quote, "Similarly, buying puts (or bear put spreads, or selling bear call spreads) can help a trader capitalize on moves in the other direction."

@Mike
Yes, sorry, if you are trying to short the VIX, you need to buy VIX puts, not calls.
Selling naked options is not a very good idea; it can bite you very badly.
I thought you were trying to use the VIX as a hedge against a SPY or QQQ decline; in that case the VIX will rise, and to capitalize on the VIX rise you want to buy VIX calls.
The only question is timing. If you can get the timing right, it can be very profitable.
Another popular way to hedge is to buy SPY puts, but it is also a question of when to buy them. When you see that SPY has gone down, the puts are typically very expensive already, and you won't get a proper return. So again it is a question of when it is the proper time to buy SPY puts.

@Vladim I am quite confident in the timing of the factors, but only if the type of order is a straight short sell on the VIX. The beauty of the strategy is that it trades long and short VIX based on recent momentum changes. In bull markets it makes significant gains through shorting the VIX. It also performs well in the case of a sell-off since you go long the VIX once recent momentum ticks up (the tail-risk hedge scenario). The VIX is an index just like QQQ and SPY except it is a lot more volatile (and profitable). The 3- and 5-day moving average crossover does a good job at capturing this trend. The only problem is in executing the strategy and finding a way to short the VIX without short-selling.

edit: Just realized you were talking about the timing of the options. It is a bit beyond my expertise, but I did find previous success in buying VIX puts ATM two weeks out. The problem is the premiums on VIX options are so expensive that I'm in no place to risk that much capital trying to experiment and find out.
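For reference, a hypothetical sketch of the 3- and 5-day moving-average crossover described above, applied to a volatility ETP. context.vix_proxy (e.g. a TVIX/UVXY sid set in initialize) and context.hedge_weight are assumed names, and this is not the exact code behind the posted backtests:

    def rebalance_vix_hedge(context, data):
        closes = data.history(context.vix_proxy, 'close', 5, '1d')
        ma_3 = closes[-3:].mean()
        ma_5 = closes.mean()
        if ma_3 > ma_5:
            # short-term volatility momentum turning up: go long the ETP (tail-risk hedge)
            order_target_percent(context.vix_proxy, context.hedge_weight)
        else:
            # momentum down: short the ETP, or stay flat if shorting is unavailable
            order_target_percent(context.vix_proxy, -context.hedge_weight)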

@Mike,

Then the best way to short the VIX really is to buy VIX puts or VXX puts.
You could also make a case for buying the SVXY or ZIV ETFs, but I would be very scared to hold these overnight.
A VIX spike like the one in February 2018 was very bad.
Another popular ETF, XIV, was eliminated during that time. For example:
https://www.marketwatch.com/story/xiv-trader-ive-lost-4-million-3-years-of-work-and-other-peoples-money-2018-02-06

Good morning all.
Nice thread by the way, nice work.
Just two dumb questions:
-Is it using quarterly, anual or TTM data for fundamentas?
-Is the backtest accurate? I mean, while making the bakctest, fundamental data on f.e. 31/01/2020 it would use the data of 31/12/2019, but if we really were on 31/01/2020 would the data of 31/12/2019 accesible and updated on morningstar? I dont know whats the delay
Regards

@motxumotxu JJ Good questions. I'll try to answer...

Is it using quarterly, annual or TTM data for fundamentals? The Morningstar data is generally based on quarterly data. There are some valuation ratios, however, which are TTM. It's best to do a quick check before making any assumptions. If a number seems 4 times larger than a quarterly figure then it's TTM. In this case the ROE number is quarterly and calculated from quarterly net income / average total common equity.

Is the backtest accurate? I mean, while making the backtest, fundamental data on f.e. 31/01/2020 it would use the data of 31/12/2019, but if we really were on 31/01/2020 would the data of 31/12/2019 be accessible and updated on Morningstar? Quantopian only exposes data to a backtest which would have been known as of each simulated backtest day. So, "would the data of 31/12/2019 be accessible on 31/01/2020"? That depends upon when the company made its quarterly report publicly available. Every Morningstar field has two dates associated with it - the asof_date and the timestamp. The asof_date in this case would be 31/12/2019. However, the timestamp is when Quantopian, and presumably the market, first learned about the data, typically 1 day after the filing date with the SEC. This is the first date that this data will be surfaced in pipeline. There is more description of this in the documentation (https://www.quantopian.com/docs/data-reference/morningstar_fundamentals#point-in-time). Therefore, the answer is yes, the backtests are accurate and meant to simulate trades which would have taken place by an individual with information they would have known at the time with no lookahead bias.

Hi Dan.
Thanks for your answer. Very clear. I'll take a deeper look.
Regards.

In a sense of course Dan is right but he probably expresses himself in an unfortunate manner: "the answer is yes, the backtests are accurate". As regards what he is saying about the timing of releases of fundamental data it is of course important not to back test on any data not available at the relevant point in time.

However the "accuracy" of any back test is a dangerous assumption. Bearing in mind the many, many thousands of lines of code in Zipline and the vagaries and inaccuracies of" historical" data.

The slightest change in data or code can have a beneficial or detrimental cascading effect spreading like a virus through any backtest. The only true "accuracy" concerning a given strategy comes with trading it forward. Even then changes of code, methods and data will creep in month by month going forward.

Regarding backtests, I have run an identical strategy on both Quantopian and Quantconnect. This strategy here which I posted above. The strategy yields a return of 30,000 % here on Quantopian and a mere 6,000% on Quantconnect.

I challenge anyone to talk about the "accuracy" of back testing given this context. Furthermore as someone who has been backtesting and trading for almost 30 years I can tell you categorically that the future never looks much like the past.

Feel free to take or leave my reflections as you choose. But my advice would be never to use the words "accuracy" and "backtesting" in the same sentence.

Hello all.
I'd add/change some filters.
For the stock selection, I'd filter by sectors that are outperforming the SP500 as measured by Mansfield relative strength (RSC Mansfield) and only buy stocks that belong to those sectors.
I'd also try to test how the strategy behaves when filtering the stocks to buy to only those that break their 52-week high instead of (or in addition to) the momentum.
I mean, conditions to buy: fundamental filter + sector "power" + 52-week high break + SP500 momentum (+ stock momentum?). If the SP500 doesn't match the technical criteria, stay in bonds (no change).
Can anyone code something similar? (For the 52-week-high part, see the pipeline sketch below.)
Regards.
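A minimal pipeline sketch of the 52-week-high breakout filter asked about above; the 2% tolerance, the use of USEquityPricing.high and the function names are illustrative assumptions, and the sector-strength part is not covered here:

    from quantopian.pipeline import CustomFactor, Pipeline
    from quantopian.pipeline.data import USEquityPricing
    from quantopian.pipeline.filters import Q500US

    class High52Week(CustomFactor):
        # highest high over the trailing 252 trading days (~52 weeks)
        inputs = [USEquityPricing.high]
        window_length = 252

        def compute(self, today, assets, out, highs):
            out[:] = highs.max(axis=0)

    def make_breakout_pipeline():
        universe = Q500US()
        latest_close = USEquityPricing.close.latest
        high_52w = High52Week(mask=universe)
        # "breakout" here means closing within 2% of the trailing 52-week high
        breaking_out = latest_close >= 0.98 * high_52w
        return Pipeline(columns={'breaking_out': breaking_out},
                        screen=universe & breaking_out)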

I find it interesting that if you take out the bonds, the return looks a lot worse.
Bonds play a huge role in this. Why is that? Should we only buy bonds?

Bonds plays a huge role in this . Why is that, should we only buy bonds?

My naive understanding is that it's partially because treasuries are treated as a safe-haven asset. When the stock market smells trouble, people flee to bonds, which drives up their price. The other major reason is that over this entire period we've been in a massive "falling rates" environment. When the interest rate falls the value of existing bonds increases so that the fixed value they pay out matches the new rate. Can rates continue to fall forever? Who knows. We're already pretty close to zero. To me it seems that the same pattern can't continue forever.
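A small illustrative calculation of that point, assuming a 10-year bond with a $100 face value and a 2% annual coupon (numbers chosen only for illustration): discounting the same fixed cash flows at a lower market yield raises the price.

    def bond_price(face, coupon_rate, yield_rate, years):
        # present value of the annual coupons plus the face value repaid at maturity
        coupon = face * coupon_rate
        pv_coupons = sum(coupon / (1 + yield_rate) ** t for t in range(1, years + 1))
        pv_face = face / (1 + yield_rate) ** years
        return pv_coupons + pv_face

    print(bond_price(100, 0.02, 0.02, 10))  # ~100.0 when priced at its own coupon rate
    print(bond_price(100, 0.02, 0.01, 10))  # ~109.5 after yields fall from 2% to 1%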

Mr V Hawk is neither wrong nor naive. In times of great uncertainty people run to Sovereign bonds of a safe haven country and at these times stock and bonds often become strongly negatively correlated.

As to rising interest rates, these do not, in my view, present a problem unless the rise is too steep and too swift, as in 1980 when the long bond crashed almost 40% after Volcker's series of big rate increases.

Even then, simple mathematical analysis shows that 90% + of bond performance comes from coupon not price. In a rising interest rate environment, a fall in price is overcome over time by increased coupons.

I wish I could post my analysis here but perhaps I will post it on my website and link it.

In any event, even in the case of low to zero interest rates you can still expect sovereign debt to spike up in a crisis. Leading to temporarily negative yield if necessary.

The reason is simple: there is nowhere else to go.

Rather like when we briefly saw negative front month crude prices - no storage, no capacity to take delivery.

good gravy!

I am still having a hard time understanding this bond part

    def trade_bonds(context, data):
        amount_of_current_positions = 0
        if context.portfolio.positions[context.bonds].amount == 0:
            amount_of_current_positions = len(context.portfolio.positions)
        if context.portfolio.positions[context.bonds].amount > 0:
            amount_of_current_positions = len(context.portfolio.positions) - 1
        percent_bonds_to_buy = (context.Target_securities_to_buy - amount_of_current_positions) * (1.0 / context.Target_securities_to_buy)
        order_target_percent(context.bonds, percent_bonds_to_buy)

If I am going to buy 3 stocks (context.Target_securities_to_buy) and I have 5 positions in hand, my current stock position count would be 4 (amount_of_current_positions = total positions - 1),
so 3/4 * (1/3) = 1/4.
How does this number relate to the stocks we are going to buy?

@ ZT YE,

Poorly chosen variable names may be contributing to your difficulty understanding the code.

A better name for 'amount_of_current_positions' is 'number_of_current_stock_positions'.
You find the total number of current positions using len(context.portfolio.positions).
But what you want to know is the number of stock positions.
Keep in mind that context.bonds may be one of those positions. So you need to check for that.
If it is one of your positions, then subtract 1 to get the number of current stock positions.

Another poorly chosen variable name is context.Target_securities_to_buy.
A better name for this would have been context.max_number_of_stocks_to_hold.
The reciprocal of this number gives the weight for new stock purchases.

To calculate open slots take the difference between the max number of stocks that you can hold and the number of stocks that you currently hold. If you have open slots , but don't have replacement stocks to fill those open slots (because the market trend is currently negative), then you would fill the open slots with context.bonds.

To calculate the weight to use for the bond order (order_target_percent), simply multiply the number of open slots by the weight that you would have given to new stock purchases.

Example: max number of stocks to hold = 10, current number of stocks held 6, open stock slots = 4, target weight of bonds 40%.
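For reference, here is a sketch of the same bond logic rewritten with the clearer naming suggested above (context.Target_securities_to_buy is kept as-is since it is defined in initialize); it is intended to behave the same as the original trade_bonds, only to be easier to read:

    def trade_bonds(context, data):
        # count current stock positions, excluding the bond position if we hold one
        holds_bonds = context.portfolio.positions[context.bonds].amount > 0
        number_of_current_stock_positions = len(context.portfolio.positions) - (1 if holds_bonds else 0)

        # each stock slot carries an equal weight of 1 / max number of stocks to hold;
        # every open (unfilled) slot gets allocated to bonds instead
        open_slots = context.Target_securities_to_buy - number_of_current_stock_positions
        bond_weight = open_slots * (1.0 / context.Target_securities_to_buy)

        order_target_percent(context.bonds, bond_weight)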

@Steve Jost, many thanks for your explanation.
To sum up it looks like this:

    def trade_bonds(context, data):  
        amount_of_current_positions = 0  # current number of stock positions  
        if context.portfolio.positions[context.bonds].amount == 0:  
            amount_of_current_positions = len(context.portfolio.positions)  
        if context.portfolio.positions[context.bonds].amount > 0:  
            # when we hold bonds plus stocks, subtract 1 from the total positions  
            # to get the number of stock positions  
            amount_of_current_positions = len(context.portfolio.positions) - 1  
        # percent_bonds_to_buy = (max_stocks_to_hold - current_stock_positions) * (1 / max_stocks_to_hold):  
        # the max we can hold minus the current stock positions gives the open slots for bonds;  
        # dividing by the max stocks we can hold gives the percentage to allocate to bonds  
        percent_bonds_to_buy = (context.Target_securities_to_buy - amount_of_current_positions) * (1.0 / context.Target_securities_to_buy)  
        order_target_percent(context.bonds, percent_bonds_to_buy)  

I am still confused by this part:
amount_of_current_positions = len(context.portfolio.positions) - 1
current_stock_positions = len(total_current_positions) - 1, but why minus 1? Does subtracting one mean the rest are bonds? I'm confused.

@ ZT YE,

With that statement you are trying to find the number of current stock positions, not the percentage.
If you have set context.Target_securities_to_buy =10 (a better name as I mentioned is 'context.max_number_of_stocks_to_hold'),
then 'current_stock_position' in your equation above can be a maximum of 10.

Variable names can be whatever you choose, but do yourself a favor and pick a name that distills the essence of the thing.
So instead of calling it 'current_stock_position', maybe call it 'number_of_current_stock_positions'.

@Steve Jost
If it's not a percentage, why 1? Why not minus 2?
I mean, we could have more than 1 unit of bonds?
Say we can hold up to 10 securities, and currently we have 8; what does it mean to subtract 1 here? We only have 1 bond?

@ ZT YE

You are subtracting the number of bond positions.

The number of bond positions is 1 if you own bonds, regardless of how many units you own.
For example you could have 30% of your portfolio in bonds but that would still only be 1 position.
If you have 0% in bonds then the number of bond positions is 0.

Are you getting tripped up on len(context.portfolio.positions)?
Not sure what your level of python knowledge is, but context.portfolio.positions is a dictionary with key:value pairs.
The keys are the securities and the values are various attributes that we don't really care much about at this point.
The len() function counts the number of key:value pairs in the dictionary which is the same as counting the number of keys or securities.
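An equivalent, possibly clearer way to get the same count, shown only to illustrate that context.portfolio.positions is a dictionary keyed by asset:

    # count stock positions by iterating over the keys and skipping the bond asset
    number_of_stock_positions = len(
        [asset for asset in context.portfolio.positions if asset != context.bonds]
    )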

@Steve Jost
I think I kind of get it after your explanation, and I ran the code with log.info().

def trade_bonds(context , data):  
    amount_of_current_positions=0  
    log.info(len(context.portfolio.positions))  
    log.info(context.portfolio.positions)  
    if context.portfolio.positions[context.bonds].amount == 0:  
        amount_of_current_positions = len(context.portfolio.positions)  
        log.info(amount_of_current_positions)  
    if context.portfolio.positions[context.bonds].amount > 0:  
        amount_of_current_positions = len(context.portfolio.positions) - 1  
        log.info(amount_of_current_positions)  
    percent_bonds_to_buy = (context.Target_securities_to_buy - amount_of_current_positions) * (1.0 / context.Target_securities_to_buy)  
    log.info(percent_bonds_to_buy)  
    order_target_percent(context.bonds , percent_bonds_to_buy)  
2005-06-30 20:40 trade_bonds:88 INFO 12  
2005-06-30 20:40 trade_bonds:89 INFO {Equity(23555 [WYE]): Position({'last_sale_date': Timestamp('2005-06-30 19:40:00+0000', tz='UTC'), 'asset': Equity(23555 [WYE]), 'last_sale_price': 44.45, 'cost_basis': 43.330999999999996, 'amount': 119}), Equity(1406 [CELG]): Position({'last_sale_date': Timestamp('2005-06-30 19:40:00+0000', tz='UTC'), 'asset': Equity(1406 [CELG]), 'last_sale_price': 40.7, 'cost_basis': 42.031, 'amount': 123}), Equity(26578 [GOOG_L]): Position({'last_sale_date': Timestamp('2005-06-30 19:40:00+0000', tz='UTC'), 'asset': Equity(26578 [GOOG_L]), 'last_sale_price': 295.40000000000003, 'cost_basis': 274.101, 'amount': 18}), Equity(7677 [TXU]): Position({'last_sale_date': Timestamp('2005-06-30 19:40:00+0000', tz='UTC'), 'asset': Equity(7677 [TXU]), 'last_sale_price': 83.04, 'cost_basis': 80.421, 'amount': 64}), Equity(24518 [STX]): Position({'last_sale_date': Timestamp('2005-06-30 19:40:00+0000', tz='UTC'), 'asset': Equity(24518 [STX]), 'last_sale_price': 17.490000000000002, 'cost_basis': 21.121000000000002, 'amount': 245}), Equity...  
2005-06-30 20:40 trade_bonds:92 INFO 12  
2005-06-30 20:40 trade_bonds:97 INFO 0.4  
2005-07-29 20:40 trade_bonds:88 INFO 21  
2005-07-29 20:40 trade_bonds:89 INFO {Equity(67 [ADSK]): Position({'last_sale_date': Timestamp('2005-07-29 19:40:00+0000', tz='UTC'), 'asset': Equity(67 [ADSK]), 'last_sale_price': 34.122, 'cost_basis': 34.041, 'amount': 160}), Equity(22406 [UPL]): Position({'last_sale_date': Timestamp('2005-07-29 19:40:00+0000', tz='UTC'), 'asset': Equity(22406 [UPL]), 'last_sale_price': 37.88, 'cost_basis': 37.900999999999996, 'amount': 144}), Equity(22983 [FRO]): Position({'last_sale_date': Timestamp('2005-07-29 19:40:00+0000', tz='UTC'), 'asset': Equity(22983 [FRO]), 'last_sale_price': 42.37, 'cost_basis': 42.341, 'amount': 128}), Equity(1406 [CELG]): Position({'last_sale_date': Timestamp('2005-07-29 19:40:00+0000', tz='UTC'), 'asset': Equity(1406 [CELG]), 'last_sale_price': 47.79, 'cost_basis': 42.031, 'amount': 123}), Equity(3660 [HRB]): Position({'last_sale_date': Timestamp('2005-07-29 19:40:00+0000', tz='UTC'), 'asset': Equity(3660 [HRB]), 'last_sale_price': 57.02, 'cost_basis': 56.931000000000004, 'amount': 95}), Equity(8655 [INTU]): Position({'last_sal...  
2005-07-29 20:40 trade_bonds:95 INFO 20  
2005-07-29 20:40 trade_bonds:97 INFO 0.0

...

2005-10-31 21:40 trade_bonds:88 INFO 21  
2005-10-31 21:40 trade_bonds:89 INFO {Equity(25217 [PKZ]): Position({'last_sale_date': Timestamp('2005-10-27 20:00:00+0000', tz='UTC'), 'asset': Equity(25217 [PKZ]), 'last_sale_price': 54.92, 'cost_basis': 54.471, 'amount': 107}), Equity(1539 [CI]): Position({'last_sale_date': Timestamp('2005-10-31 20:40:00+0000', tz='UTC'), 'asset': Equity(1539 [CI]), 'last_sale_price': 116.28, 'cost_basis': 115.26100000000001, 'amount': 48}), Equity(22406 [UPL]): Position({'last_sale_date': Timestamp('2005-10-31 20:40:00+0000', tz='UTC'), 'asset': Equity(22406 [UPL]), 'last_sale_price': 52.6, 'cost_basis': 37.900999999999996, 'amount': 144}), Equity(24840 [ATI]): Position({'last_sale_date': Timestamp('2005-10-31 20:40:00+0000', tz='UTC'), 'asset': Equity(24840 [ATI]), 'last_sale_price': 28.55, 'cost_basis': 30.971000000000007, 'amount': 189}), Equity(13197 [FCX]): Position({'last_sale_date': Timestamp('2005-10-31 20:40:00+0000', tz='UTC'), 'asset': Equity(13197 [FCX]), 'last_sale_price': 49.61, 'cost_basis': 49.581, 'amount': 111}), Equity(7844 [USG]): Positio...  
2005-10-31 21:40 trade_bonds:92 INFO 21  
2005-10-31 21:40 trade_bonds:97 INFO -0.05  

We have bond position = 0, Target_securities_to_buy = 20, amount_of_current_positions = 12,
so (20 - 12) * (1/20) = 0.4, i.e. buy 40% bonds.

Then we have 21 positions in total, so amount_of_current_positions = 21 - 1 = 20,
and (20 - 20) * (1/20) = 0, i.e. buy 0% bonds.


To sum up:

The 1 means that we only buy one kind of bond, which is context.bonds = sid(23870),
but we may pick different stocks, so the number of stocks could be up to 20 in this code.
Whenever the bond position is > 0, it means we bought bonds; otherwise it's 0.

@Steve Jost, thanks for your explanation. I really appreciate it.
I got started with Python not long ago. I am trying to shift from a finance background and go further by learning to code for quant trading. It's a headache learning by myself, and I am glad that Quantopian provides such a great platform, with everyone being so forthcoming and helpful with questions.

@Vladimir, I am researching your code. Thanks.

@ ZT YE - Yes, you got it!

Hi pros,

Just trying to build the pipeline screener for this strategy.

I'm starting out on Quantopian; could anybody help me with how to code the skip days using the Returns() factor?

I noticed that the approach used in this algo is pct_change(), but Returns() includes dividends.

Thanks in advance

@ Ramon Gonzales,

Assuming your momentum look-back is 126 days and you want to skip the last 10 days?

Define a return factor with window length of 126-days and another factor with window length of 10-days.
Then subtract the two factors.

Here's an example:

    # Factor of momentum with skip days  
    roc_126d = Returns(mask=top_ranked_by_fundamentals,  
                       window_length=126)  
    roc_10d = Returns(mask=top_ranked_by_fundamentals,  
                      window_length=10)  
    momentum = roc_126d - roc_10d  

Easy! thanks!

Hey Folks... Here's a long/short adaptation of this algo.

No momentum factor.
No trend indicator.
20 longs - top quality.
20 shorts -bottom quality.

The edge seems to dissipate within the last 5 years. Any ideas what fundamentals could be added to help distribute the alpha more evenly? What enabled the edge prior to 2015, and is there a fundamental factor explaining the behavioural change in the market post-2015?

@Theo,

Without momentum it's just pure value investment and value overall was abnormally cheap the last 10 years. I don't believe there is a combination of factors that would give you that edge you are looking for. People just don't buy cheap and good(-ish) companies these days.

The good news is that it's unlikely to be a paradigm shift, just a temporary trend of mispricing.

I recommend reading the analysis "Is Systematic Value Investing Dead" for a better picture.

@Mikhail, you say:

I don't believe there is a combination of factors that would give you
that edge you are looking for.

I think there are whole families of strategy solutions that would turn out superior results contradicting that assertion.

I have enhanced the trading strategy used here a lot. Changed the stock selection method to select up to 400 stocks, added a fair amount of code to weight those stocks differently, applied long-term governing equations to the mix, added boosters and amplifiers while modulating the use of leverage, and forced the strategy to seek out more volatile stocks. As protective measure, instead of going to bonds or cash during periods of market turmoil, the strategy went to modulating its shorts as well but to a lesser degree. The bet sizing process followed its payoff matrix equation which I often cite in these forums.

The object of a stock trading strategy is to trade enough and profitably to supply more funds in order to trade even more profitably going forward. Thereby putting your trading strategy on steroids and achieving a higher CAGR than the expected long-term market averages.

So, yes, I do think that anyone can easily do better than what has been shown in this thread.

Take a look at the notebook in the following post which shows the results of the 4th walk-forward simulation of this modified trading strategy. It displays phenomenal numbers, even if I have to say so myself.

https://www.quantopian.com/posts/what-i-have-seen-over-the-past-few-weeks#5ee584dffa40b2003edf555c

[Edited] The attached notebook in the above link does not display any of the charts. See the attached notebook instead.

[Re-edited] There must be something wrong for the moment when displaying notebooks. Even the attached notebook shows no charts. Sorry.

[Re-re-edited] June 22.

The attached notebook does not display any of the charts. So, here is a HTML version of the same. This way you should at least see the charts.

https://alphapowertrading.com/quantopian/Ranked_Selection_NB-4.html

@Guy

Absolutely, there are a ton of things you can use in your strategy to do vastly better than the market. I was answering Theo, who removed everything but value factors and questioned why they don't perform well.

@ Tony Morland

Just a question regarding the Markov chain technique you mentioned you were using.
If you have, let's say, 50 factors (PE, reduction of shares and so on), how do you determine which factors are up using Markov chains?
Especially, how do you find the hidden elements, which could be volatility, interest rates or whatever?
Can you point to literature which explains this?
Thanks
Carsten

People just don't buy cheap and good(-ish) companies these days.

I prefer to view it from the other side. Cheap companies are failing at reinvesting to grow shareholder value these days.

@Vladimir

The description of the strategy you presented fits my personal investment goal.
Thanks for sharing.
I backtested your original algorithm with line 9 commented. Why cheat myself.
The results metrics are good.
When I tried to use order_optimal_portfolio() the results got worse.
I checked some positions (Backtest -> Activity -> Positions) and have some questions about your ordering engine:
If TF_filter == False, should all positions in top_n_by_momentum be sold, or only part of them?
I have seen the number of stock positions slowly changing from 20 to 0 over several months in a market downtrend.
Why, at an initial capital of 100,000,
was there negative cash of 68,000 on 2003-03-31, i.e. leverage of 1.68,
and negative cash of 50,000 on 2007-07-31 ...
In one of Joakim Arvidsson's long-only strategies I have seen a negative position in the bond (-80%) together with 20 stock positions.
Maybe we need to fix the engine first before we send a long-only strategy to the sky?

While many people on this thread are trying to implement this strategy on a real live trading system, I saw your post mentioning that you also want to implement this strategy for your personal investment. As probably one of the best coders in this forum, could you please be so kind as to share with us your approach/thoughts on implementing this strategy for live trading?

Thank you in advance.

Caramel

@Caramel

Saying "the strategy you presented fits my personal investment goal"
I mean that going long quality companies with positive momentum in an up-trending market,
and otherwise switching to bonds, is my investment approach.
But I have never even tried to paper trade the strategy.
Zenothestoic was very close to live trading the strategy through Quantconnect.

@ Vladimir

Thank you for the clarification.

@ Caramel,

I got this up and running in a live environment through the Interactive Brokers API. I took it on as a covid-19 lockdown exercise. I decided to switch from paper to live trading with it a few weeks ago when the SPY SMA crossover indicated we were back to bull. Maybe that was a bit too early; I'm currently down a little, but I don't have a life-changing amount of money running with it, just enough to make it interesting ;). I've decided to run with it, as my whole motivation was to trust the algorithm and take the emotion out of trading decisions.

Some things I learned porting this to IB are below. Maybe these aren't exactly what you are looking for as they are implementation details rather than refinements to the algorithm:

  • The IB API is completely different. Porting involves starting from scratch essentially.

  • The IB API uses asynchronous calls, I wasted quite a bit of time trying to figure out how to handle this properly. However, there is an open source set of wrapper classes out there though that make things synchronous and a lot easier.

  • The Morningstar data used in Quantopian is expensive in IB, over $14 per month. Given the amount of money I have involved it didn't make sense to subscribe to this, especially considering I'd only be using a small portion of what Morningstar gives. IB gives access to basic fundamentals through the API; I have had to do some additional calculations to get the same values as Quantopian/Morningstar. I haven't been able to get ROIC quite right. I'm hoping the value I have for it is close enough not to make a huge long-term difference to the combined quality metric.

  • I found instances where Quantopian wasn't actually using the correct Morningstar data at given points in time, it could be a year or more out of date. My live trading algo should be "better" but it remains to be seen how much this problem in Quantopian skewed the back testing results.
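For anyone curious, a hypothetical sketch of what the synchronous-style wrapper approach can look like with the open-source ib_insync library (named later in this thread); the port, client id and symbol are placeholders, and this is not the poster's actual port of the algorithm:

    from ib_insync import IB, Stock

    ib = IB()
    ib.connect('127.0.0.1', 7497, clientId=1)  # TWS / IB Gateway paper-trading port

    contract = Stock('SPY', 'SMART', 'USD')
    bars = ib.reqHistoricalData(
        contract,
        endDateTime='',
        durationStr='6 M',
        barSizeSetting='1 day',
        whatToShow='TRADES',
        useRTH=True,
    )
    closes = [bar.close for bar in bars]  # e.g. feed these into the SPY trend filter

    ib.disconnect()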

@ El Rojito

Thank you for the information.

I think it would be useful to many other people on Q trying to implement this strategy, such as @Zenothestoic, @seers quant, @Shaun Murphy, @Carsten and @Peter Harrington too.

@ El Rojito

Did you have a look at backtrader?
It has an integration with IB.
From its methods it looks quite similar to Quantopian/Zipline, but I never took a closer look at how difficult it would be to port a strategy.

@Carsten

I need to revisit backtrader, I had gone all in on the IB API initially and the library I found, ib_insync, is more aligned with that. Backtrader has some impressive advanced features however that I would like to experiment with now that I am bit more comfortable with live trading.

I don't want to take this thread too far off topic; it is truly excellent. I am hoping my original post was saved by its discussion of getting Morningstar data in a live environment, given it is so integral to the algorithm here on Quantopian. However, feel free to message me directly if you wish to discuss further.

Is anyone running this strategy successfully in live trading? If yes, please provide some stats around it.

That's what I really want to know... live trading. It seems it's backtesting-only on Quantopian. I tried a zipline config but I couldn't use Fundamentals to get the ROE. Did anybody try to use this strategy with zipline?

Maybe a completely dumb question, but I am trying to understand these lines in the algorithm posted earlier:

sma_1 = SimpleMovingAverage(inputs=[USEquityPricing.close],window_length=200, mask=universe)
sma_2=SimpleMovingAverage(inputs=[USEquityPricing.close],window_length=2,mask=universe)
sma_change=(sma_2-sma_1)/sma_1 #momentum factor

recent_sma_zscore = sma_change.zscore()

How and why are you calculating a z-score on the SMA change?

I tested the algorithm for the last 2 years (01/01/2018 to 05/31/2020) but the results were not very exciting. We have to find the relevant information about the securities and maybe change it from time to time. I believe the behavior of the market changes from time to time; some factors are relevant in one period and others are not in another.

@Luiz, you say:

“... but the results were not much exciting …”

I would tend to disagree with that. You could extract a lot more from the above strategy. However, you might have to change the perception of what is possible and how far you want to go. A two-year simulation is not enough data to show the merits of a trading strategy. Not enough diversity, not enough time to see underlying trends (should you need some), not enough market events to generalize. Portfolio management is a compounded return game and time is a major ingredient in that equation.

The structure of a trading program can predetermine how it will trade as if you were designing some mathematical contraption of what your program might do. This will hold for a prescheduled rebalancing portfolio.

I also disagree with:

“... I believe the behavior of the market change from time to time
...”

You design a trading strategy that does not adapt to market data or that is poorly designed and the market is wrong, the market has changed because your model does not fit anymore. How convenient. Is it so hard to accept that what you designed is a poor model and that you need not change the market or see it differently but change the trading strategy instead.

To gain an idea of what could be done, look at the following file:

https://alphapowertrading.com/quantopian/Ranked_Selection_NB-4.html

It is the HTML version of its originating notebook. Even though that simulation was the 4th short-term walk-forward, its last two years showed phenomenal results. Way beyond what we usually see on Quantopian and kind of contradicts your statement: “... for the last 2 years ... results were not much exciting...”.

The trading principles used, in the above HTML notebook, include modulated leveraging, shorting in periods of market turmoil instead of just going to cash or bonds among other things. The strategy does take risks, but still manages reasonably well.

Nonetheless, it is always a matter of choice as to how far you want to go. Should you find the strategy beyond your level of risk averseness, then simply scale it down to a more acceptable level. See how the strategy progressed with its added risks in its previous iterations:

https://alphapowertrading.com/quantopian/Ranked_Selection_NB.html

https://alphapowertrading.com/quantopian/Ranked_Selection_NB-2.html

https://alphapowertrading.com/quantopian/Ranked_Selection_NB-3.html

What I would say is: If you go for conventional trading methods, you should get what should be expected: conventional results.

However, if your trading strategy does have a positive edge above its leveraging costs, you can compound your performance higher by recycling every penny generated. Simply, it would produce something like this: \(F(t) = F_0 + \$X = F_0 + \sum (H \cdot \Delta P) = F_0 \cdot (1 + g + \alpha - exp_t)^t\). If you want to exceed market averages, it would be sufficient to have \(\alpha - exp_t > 0\), which is what the above HTML files show. The higher your alpha, the better. One could also point out: no alpha, no candy.

BTW, I also did the HTML file: Ranked_Selection_NB-5 which pushes the performance level even higher. But, I am not showing that.

Yes, anyway, it's just a start; I'm learning a lot with these algorithms.

@Guy Fleury:

You design a trading strategy that does not adapt to market data or that is poorly designed and the market is wrong, the market has changed because your model does not fit anymore. How convenient. It is so hard to accept that what you designed is a poor model and that you need not change the market or see it differently but change the trading strategy instead.

APPLAUSE! :D

I totally agree with you, and that is one of the main reasons why I think technical analysis strategies fail over time.

By the way, I've tested some changes on this algo, including some volatility logic, and obtained far better returns with a lower drawdown (around 30%), testing over the last 20 years or so.

ps. Guy: I would love to have a chat with you. Any way we can get in touch?

Curve fit nonsense, the whole thread. Fools dancing on the deck of the Titanic. Including my own efforts.

I tend to agree.

I stated before that the modifications to this strategy (which more than tripled it in size) have made it trade mainly on market noise (which it also did before).

On such a premise, it almost surely precludes over-fitting in the conventional sense.

For that matter, it would not even make sense mathematically to ascertain any kind of over or under-fitting in such a scenario without providing corroborating evidence of some kind. Something more than just an “opinion” would do. I would accept any academic paper, even on the flimsiest of evidence, demonstrating that trading on market noise will generate alpha and lead to over-fitting on prolonged time intervals. At least, there would be some data making that point.

I could see an exception in the form of some outlier, an extremely lucky streak for instance. But an outlier might not, in all probability, stretch the extent of a 17-year time interval. To statistically over-fit 140,000+ trades over the life of a portfolio would be stretching it really really way beyond realistic expectation probabilities.

The expected value of trading on market noise is very simple to determine, it has been so for centuries. It is zero.

We get: \(F(t) = F_0 + \$X = F_0 + \sum (H \cdot \Delta P) \to F_0\) which implies \(\sum (H \cdot \Delta P) \to 0\), meaning that the strategy's whole payoff matrix, whatever its composition, would tend to zero. Even more so the longer you played the game.

There is no edge there, and understandably, it also means that \(\sum (H \cdot \Delta P) = n \cdot x_{avg} \to 0\) where your average net profit per trade tends to zero since n is certainly not zero, nor will it tend to zero. In fact, in trading, n is a monotonic ever-increasing function, it only goes up.

Designing a long-term winning trading strategy would uphold the hypothesis that the market is not purely random but does exhibit some secular underlying trend. At least in the historical US market. Designing a long-term trend-following strategy in an upward secular trending market appears as the way to go. And since the market does not go up all the time, you put in some protective measures for periods when the market is identified as declining. Just common sense stuff.

Some think that because they can't do it, others can't either. That is very sad and, (I will use a kind word), “shortsighted”. You will always find someone doing better, just as you can easily find people doing worse. Still, overall, you remain the one ultimately making all the decisions and a trading program is just a means to automate those decisions you might think are to your own benefit. You will win some and lose some. That is the promise of trading on quasi-market noise. If you want more than that, you will have to do more. That too is very simple. At least, you will need to do it differently.

I would reiterate what was said in the previous post:

“If you go for conventional trading methods, you should get what
should be expected: conventional results.“

It is your trading strategy that has to extract from the market every penny it is going to make. And the market will not make it easy for you. You will have to work for every penny. You will also lose or pay for every “mistake” you make. All you can do is make a bet based on whatever data you have and determine later if it was productive or not.

There is only one person that needs to be convinced of what your trading strategy can do, no other, and that is you. Without that, your strategy is not worth much if you do not even understand or have enough confidence in your own work to know what your strategy really does. But then, everybody has to make those trading choices and live by them.

All my modifications to this strategy deal with the payoff matrix and its equivalent functions. I make the program answer to these equations. Because of the time element, these functions are accepted as dynamic and chaotic. They will change over time the value of their controlling parameters at the market's whim or your own.

Some confuse over-fitting with leveraging. My version of this program uses leveraging. The amount of leverage used is printed on the equity charts. All 4 HTML files presented saw leveraging hover around 1.5x to 1.6x.

As was said in my last post, if \(\alpha - exp_t > 0\), it can become worthwhile leveraging a portfolio. As long as you cover more than the added leveraging expenses, your trading strategy can increase its overall return. What those 4 HTML files showed is just that: you increase the leveraging slightly, you should get higher returns. Otherwise, why would you ever even consider leveraging your portfolio in the first place?

Those 4 HTML notebooks mostly show that adding leverage to a positive alpha strategy can be a means to achieving higher profits. Evidently, there are added operational costs to it, just like in any other business.

The strategy does not demonstrate over-fitting. But it does show the tremendous impact leveraging can have on the final result.

Leveraging is an administrative decision. It is not your program which out of the blue will decide to use some. It is you coding its use and magnitude based on whatever data you consider relevant to the task. And with a 1.5x to 1.6x average gross leverage, the strategy is not in the over-leveraging business yet.

I do these simulations to answer the questions: How far can this trading strategy go? How much leverage can it take? Do I want to go that far? Can I accept that level of added risks and added expenses? Those questions can only be answered by doing those simulations and should be part of anyone's battery of tests for any “worthwhile” strategy. Otherwise, how could you ever know a strategy's full potential, strengths, and weaknesses?

Some might be interested in my last post in the following thread: https://www.quantopian.com/posts/what-i-have-seen-over-the-past-few-weeks#5f288fd14c72e60015da31c4 dated Aug. 3rd. It is a continuation of what has been previously presented here and elaborates on the inner workings of this modified trading strategy with yet another walk-forward (its fifth). Hope it can help some.

Managed to put out more explanations surrounding the above-cited strategy in the following post: https://www.quantopian.com/posts/what-i-have-seen-over-the-past-few-weeks#5f3ab8722471750011004c8a

The post puts the emphasis on the rebalancing portfolio's payoff matrix equation where we can make a reasonable estimate on the number of trades such a strategy can perform over its entire trading interval. It can even be used to make long-term projections on the expected value of n: \(E[n]\).

The post ends with the notion that the part of the payoff matrix equation that might have the most impact is the trading unit function u(t) which can be of your own design. It can be an expression of what you want to accomplish. Nonetheless, profits will be controlled by \(E[PT]\) which represents the ability to extract a real averaged positive edge from the gyration of price movements. And since \(E[n]\) will tend to be large over the years, one should look for anything that is statistically significant. There are a lot of solutions to the payoff matrix equation. Therefore, your solution can be as good as mine if not better.

I am reminded of Henry II, Plantagenet King of England. He had some good lines.

Here is my latest walk-forward, the 6th of its kind for this strategy. I used the same program version as illustrated in notebook 5 (see https://alphapowertrading.com/quantopian/Ranked_Selection_NB-5.html). I wanted to maintain the leverage at 1.60, just as in notebook 5. That simulation was from 2003-01-02 to 2020-06-11 while the new backtest ran from 2003-01-02 up to 2020-08-31 (total 210 months).

The portfolio metrics came out about the same. We can compare the captured metrics of notebook 8 to notebook 5. A prior version of the table below was published in Another Walk-Forward where results for notebooks 1 to 7 were first presented with some context.

Again, the strategy did not break down. The program continued to answer to the payoff matrix equation as illustrated in prior posts, generating more money mainly due to the added time.

Here is the link to notebook 8:

https://alphapowertrading.com/quantopian/Ranked_Selection_NB-8.html

Of note, since leveraging was used, an estimate of the costs incurred was recorded on the equity chart. That amount should be deducted from the final results. As can be observed, leveraging fees did not destroy the strategy, nor did the added time.

Guy. For the love of god, please stop talking. You've been told multiple times by members of the community and Quantopian employees that if you're not sharing code, your contributions are not welcome here.

Can somebody from Quantopian please ban him from talking in the forums or something, he does nothing but share screenshots of backtests he's created with associated maths that means nothing and just serves to make him feel smart.

As Jamie McCorriston and Joakim have now both said

@Guy Fleury: Multiple participants in this thread have expressed
frustration with the sharing of screenshots instead of attaching a
backtest. Please refrain from sharing screenshots built on top of the
shared work in this thread. You are entitled to keep your work
private, so if you don't want to share, that's fine. But please don't
share screenshots in this thread as it seems the intent of the thread
is to collaborate on improving the algorithm.

Guy is just a failed trader/investor/mathematician/statistician or whatever he wants to call himself. He is stuck in a loop where he thinks the work he did years ago is still relevant. All he is doing is trying to put his message at the end of this famous thread so any new visitor will see it, which will drive traffic to his website. That's all he is doing. He does not have any substance whatsoever in his posts. My advice to anyone reading his posts: there is absolutely nothing in them. Save yourself some time and read the posts of the people who actually do the real work here.

Hi Jamie Veitch!

Congratulations on your position as Head of Algorithmic Trading at KCL Asset Management.
I would really like to see your own contributions to improving the "Quality Companies in an Uptrend" code,
rather than criticism of Guy R. Fleury, the author of 8 books on algorithmic stock trading,
which is not the topic of this thread.

@James, every program we design or modify has objectives and constraints. There comes a point where giving away your code is simply ridiculous for the simple reason it has become valuable. And I think that most who do design more valuable programs than average are not always ready or willing to share them freely. What I see on Quantopian are often half-baked programs, but that is an opinion I try not to express when I see those.

I use math as an underlying rationale for my strategies. The math is relatively simple; it has been around for generations. There is nothing new in the math I use. Nonetheless, there appears to be some statistical advantage in the way I use it. My views of markets are more like Mr. Buffett's than anybody else's. The reason is simple there too: if your trading strategy cannot survive over the long term, it is practically worthless.

You might have noticed that all the equations I have used have an equal sign, and that is not some kind of opinion. You should also have noticed that the equations I use have more than one solution. They have been given away freely. My simulations are simply a demonstration of their application in a portfolio setting, in a way validating their use. If the equations were of no value, my trading scripts would fail too.

All that I have presented, using the same Python tools and the same market data as everybody else, is that it is possible and doable to do better than just hoping to reach some long-term market average.

Some do not like it that I do not share my programs. I can understand that. As I have said many times before, I do not want to be responsible for someone losing money while misusing code I might have “shared”. Why should they not do their own homework, as I do, to fully understand what is being done? I have presented in these forums a dozen different ways of doing it. And there are a lot more.

My article: https://alphapowertrading.com/index.php/2-uncategorised/357-the-making-of-a-stock-trading-strategy showed that just the stock selection process could have a very large number of possible outcomes. The case was made that there could be more than \(10^{400}\) possible combinations, and that is a huge number. I only presented one of those possible outcomes using a highly modified version of the strategy in this thread. What I see is that anyone could make their own unique version and have similar results or better. It would demonstrate that each person trying to design their own would barely scratch the surface of possibilities and still make outstanding programs. Why should they not design their own strategies with their own understanding of how they view their market data?

When has any hedge fund given away their programs? Ask Jim Simons (Renaissance Technologies) for their programs and see what they answer. Simons achieved a long-term 66% CAGR, 39% net of fees. I am not there yet, but I am working on it. Or ask Steven Cohen (Point72) if he would share his firm's programs. And while you are at it, why not ask for his best programs, so as not to lose any time doing what should be your homework.

A trading program is just like a recipe. You make it public, it becomes free with no further claims to its IP. Another simple question: why would anyone pay for it when it is available free on the web? I learned my lesson in the past, it took only hours after sharing a program to see it misused. I am not going to do that again.

So, you have my point of view, everyone should do their homework.

@Guy

I understand your wish to keep your IP secret, but it should be noted that companies such as Point72 and Ren Tech tend not to spend their time spamming forums - which were developed to encourage collaboration on code - with useless information designed to make themselves feel better.

They most definitely do not write paragraphs which bury the actual collaboration between dense essays - on what I'm sure is interesting maths - and make the collaboration, and following the thread, harder for everyone else.

The best way to keep your IP secret and not have it misused is not to talk about its contents publicly. You seem to want to live in the middle ground where you get the praise that comes with sharing IP in a public forum whilst not actually sharing anything. Please do us all a favour and do what Ren Tech and Point72 do: keep your thoughts behind closed doors.

@Jamie,

If you don't have anything to add to this model, then your latest posts may be considered a distraction from the topic in the direction of your own verbiage.
Please delete them and don't do that in the future.

Currently working on some drawdown improvements (the drawdowns are rather serious, though less bad in other editions). I apologize for the mess of code since I've been trying different things, and I just wanted some advice on what to improve further. Current editions reduce the drawdown to around 30% at the expense of a slightly lower annual return. Some ideas that have worked: keeping lists so stocks are not bought immediately, combining IBD buy and sell days with raised stop losses, and looking at slightly different factors during different market regimes.

I have always been puzzled by Mr Fleury. Perhaps he is just an algorithm and not a real person at all. Nobody has ever been able to get through to him on any level. He just talks.....and talks....and talks....To what end is a mystery.

@John

Quite interesting. But the leverage is often near 2. If you can keep the leverage <= 1, would the results be more realistic?

@thomas
You can uncomment the rebalancing part to make it closer to one, but I realized I had a pretty embarrassing typo that explains a lot of the previous difficulty I'd been having in lowering drawdowns. Will have to go through previous backtests and see if fixing this typo improves returns.

@john tzu

The same algo as above with only one line commented out:

### set_slippage(slippage.FixedSlippage(spread = 0.0)) # why cheat yourself?  

and added

def before_trading_start(context, data):  
    record(leverage = context.account.leverage, pos_count = len(context.portfolio.positions))  

to visualize leverage.

Too many unused variables and names.

The performance may be better if you lower initial capital.

Move trading closer to open.

I see the code is not controlling the leverage properly.
The best way to fix that is to execute all orders except the stop loss in one place: "trade".

@vladimir Yes, I've been trialing a lot of different ideas, so I haven't yet cleaned it up. Performance does significantly improve with lower initial capital, but since the strategy has high turnover and does entries around the close each day, a better way to deal with slippage would be to use something like what Fleury once mentioned and buy in several batches based on some signal. Higher initial capital (when accounting for slippage) would require a different method to decide entries.

@John, I second @Vladimir's observations. Commenting out the no-slippage setting greatly reduced the strategy's performance; in fact, the overall total return was reduced by 99%! We can ignore slippage in a backtest, no problem; it can help give us an idea of the built-in alpha.

However, the market will not be so compliant. Percent slippage is a small number per trade, and yet most of the return came from not accounting for it.

I could not run all the tests I wanted since the backtest analysis would not complete (that is in a way frustrating). I wanted the output of round_trips=True. I had to reduce the time interval in order to get an idea of the direction of those numbers. But for me, that is counterproductive since my interest is in the long-term survival of a trading strategy.

Nonetheless, I would add to @Vladimir's comments that the strategy is “delicate”. For instance, raising the number of stocks to 50 in order to reduce the bet size and volatility blew the portfolio up. Not by a small measure. Overall return was -3,331% with a drawdown of -1,121%! That is 11.21 times the initial $10M stake. So, there might also be some work to do there. I did not try to raise the number of stocks any higher.

With the no-slippage on, and the small number of stocks, your program version gets to trade millions of shares in the last 30 minutes of the day. A lot of orders get partially filled. That might not be that realistic using market orders. However, in real life, you would be able to spread that volume over a few days or weeks. It would change the way you will have to look at moving prices and fills. As a first step, may I suggest moving your trading decisions to the beginning of the day in order to give more time for trades to fill.

@guy
The slippage problem is somewhat reduced when I use 50 stocks since, as you said, the lot size decreases.
I did not have the same problem you did upon raising the number of stocks to 50, but I think this might be because my code was rather confusing and requires you to change the stop loss function as well, since that is affected by the number of stocks sold. I have moved my trading decisions to the beginning of the day per your suggestion.
Attached is a version that allows you to change the two variables context.Top_n_roe_to_buy and context.Target_securities_to_buy to set the top quality stocks and the number to buy, respectively. As would be expected, choosing slightly lower-quality stocks (by increasing these numbers) decreases returns.

I also found that when the leverage is controlled to <= 1, the total return is only about 6000%. But this is more realistic.

Higher leverage will create a huge total return. Is this the effect of Einstein's 8th wonder of the world? :-)

@Thomas Oh, could you tell me the max drawdown? If drawdowns decreased to say 20% and it still had a 6k% return (about 27% annually), that's still significant outperformance.

Here it is. The drawdown is < 22%, not bad. :-)

I am not sure I can say this strategy's annual return is about 27%. One can see there is a huge jump (I call it the Donald Jump, or Fed Jump?) from 2020. Such a jump could happen once in 100 years.

@Thomas

One third of the gain comes from this line:

 set_slippage(slippage.FixedSlippage(spread = 0.0))   # why cheat yourself?  

@Vladimir
You are right. When slippage is taken into account, the total return is reduced from about 6400% to about 4500%.

@john tzu
What does the name of the function 'dg' mean?

ROE can be a poor quality metric. If ROE is poor, there's a good chance the company's quality is poor, but if it is "good" that doesn't mean the company's quality is "good". The reason is that ROE can be heavily skewed by various accounting issues such as stock repurchases, asset write-offs, and high debt.

Stock repurchases lower the amount of equity (the denominator), which can make ROE improve even if earnings are the same as last year. If a company shuts down a plant or decides an acquisition they made is worth less, they will reduce the value on their balance sheet by lowering the value of the asset AND the total equity in the company. That's hardly an indication of quality, but it results in higher ROE. And lastly, higher debt magnifies ROE. A company can only be financed by two types of capital: debt and equity. If the company chooses to issue a ton of debt to finance acquisitions or buy back stock, the total capital of the company will be dominated by debt rather than equity and ROE will increase.

It's more difficult, but if you can divide operating earnings (pre-tax) by assets minus cash you'll get a cleaner number. ROA is a nice metric but it suffers from a similar problem because it includes cash. If I'm comparing two companies in the same industry, I want to know about the returns they get on their operating assets (factories, offices, etc.), not on their cash balances. Many tech companies have huge piles of cash, so it's really important to adjust for that issue.
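A rough Pipeline sketch of that cleaner ratio (pre-tax operating earnings over assets net of cash) might look like the following; the Morningstar field names are assumptions on my part and may need adjusting to the identifiers actually exposed:

# Sketch of an "operating return" factor: EBIT / (total assets - cash),
# as an alternative to ROE.  Field names below are assumptions; check the
# Fundamentals reference for the exact identifiers available.
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.filters import Q500US

def make_pipeline():
    universe = Q500US()
    ebit = Fundamentals.ebit.latest                        # assumed field: pre-tax operating earnings
    assets = Fundamentals.total_assets.latest              # assumed field
    cash = Fundamentals.cash_and_cash_equivalents.latest   # assumed field
    op_return = ebit / (assets - cash)
    return Pipeline(
        columns={'op_return': op_return,
                 'op_return_rank': op_return.rank(mask=universe)},
        screen=universe,
    )

Ranking the result keeps it on the same footing as the other rank-based factors used throughout this thread.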

Building Blocks For Your Stock Portfolio

No one seems to be much concerned by the stock selection process used when it has a major role to play over the long term. First, let's set “long term” as 15 years or more. I would prefer 20-30+ years, but we do not have that much data available.

I will use Dan Whitnable's version of the initial program as highly representative of the various versions I looked at. I could have used other versions, but I do like Dan's programming style.

A Trading Strategy

The strategy has a “self-proclaimed” definition for “quality” and “trend”. The trend is up if the SPY 50-day SMA is higher than the 200 SMA. The trend is down otherwise. A simple SMA crossover triggering system used by many of the program versions. Whereas quality was set as the sum of the ranked scores of stable earnings, strong balance sheets (low debt), high profitability, high earnings growth, and high margins. Conditionals were applied to extract stocks from the selected stock universe. The portfolio was then scheduled for periodic rebalancing.

We assume it is all good and do not even question the premises since they all sound logical and reasonable. But, are they really?

Redesigning It

I questioned everything in this strategy, even the stock universe used, to gain a better understanding of its structure and all its nuances. I ended up modifying every important aspect of it: changed premises, initial assumptions, stock selection method, trading logic, goals, and methods of play. I added modulated leveraging with boosters and amplifiers where I thought they should apply, with the general rule that they be mostly profitable, even if at times they are not.

Trading, by its very nature, and due to the high number of executed trades (>100,000), becomes more a statistical undertaking where playing unknown and developing averages tend to dominate the scene. The most important of them is the average net profit per trade \(x_{avg}\). Evidently, you end up with a profit if, and only if, \(x_{avg} > 0\).

I like designing trading scripts and pushing them beyond their limits (using whatever) in order to scale them back later, knowing that they could go further if wanted. All I can do is make a simulation of those trading principles and rationales to see if they would have worked on past data, and maybe from the simulation make estimates of how much such strategies might produce going forward. Like in many other training and/or forecasting endeavors.

Simulations, once done, need the test of time as corroboration going forward. It might take years to show a strategy's live merit. It is why we do simulations in the first place. We want to know now if the designed contraptions have value.

Understanding Better

Simulations and their variants can provide a better understanding of the trading logic and interactions between all the variables having some impact on a strategy's outcome. The future will be different, we all know that. But still, a program is code and it will execute its commands. Will the market be the same going forward? Absolutely not, but your program might be the same if not modified along the way!

We are all entitled to trade whichever stock is out there using whatever trading methods we have. Therefore, again, why choose those stocks? Why choose that particular stock universe? Why force that particular selection when there are trillions and trillions of other possibilities? There are even more basic questions to ask.

The Portfolio's Payoff Matrix

The point I raised using the portfolio's payoff matrix equation is that there IS an equation for a rebalancing portfolio:

\( F(t) = F_0 + \displaystyle{ E[n ∙ \frac{\Sigma (H ∙ \Delta P)}{n}]} = F_0 + y ∙ rb ∙ j ∙ E[tr] ∙ u(t) ∙ E[PT] \)

What is the average long-term probability that the expected average net profit will help you exceed your long-term objective \(P[E[n] ∙ x_{avg} > z]\)? And a corollary, how could you improve on z? But first, let's look at the stock selection process itself.

In the above payoff matrix, \(\Delta P\) is simply the price difference matrix of your selected tradable stock universe. It has for origin the price matrix \(P\) which is the same for everyone. If there is one thing in the payoff matrix that you do not control, it is the price matrix. Any of the following stock universes could have been used: “Q500US, Q1500US, QTradableStocksUS, Q3000US, USEquityPricing”, and each would have provided a different answer to the applied code.

Trade Timing

The timing of most trades and their respective quantities would have been different simply due to the selected universe. The outcome would even change if you changed the starting date by a month, a week, a day, or even by an hour for that matter. As you would increase the size of the stock universe, the ranked set of selected stocks would not be identical at each rebalancing, nor would their actual rankings. Yet, the strategy's trading logic would be the same.

In my case, it would have taken the 400 highest ranked momentum stocks of the lot, to which would be applied my tailored trading unit function, a variant of the multitude of other possibilities. This says that everyone could have a different solution to the problem and win. So, my suggestion would be: design your own and do not make it public. At the very least, do not give away your best solutions. My intent is to help you design your own solution and not to give you mine. Without understanding what you will do and why, what level of confidence would you give your trading strategy solution, and what would be its real worth?

Rebalancing

In previous notes, I emphasized that we could make an estimate as to the number of trades a rebalancing strategy might make over the years. Its formula was simple: \( E[n] = y ∙ rb ∙ j ∙ E[tr]\), where \(y\) was the number of years the strategy would run, \(rb\) the number of rebalances per year, \(j\) the number of stocks in the portfolio, and \(E[tr]\) the expected turnover rate. The bet sizing function was defined as \( u(t) = \displaystyle{\frac{F(t)}{j} = \frac{ F_0 + \Sigma (H ∙ \Delta P) }{j} } \) with \(F_0\) the initial capital.

What does it all say? It says that initially, all you have is your initial capital that could be moved around according to the ongoing value of the trading unit function \(u(t)\) and that this function is subject to your ability to generate profits. In fact, generating alpha is totally dependent on your trading strategy H. The higher the average net profit per trade, the better since your portfolio is highly dependent on it: \( F(t) = F_0 + y ∙ rb ∙ j ∙ E[tr] ∙ u(t) ∙ E[PT] = F_0 + E[n] ∙ x_{avg}\).
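As a quick numerical illustration of these two relations (all inputs below are assumed, not taken from any backtest in this thread):

# Back-of-the-envelope use of E[n] = y * rb * j * E[tr] and
# F(t) = F_0 + E[n] * x_avg.  All inputs are assumed for illustration.
y = 17        # years traded
rb = 12       # rebalances per year (monthly)
j = 20        # stocks held
e_tr = 0.40   # expected turnover rate per rebalance (assumed)

e_n = y * rb * j * e_tr
print("expected number of trades E[n] ~", int(e_n))   # ~1632 trades

f0 = 10e6      # initial capital (assumed)
x_avg = 5000.  # average net profit per trade in dollars (assumed)
print("projected F(t) ~", f0 + e_n * x_avg)            # ~18.16 million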

Moreover, the more years you manage a portfolio, the higher the expected outcome. Increase the number of stocks in the portfolio and you should expect higher profits, but this move also decreases the bet size. Increase the number of rebalances and it should improve performance, but this would tend to reduce the average percent profit per trade. As if there were always some kind of trade-off to be had.

Seeking Volatility

Nonetheless, here is an interesting observation. In a rebalancing portfolio such as this one, and keeping all other things equal, increasing the number of stocks to be traded will have little impact since the bet size will be reduced proportionally \( u(t) = \displaystyle{\frac{F(t)}{j} } \). For instance, doubling the number of stocks will halve the bet size: \( F(t) = F_0 + y ∙ rb ∙ (2 ∙ j) ∙ E[tr] ∙ \displaystyle{ \frac{u(t)}{2}} ∙ E[PT] = F_0 + 2 ∙ E[n] ∙ \frac{x_{avg}}{2}\). Nevertheless, the move would tend to reduce the portfolio's volatility simply due to the smaller bet size.

Also, the quest to reduce volatility at all costs might not be the best route for a trader. Volatility should be sought since
\( E[PT] = \displaystyle{E[\frac{\Delta_i p_i}{ p_i} ] = E_{avg}[r_i]} \). This leads to the notion of an average return on the bets made since each bet was initially designed to be the same, a fixed fraction of equity (0.25% ∙ F(t)). Therefore, what should be maximized is: \( {max}\, E_{avg}[r_i]\).

One of the easiest ways of doing this, with little effort, is to give your trades more time to appreciate.

As Jesse Livermore once said: “It was never my thinking that made the big money for me. It was always my sitting.” Turns out it was pretty good advice since there will be a lot of sitting going forward.

I have noticed that the fundamental data used in these algorithms is frequently out of date. Consider the attached notebook. This shows that as of the end of May 2020, the long term debt to equity ratio being used for ABBV was dated back to March 2018.

I am not sure if the problem always lies with Quantopian or Morningstar. I have noticed data is sometimes missing from Morningstar's actual website, but it could be possible on other occasions that the correct data is just not available in Quantopian.

It is preferable to filter out outdated data as invariably it is going to skew the results in random ways. Using only the stocks where recent data is available also ensures the algorithm is more reflective of what would happen in a live trading environment.

Old data can be filtered out using the BusinessDaysSincePreviousEvent function. This can be used in code as follows:

from quantopian.pipeline import Pipeline
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.filters import Q500US
from quantopian.pipeline.factors import SimpleMovingAverage, Returns, BusinessDaysSincePreviousEvent

# only use fundamental data updated within the last ~12 weeks (61 business days)
OLD_DATA_FILTER = 61

def make_pipeline():
    # Base universe set to the Q500US
    universe = Q500US()
    m = universe

    # business days since each ratio's as-of date; keep only "fresh" data
    roic_asofd = BusinessDaysSincePreviousEvent(inputs=[Fundamentals.roic_asof_date.latest]) <= OLD_DATA_FILTER
    ltd_to_eq_asofd = BusinessDaysSincePreviousEvent(inputs=[Fundamentals.long_term_debt_equity_ratio_asof_date.latest]) <= OLD_DATA_FILTER
    cash_return_asofd = BusinessDaysSincePreviousEvent(inputs=[Fundamentals.cash_return_asof_date.latest]) <= OLD_DATA_FILTER
    fcf_yield_asofd = BusinessDaysSincePreviousEvent(inputs=[Fundamentals.fcf_yield_asof_date.latest]) <= OLD_DATA_FILTER

    m &= roic_asofd
    m &= ltd_to_eq_asofd
    m &= cash_return_asofd
    m &= fcf_yield_asofd

    # screen on the combined freshness filter (add whatever factor columns your algo needs)
    return Pipeline(screen=m)

This has a noticeable effect on the behaviour of the algorithm. I can only attach one notebook or backtest per post, so I'll make follow-up posts with before and after tests.

This is the "before" test, using whatever is listed as the "latest" available fundamental data ratios.

And this is "After".

I've only executed this over the last 5 years here just to demo the effect it has on the outcome.

Hello,
I am relatively new to Quantopian, so please be patient for novice type questions.
I have read through this long thread and seen that there are two themes:
• How to address glitches in the code for leverage, order execution, and holdings
• How to increase the performance of the algorithm
I cannot contribute on the first point. Indeed, I lost my way with regard to the modifications to correct the bugs, so I have worked on a version of the original and the 2nd version of the code.
I see performance has been improved by increasing the stock universe or adding several additional combined factors or reducing the number of stocks held.
However, I have seemingly managed this with a much simpler adjustment.
I experimented with PE. Firstly in place of ROE, which reduced returns significantly. Secondly, PE in combination with ROE.
Results:
ROE (baseline) - 1521.59 %
PE - 726.33 %
ROE + PE - 1912.45 %
I do not know how you guys are including the back tests in the posts, but the only change in the code of the original model is:
    roe = Fundamentals.roe.latest.rank(mask=universe)
    pe = Fundamentals.pe_ratio.latest.rank(ascending=False, mask=universe)
    pipe = Pipeline(columns={'roe': roe + pe}, screen=universe)

I have several questions:
• I do not have enough statistics to determine if the return performance is statistically significant or know which report to look at to tell me, are there any pointers here?

• The performance comes with a slight increase in drawdown (-25.21% vs 24.22%) and Sharpe ratio (0.75 vs 1.05). I do not know how to analyze these against the better performance. Could anyone evaluate this for me?
• I do not understand the reported Sharpe ratios. There seems to be an overriding one (-25.21%) as stated here, but also one under the performance tab, reporting 1.12 (vs 1.17, so seemingly better for roe+pe) in this case. Why are there 2 of them and why are they different?
• How do you include backtests in the posts?

Russell

@Russell

... I do not know how you guys are including the back tests in the posts, ...

Quite simple:
click Attach, choose Backtest, select your algo and the backtest number. That's it.

Or you could simply paste your code here. I can do the backtesting and attach it here. After that you can delete your pasted code.

Some added notions to my previous post, where I said I questioned every assumption in the original version of this program even though I used Dan Whitnable's version as a starting point.

One of those assumptions was the fixed and generalized trend declaration which can have a major impact on trade dynamics.

When the program declares the trend as up, it becomes evident what the program should do next. It is relatively simple to declare something that will be seen as a trend up or down for a strategy.

The trading logic and methods used should be consistent with the trend definition used. It is not necessarily what the market is saying; it is us, as strategy designers, making that declaration. It is based on our understanding of the game we want to play, or on the simple assumption that we might need some kind of trend definition, if at all. Whatever is used, our programs will have to live with it and, indirectly, with our parameter choices and our own generalized market structure assumptions, whether they are right or wrong.

In this case, an uptrend was originally defined as the SPY's 50-day SMA being above its 200-day SMA. Nobody seems to have questioned the validity of using that particular SMA crossover trend-following definition. I understand. It is an easy solution and has been around for decades as if part of some market folklore. But, shouldn't we dig further than that? Shouldn't we first demonstrate that it is indeed part of the best trend definitions out there?
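For reference, that crossover check is simple to express; a minimal sketch follows (the function name and the context.spy handle are just placeholders, not from any particular version in this thread):

# Sketch of the 50/200-day SMA crossover regime check discussed above.
def spy_trend_is_up(context, data):
    prices = data.history(context.spy, 'price', 200, '1d')  # daily SPY prices (context.spy assumed)
    sma_50 = prices[-50:].mean()
    sma_200 = prices.mean()
    return sma_50 > sma_200   # True -> declare "uptrend", allow new entries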

If you applied, for example, the above trend definition over the last year, (and in many other flat or volatile periods), you might have noticed that its overall timing was not that great. For instance, during the course of the recent pandemic, one could say: it was rather late to the party. And as such, that “trend definition” would have turned out to be quite expensive. The following chart illustrates this point.

The negative SMA crossover, declaring the trend as down, happened after the recovery was already underway, and the positive SMA crossover occurred 3.5 months after the low.

The strategy would have lost on the major part of the decline, and then, would have lost again on a big chunk of the recovery.

Early on when modifying this program, I changed the trend definition to something more responsive to price variations. It served the strategy well as time progressed, especially in the subsequent walk-forwards (see from last January's simulations onward for instance:
https://alphapowertrading.com/index.php/2-uncategorised/354-financing-your-stock-trading-strategy )

This turned out to be a worthwhile move since during the pandemic the portfolio surged upward more than initially expected. As said before, the program was transformed to seek volatility and the pandemic did provide it. The program had no knowledge of what a pandemic was nor did it see it coming. But with a more sensitive trend definition, it handled the added volatility just as it had during the financial crisis.

You define a trend as part of a portfolio's protective measures. If the uptrend breaks down, the first intent is to get out of harm's way. In my version of the program, the portfolio went short, according to its own trend definition instead of going to cash or bonds.

Nevertheless, the strategy was still operating mostly on market noise. It is just that there was a lot of noise at times. And being more sensitive to its trend definition, the program easily switched from long to short even if there were more false positives that also had a cost. It turned out quite well as illustrated in the 6 successive walk-forwards presented.

A trend definition dictates what your trading strategy should do while it is defined as up or down. It is a binary choice. If it is ill-defined, not sensitive enough, or suffers from too much lag as in the original program version, your portfolio will definitely pay the price. But, as said many times before, it is your choice, you decide what is best for you.

Thanks - I have attached the backtest here.
Russell

Hello,
There was discussion back in Nov 19 that orders were placed for stocks that could not be traded and leverage went up as a result. I am not sure if/how that was resolved. Anyway, I seem to have found my own solution using the data.can_trade() function. Example snip for sell side would be
    if data.can_trade(x):
        order_target_percent(x, 0)
    else:
        print('CANNOT GET OUT OF', x)
I do not get any system WARN messages in the attached backtest and only see my own messages saying I cannot get out of stock (but I have not placed the order).

@Guy What do you use as trend definition? As this is the part where this strategy is not so well versed in.

@Peter, my trend definition is rather intricate and woven into the code. For me, it seems a reasonable solution for what I intend a stock trading strategy to do over the long term. I have been working with the portfolio payoff matrix equation and its ramifications for years.

As you know, I do not give out code. However, I can explain the extensive modifications made to this program and for what purpose.

Technically, it does use a relatively short-term trend definition but within a medium to long-term trend. Some overrides are allowed meaning that other sections of code can change the direction of that loosely defined trend. Therefore, it is not a one size fits all hard-coded definition. It has quite fuzzy boundaries that at times can be triggered even by market noise.

When you put this in context with the amplifiers, boosters, and the use of modulated leverage, then that “trend” is of major concern, ill-defined or not, since what you apply over it will dominate the trading function. Nonetheless, there is an underlying trend that is considered, but most of it is injected, designer-made. It does not come from the data but from the preset objectives, as my attempt to control the portfolio's payoff matrix equation.

This highly relates to my trading unit function u(t), which itself is compounding. This function is injected and, by design, scaled to the number of stocks in the portfolio: \(u(t) = \frac{F_0 ∙ (1 + \gamma _{avg})^t}{j}\). Evidently, this “gamma” function is made positive: \(\gamma _{avg} > 0\). It can be under your control, which makes it very different from the conventional fixed betting functions we use, especially if it is leverage-modulated with those amplifiers and boosters. The strategy can operate with market noise and low predictability because most of the trend it considers is “fabricated” and correlated to a portfolio's long-term objectives.

The result is that I am the one injecting an upward trend into the betting system with the use of the gamma function as described. Such a formula has its own built-in feedback control, it is compounding and can increase one's alpha over the trading interval. A small positive change in this gamma function \(\Delta \gamma _{avg}\) will push performance higher due to its long-term compounding.
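A tiny numerical sketch of that sensitivity (all values assumed for illustration only):

# Illustrative only: sensitivity of the compounding trading unit
# u(t) = F0 * (1 + gamma)^t / j to a small change in gamma.
F0 = 10e6      # initial capital (assumed)
j = 20         # number of stocks (assumed)
years = 17

def u(gamma, t):
    return F0 * (1.0 + gamma) ** t / j

for gamma in (0.10, 0.12):
    print("gamma = %.0f%%  ->  u(%d) = %.0f" % (gamma * 100, years, u(gamma, years)))
# a two-point increase in gamma compounds into a materially larger trading unit by year 17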

For me, all that stays in line with how I consider long-term trading strategies. Over the years, I've designed trading strategies that used hard trend definitions, some with fuzzy trend definitions, and some with none at all. At times I've used anywhere from a few technical indicators to many to none at all. I've also used random entries, from a low percentage to medium to high to totally random. One of my preferred strategies has totally randomly generated entries and exits. They all worked, mainly due to the trend betting injections (their respective gamma functions).

The portfolio payoff matrix equation has gazillions of solutions. It is up to us to pick the ones that satisfy our own way of thinking. No matter how you want to slice the game, you can find a way to make it work for you based on how you think that game should be played over the long term. My own market views are more Buffett-like than anything else. Time is a big requirement in showing the power of long-term compounding, but to show outstanding results will also require our ability to generate some alpha.

There are literally millions of us trying to find solutions to a portfolio's long-term payoff matrix equation and it is not by doing the same thing as everyone else that you will make the result that different from theirs, it is by innovating or reengineering trading methods if needed.

If you do not design your trading strategy for the long term, how will you know that your portfolio will survive? At least, realistically simulating your strategy, you can know if it would have survived over past market data. The future might still remain an unknown, but that will not change the portfolio's payoff matrix equation.

Hope it helps.

Hi folks,

I have followed the entire thread since the very first message and here comes my contribution.
Here's a list of the elements characterising the strategy:

  1. I have kept a very conservative and fearful approach: no use of futures, no leverage above 1.0.
  2. The returns are good, but not the best. The same can be said about the Sharpe Ratio.
  3. The max drawdown is high, -33.43%, due to the dip in March this year.
  4. The factors composing the quality (renamed value) indicator are: return on invested capital, debt-to-equity ratio, free cash flow / enterprise value, free cash flow / number of shares / share value, and EBITDA. All are metrics to assess the financial performance of a stock.

All in all the algorithm works fine and it is a good starting point. I'd like to bring it live, so I might ping those of you who have done it already!

Nevertheless, there are some things bugging me that I hope you can help me solve:

  • The variable for selecting the number of securities to buy is context.TARGET_SECURITIES. Despite it being set to 5, I noticed that for long periods of time the number of portfolio positions is well above 5. What's wrong in the functions select_stocks_and_set_weights and trade?
  • There is a variable called context.TARGET_WEIGHT_EQUAL that allows switching from equal weight to weight based on the top_value_momentum variable. I tried it both True and False, but I see no difference in the result. Could you help identify why?

Cheers!

@Matthieu
The part starting in line 202 answers your first question

    context.daily_weights.append(total_weights)  
    if len(context.daily_weights) > context.HOLD_DAYS:  
        context.daily_weights.pop(0)  
    ## Average the rolling portfolios  
    combined_weights = {}  
    for weights in context.daily_weights:  
        for s, w in weights.items():  
            if s not in combined_weights:  
                combined_weights[s] = 0 # initialize  
            combined_weights[s] += w / len(context.daily_weights)  

It stores the current weights in a list and if they are not older than HOLD_DAYS days they get added to the new ones.

The reason why TARGET_WEIGHT_EQUAL doesn't do anything is that top_value_momentum is a boolean value - True or False. When used in a calculation this becomes 1 or 0. So this part of the calculation in line 181

stocks_value.top_value_momentum[s] / stocks_value.top_value_momentum.sum() * len(stocks_value)  

translated into numbers would be 1 / 5 * 5 = 1 for every stock.

I've added an additional column with the actual momentum value to the pipeline and used this instead of the boolean for calculating the weights.
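To make the 1/5*5 point concrete, here is a toy pandas illustration (the 'momentum_value' column name and the numbers are made up):

import pandas as pd

# Toy illustration: a boolean column collapses every weight to 1, while a
# numeric momentum column produces differentiated weights.
stocks_value = pd.DataFrame({
    'top_value_momentum': [True, True, True, True, True],
    'momentum_value': [0.42, 0.31, 0.18, 0.12, 0.08],
}, index=['A', 'B', 'C', 'D', 'E'])

bool_w = (stocks_value.top_value_momentum
          / stocks_value.top_value_momentum.sum() * len(stocks_value))
print(bool_w.tolist())            # [1.0, 1.0, 1.0, 1.0, 1.0] -- no differentiation

momo_w = stocks_value.momentum_value / stocks_value.momentum_value.sum()
print(momo_w.round(3).tolist())   # momentum-proportional weights summing to 1.0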

I'd like to bring it live, so I might ping those of you who have done it already!

Only 5 positions, and you only need to rebalance once a week or once a month because these factors are slow-moving and have negligible decay... the easiest thing is manual execution. Really no advantage to automating such a small and slow strategy.

Just be aware of all the overfitting going on here... don't expect future returns/risk to be as good.

@Tentor thanks for the catch. That was a poor bug fix.

I have fixed the weight factor. Attached you can find the updated backtest.
It does not present a major improvement!

@Viridian thanks for the suggestion and the heads-up.

It might indeed be overkill. I am really eager to delegate all the operations & decision-making regarding my portfolio.
I have an additional question for you. I have bluntly copied the following code from the source code that you shared on Dec 11, 2019.

context.daily_weights.append(total_weights)  
    if len(context.daily_weights) > context.HOLD_DAYS:  
        context.daily_weights.pop(0)  
    ## Average the rolling portfolios  
    combined_weights = {}  
    for weights in context.daily_weights:  
        for s, w in weights.items():  
            if s not in combined_weights:  
                combined_weights[s] = 0 # initialize  
            combined_weights[s] += w / len(context.daily_weights)  

The way I understand it, the lines above should enable rebalancing of the portfolio depending on the given context.HOLD_DAYS.
But it appears that the rebalance is carried out almost daily. What was your original idea?

@Matthieu Crétier - That bit of code implements a rolling portfolio. While it does rebalance each day, the target weights are a combination of the current day's weights, the previous day's weights, the weights from the day before that, etc. So let's say pipeline only returns 5 positions; this allows you to hold more than 5 positions because you hold onto old positions longer, so you get more diversification without having to dig deeper into weaker alpha signals. It also smooths turnover. For this strategy though, I think it doesn't really add much value. I would just leave it out and rebalance less often instead.
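As a toy illustration of that averaging (made-up weights, three days, five names per day):

import pandas as pd

# Three days of 5-name target weights blended into one combined set.
day1 = {'A': .2, 'B': .2, 'C': .2, 'D': .2, 'E': .2}
day2 = {'B': .2, 'C': .2, 'D': .2, 'E': .2, 'F': .2}
day3 = {'C': .2, 'D': .2, 'E': .2, 'F': .2, 'G': .2}

combined = pd.DataFrame([day1, day2, day3]).fillna(0).mean()
print(combined)          # 7 names held; overlapping names get larger weights
print(combined.sum())    # the blended weights still sum to 1.0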

Hello,

I spent a lot of time studying this work, but what I have noted is that 1,500% cumulative profit is published as a great achievement. However, if we think realistically and run the test for a single year, we find that the profit for that year is less than bank interest, and even if it is slightly higher than bank interest, it does not mean much in return for the risk taken.

If my opinion is wrong, please correct me

Thanks

Amazing returns = superior stock selection strategy + superior in & out strategy
In the forum thread New Strategy — “In & Out” we are discussing the idea that the total returns that we can generate are the result of a great strategy regarding ‘what stocks to buy’ (e.g. “Quality companies in an uptrend”) and a clever timing (“In & out strategy”) regarding when we are ‘in’ the market and hold the stocks versus when we are ‘out’ of the market and hold alternative assets (e.g., bonds). The timing is derived from early economic signals (see the forum thread for details). While in the thread, we only focus on the in & out side of things, I wanted to give the combination a shot here.

SEL[“Qual Up”] + I/O[“Death Cross”]
The “Quality Companies in an Uptrend” strategy has an integrated in & out component, which is the ‘death cross’, i.e. the short-term moving average in the SPY breaking below the long-term moving average. The ‘death cross’ is popular but has issues, such as only providing a lagged ‘out’ signal, i.e. when a drop has already occurred (see Tentor Testivis’s comment and the corresponding test in the “In & Out” thread; search for ‘death cross’).
The combined strategy’s (stock selection = “Qual Up” and in & out = “Death Cross”) total return between 1 Jan 2008 and 2 Oct 2020 is 452.81%.

SEL[“Qual Up”] + I/O[“In & Out”]
Attached is a backtest for the same period (1 Jan 2008 to 2 Oct 2020) that combines the stock selection side of the “Quality in an Uptrend” strategy (“Qual Up”) with a recent version of the “In & Out” strategy. Compared with the “Death Cross” in & out strategy, additional returns can be realized.

Implication
It can be worthwhile to separately optimize the stock selection strategy and the in & out strategy. Of course, the ultimate results will also be driven by a certain synergy or dissonance between the two components.

Limitations
SEL[“Qual Up”] + I/O[“Death Cross”] trades monthly, while SEL[“Qual Up”] + I/O[“In & Out”] trades weekly due to how the In & Out strategy is defined.
SEL[“Qual Up”] + I/O[“In & Out”] sells all stocks when the indicator is ‘out’, while SEL[“Qual Up”] + I/O[“Death Cross”] continues to hold certain stocks although the death cross indicates ‘out’.
I/O[“In & Out”] is based on a set of ETFs to derive important price signals, creating a lower boundary regarding the date that a backtest can start from. When testing periods before 1 Jan 2008, make sure that all the ETF prices are indeed available.
(Hopefully there are no substantial errors in the code; otherwise, if someone could post an updated backtest, that would be greatly appreciated.)

Where to from here? Quest of the Grail
I suppose the ultimate objective is let's get rich together. A means to achieve this objective, I reckon, is further engagement in five key areas:
1. improve existing stock selection strategies (SEL*)
2. find additional amazing stock selection strategies (+SEL)
3. improve existing in & out strategies (I/O*)
4. find additional clever in & out strategies (+I/O)
5. find optimal SEL + I/O combinations

@Peter,

Here is a backtest of your algo with Chris Cain's original quality definition.

    quality = ms.roe.latest.rank(mask=universe)

Pretty good.

Here is the exact same logic but without using all the pipeline acrobatics to get the job done. Thanks to all who contributed.

/L

Here is another version of my code above, but with a different momentum logic. These cannot be considered "quality" companies. I cannot take credit for the pipeline logic; it comes from the "Volatility time momentum algo".

/L

@Peter, Vladimir, please find some tweaks building on the version of Vladimir from 4 days ago with the goal of increasing Return and Sharpe:

  • working with Q1500US and the top 150 to have a broader range for quality & momentum
  • creating a compound 'quality': quality = (roic*.25 + roe*.27 + value*.20 + ltd_to_eq*.13).rank()
  • adjusting the 'momentum' rank for assets with high volatility to improve Sharpe: adj_momentum = (momentum-.03)/(1+volatility*0.3)
  • adding 10% 'GLD' to the bonds to better balance when out of market

Peter, great move to include the “In & Out” here. Another addition could be to include "mean returns of SPY < mean returns of TLT or IEF" into the bear filter to be even more adaptive to regime changes.

Have fun!

Hello, guys

I would like to build a momentum ("impulse") portfolio from 1998 to 2020 from US stocks (S&P 500 or NASDAQ).

Testing rules:

1. If VBMFX grows more than TBIL over 40 days, then turn on risk mode.

In risk mode, we select n stocks (20-30 in quantity) that performed best over the last 40, 80, and 160 trading days (in equal proportions), better than SPY and QQQ.

Shares with a capitalization over $1 billion.

RSI(14) for the shares within 55-75.

Plus some fundamental indicators (but I would like to be able to disable them).

2. If VBMFX grows less than SHY over 40 days, then we go on defense: TLT or IEF.

Please help with the testing!

I attach a basic backtest; I would like to add these conditions to it.

My thoughts are based on the following backtest in PortfolioVisualizer: https://bit.ly/31SXgeP (CAGR 17, max drawdown 16).

I would like to replace the ETFs with shares and, with the help of an expert, reduce the risk.

Moreover, I noticed that all the current models behave badly in 2018-2020. The model in the PortfolioVisualizer link, on the contrary, performed well in 2018-2020.

I would like to try to include ideas from there in our backtest.
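A minimal sketch of what I have in mind for the regime check in rules 1 and 2 (the context attributes below are just placeholders):

# Compare trailing 40-day returns of a bond proxy (e.g. VBMFX) against a
# cash proxy (e.g. SHY) to decide risk-on vs. defense (TLT or IEF).
def regime_is_risk_on(context, data):
    lookback = 40
    bond = data.history(context.bond_proxy, 'price', lookback + 1, '1d')
    cash = data.history(context.cash_proxy, 'price', lookback + 1, '1d')
    bond_ret = bond.iloc[-1] / bond.iloc[0] - 1.0
    cash_ret = cash.iloc[-1] / cash.iloc[0] - 1.0
    return bond_ret > cash_ret   # True -> risk on; otherwise go to defense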

@Luc
I just searched for Hidden Markov Models to switch strategies on/off... like the bull signal in this example.
I found your old post, it looked promising, and I tried it.
Are you still using HMMs? What is your experience? Could it be used as an alternative to this bull signal?
I would like to use at least 3 regimes: one for momentum trades, one for mean reversion, and maybe one to try shorting.
If one uses the whole time series it provides good results, but if one just uses the past 1000-2000 days of data and tries to determine the present regime, it's not that clear anymore.
Any ideas on where to look?

Hello Quants,
New here, but with experience in programming/machine learning. I stumbled across Chris's whitepaper, the 2020 winner of the Charles H. Dow Award: https://cmtassociation.org/association/awards/charles-h-dow-award/. I noticed the original algo posted here went through a good number of tweaks before ending up at the finished product, namely:
Summary of the rules:
1) We start with a universe of the 500 most liquid US stocks. This is Quantopian’s “Q500US” universe. This universe is derived by taking the 500 US stocks with the largest average 200-day dollar volume, reconstituted monthly, capped at 30% of the equities derived from one sector.
2) We then rank our stocks 1-500, based on the quality, value, and low volatility factors. The stock ranked 500 would be the most attractive on these attributes and the stock ranked 1 would be the least attractive.
   a. Quality - Rank stocks by ROIC, the higher the better
   b. Value - Rank stocks by EBIT/EV, the higher the better
   c. Volatility - Rank stocks by trailing 100-day standard deviation, the lower the better
3) Add up the three rankings and take the top decile. We are now left with 50 stocks that have a combination of high quality, low valuation, and low volatility.
4) Of our 50 stocks, take the top 20 based on cross-sectional momentum. This is measured by stocks with the highest 6-month total return, skipping the last month.
5) Every month we rebalance our portfolio, selling any stock we currently hold that didn’t make the top 20 list based on the logic above and buying stocks that have since made the list. Stocks are equally weighted.
6) We only take new entries if our time-series momentum regime filter is passed. For our time-series momentum regime filter, we simply use SPY’s price compared to its 100-day moving average. If the price of SPY is above its 100-day moving average, we take new entries. If the price of SPY is below its 100-day moving average, no new entries are taken.
7) Any capital not allocated to stocks gets allocated to SHY (1-3yr US Treasuries).
8) Assumptions include a beginning portfolio balance of $1,000,000 and commissions of $0.005 per share with a minimum trade ticket cost of $1. This models the real-life commission schedule of Interactive Brokers.
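From that description, the ranking step in rules 2)-3) might be sketched roughly as follows; the Fundamentals field names and the volatility factor are assumptions on my part and would need to be checked against what is actually available:

# Rough Pipeline sketch of the ranking: quality (ROIC), value (EBIT/EV),
# and low volatility, summed and cut to the top decile of the Q500US.
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data import Fundamentals
from quantopian.pipeline.factors import AnnualizedVolatility
from quantopian.pipeline.filters import Q500US

def make_pipeline():
    universe = Q500US()
    quality = Fundamentals.roic.latest.rank(mask=universe)   # a. higher ROIC, higher rank
    ebit_ev = Fundamentals.ebit.latest / Fundamentals.enterprise_value.latest  # assumed fields
    value = ebit_ev.rank(mask=universe)                      # b. higher EBIT/EV, higher rank
    low_vol = AnnualizedVolatility(window_length=100).rank(
        mask=universe, ascending=False)                      # c. lower volatility, higher rank
    composite = quality + value + low_vol
    top_decile = composite.top(50, mask=universe)            # top 50 of the 500
    return Pipeline(columns={'composite': composite}, screen=top_decile)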
Does anyone have the source code for this algo? I think I can make improvements on the strategy side, but would rather not go through the process of alterations to get it from original to finished beforehand. Many thanks in advance.

check out www.cloudquant.com or reach out to me to talk about our platform: [email protected]

Does anyone have a copy of the code for this algorithm? I want to download it. Thanks!