What I Have Seen Over The Past Few Weeks

Most stocks have become restricted for shorting. At least, that is what I saw on my IB watchlist of stocks of interest. Programs depending on shorts, like market-neutral trading scripts, would not have found that many shortable trades due to all the restrictions. In a period such as this, leveraging fees would be higher due to the rarity of borrowable shares. Furthermore, maintenance margin allowances were also reduced, making it even more difficult to stay long.

Rebalancing on a monthly or quarterly basis might have missed the most recent 3 to 4 weeks, again rendering a lot of strategies helpless and forcing them to endure those drawdowns. Drawdowns that they had planned to minimize with their code, but whose stop-loss procedures had no time to be applied or triggered, thereby bypassing even those basic protective measures.

Only if you were short before this crisis, when shorts were widely available and much riskier, could you have suffered lesser drawdowns. But then it would also depend on their severity: by how much would your longs be punished? Would your program, for some reason, have switched to cash prior to the crisis, and on what rationale?

Long-only strategies, if using weekly rebalancing, would also have been clobbered, especially if they relied on quarterly fundamental data and used the optimizer to make their trades. They would still be operating on December data, which did not show any trace or any hint of a pandemic.

Shorts had the uptick rule restriction reinstated, but that applied only on the retail side of the ledger. Market makers had no such restriction. I do think that sub-penny trades were still allowed for big players and market makers, but not for retail traders. Reg SHO rules still allowed naked shorts for market makers.

But whatever, this whole market now presents tremendous opportunities going forward. We all know it will bounce back. It is just a question of time, but it will get there eventually, just as it did in the past after such black swans.


I agree that the market presents tremendous opportunities now and moving forward.
However, my view is that this flock of black swans just triggered a new great recession, possibly even a depression, and that it could take many years to get back to ATH, not a few weeks or months like it did so far.
The entire world is in lockdown and people are scared.
The only fast new ATH I see is an inflated one, if the Fed decides to create double-digit trillions out of nothing and pump it into the economy, combined with a potentially found coronavirus cure.
Even with that, it would take quite some time for the economy to start moving at full speed again.
Also, this crazy recent volatility is a perfect playground for many subsequent dead cat bounces.

@Vedran, my point was that of the published trading strategies I have seen on Quantopian, none were ready or programmed to adequately face this fast and general collapse in stock prices. None of the fundamental data (especially if it was held back one year) could foresee what was coming.

When designing protective measures, there is definitely a need to code for such possible phenomena even if they do not happen often, because when they do, they can do quite a lot of damage.

It is our task as strategy designers to code not only for optimal profits but also for optimal capital preservation. Even if such a measure is simply stepping to the sideline.

@Guy,

of the published trading strategies I have seen on Quantopian, none were ready or programmed to adequately face this fast and general collapse in stock prices.
Do not agree.
There are thousands of balanced, long-only strategies published on the Quantopian forum which are making new highs today.
Like this one, which we discussed with Yulia Malitskaia in "Quantopian-Based Paper on Momentum with Volatility Timing".

@Vladimir, great.

Yeah, just as @Vladimir pointed out. Many of our long-short algos are having the time of their lives. Just take a look at the daily contest leaderboard.

@Vladimir, I made a walk-forward out-of-sample test based on the strategy I presented on my website last January.

Here is my follow-up article: Financing Your Stock Trading Strategy II.

I would amend my previous statement: there are some trading strategies that might not break down during these historic and volatile market moves. A simulated walk-forward test, just as you did, would tend to show this.

The attached notebook shows the same equity chart as used in your notebook for comparison.

[Updated June 26, 2020: added HTML version of notebook in order to properly display charts]

https://alphapowertrading.com/quantopian/Ranked_Selection_NB.html

I like the notion of doubling times for a portfolio. It indicates, on average, how much time was required for the portfolio to double in value. It is all a matter of the strategy's CAGR, its compounding rate.

For instance, Mr. Buffett has had an average CAGR of about 20% over the years. From my chart in the Stock Portfolio Doubling Time article, this implies a doubling time of about 3.81 years on average.

Based on the same chart, the higher the average CAGR, the shorter the doubling time, as should be expected.
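
The doubling time follows directly from the compounding formula: solving \(F_0 \, (1 + CAGR)^t = 2 \, F_0\) gives \(t = \ln 2 / \ln(1 + CAGR)\). A minimal sketch in Python:

    import math

    def doubling_time(cagr):
        """Years required for capital to double at a given compounding rate."""
        return math.log(2) / math.log(1 + cagr)

    print(f"{doubling_time(0.20):.2f} years at 20% CAGR")  # ~3.80 years
    print(f"{doubling_time(0.05):.2f} years at 5% CAGR")   # ~14.2 years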

Why should this be important?

Simply because this portfolio management thing requires years to unfold, many years. Mr. Buffett managed to maintain his average CAGR for 50+ years, doubling his portfolio every 4 years or so. It does not mean that there were no drawdowns; there were. He has often said he has had drawdowns in excess of 50% four times. His current drawdown is about 30% or so. But I am not worried, he will rebound again.

The portfolio performance illustrated in my prior post might appear exaggerated to some. But in terms of doubling times, not that much. James Simons' Medallion Fund has been operating at an even higher rate. So, it is not impossible.

The November simulation gave a doubling time of about 1.85 years compared to the April results, which averaged 1.72. Most of the drop in doubling times is due to the huge profit increase in the last few months of the walk-forward. It does say that even a small change in the average doubling time can have quite an impact, especially in the later years of a trading strategy, when the bet size has increased considerably. Whatever the trading strategy, in a fixed-fraction scenario, you have to be ready to make those larger bets and take those large positions as the portfolio grows.

It is extremely difficult to reduce the average doubling times over the years. The reason is simple: CAGR decay, the law of diminishing returns. There is a need to compensate for this and there are tools to do so (I wrote a book on that).

Trading Is Not The Same As Investing

Trading is basically about two numbers: the number of trades executed and the average net profit per trade. Both numbers are given in the backtest analysis when using the round_trips = True option.
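
On Quantopian, those two numbers came out of the pyfolio round-trip analysis. A minimal sketch, assuming the research environment, with the backtest id below purely hypothetical:

    import pyfolio as pf

    # Load a completed backtest in the Quantopian research environment.
    bt = get_backtest('hypothetical_backtest_id')

    # round_trips=True appends the round-trip section of the tear sheet,
    # which reports the number of trades and the average net profit per trade.
    pf.create_full_tear_sheet(bt.returns,
                              positions=bt.positions,
                              transactions=bt.transactions,
                              round_trips=True)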

The task in trading is making sure that those two numbers increase with time. But no matter what, you are still subject to the math of the game. In trading, the market can offer a lot, even a lot more than what I presented. There is no secret to the math behind the methods of play.

Many times in these forums I have stressed the importance of the betting system used when faced with uncertainty. We can certainly say that the market lives in this tumultuous ocean of variance and that it is rather difficult to predict which way it is going to go from day to day. But, as a trader, you still have to find ways to make your own doubling time.

If your strategy's doubling time is 14.25 years, equivalent to a 5% CAGR, you are not going that far that fast. And if your initial stake is relatively small, it is even worse, since \( F(t) = F_0 \cdot (1 + 0.05)^t\) might not be that big even after 28.5 years on the job (\(\approx 4 \cdot F_0\)).

Doubling Times

The sequence for the first 10 doubling times is (\(2^n\)): 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024. Each 50% drawdown makes you lose a doubling time, and as time progresses, the value of this drawdown increases. For instance, dropping from 4 to 2 does not appear so bad when compared to the drop from 1024 to 512. That drop is 512 times the initial portfolio! Yet, both had a 50% drawdown. One should come to the conclusion that preservation of capital becomes more and more important as you move along the doubling time sequence.
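
To make the arithmetic explicit, here is the same sequence with the dollar cost of a 50% drawdown at each stage, expressed in units of the initial portfolio \(F_0\):

    F0 = 1.0  # initial portfolio, in units of F0

    for n in range(1, 11):
        value = F0 * 2 ** n
        dd_cost = value / 2  # what a 50% drawdown gives back at this stage
        print(f"doubling {n:2d}: value = {value:6.0f} x F0, "
              f"50% drawdown costs {dd_cost:5.0f} x F0")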

Mr. Buffett has managed over 14 doubling times so far. His last doubling added as much as he had made over the previous 13 doublings combined. It is remarkable. And his 15th doubling will add as much as he has done over his entire career.

Every percent you add to your portfolio's return will have an impact on this doubling time.

You often see me using equations to explain what I do in my trading strategies. One that might be misunderstood is the payoff matrix, and yet, it is so simple and elegant:
$$F(t) = F_0 + \Sigma (\mathbf{H} ∙ \Delta \mathbf{P}) = F_0 + n ∙ x_{avg} = F_0 ∙ (1 + g_{m} + \alpha - ex)^t$$ with \(n\) the number of trades and \(x_{avg}\) the average net profit per trade. Increasing both these numbers over the trading interval will result in higher profits, other things being equal. If done over the same time interval, it will increase the CAGR.
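
As a numeric sketch of the \(F_0 + n ∙ x_{avg}\) part (all numbers hypothetical), two very different trade profiles can produce the same ending equity:

    F0 = 1_000_000.0

    # two hypothetical strategies with the same total payoff n * x_avg
    for n, x_avg in [(500_000, 10.0), (50_000, 100.0)]:
        F_t = F0 + n * x_avg
        print(f"n = {n:7,d}, x_avg = ${x_avg:6.2f} -> F(t) = ${F_t:,.0f}")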

You Are The Strategy Designer

Whether your strategy makes 500,000 trades at an average net profit per trade of $10, or 50,000 trades at an average profit of $100, it will produce the same amount.

You are the one designing that trading strategy, using rebalancing or whatever other technique, and that design will dictate how many trades the strategy is bound to make within its portfolio constraints. It becomes a question of which strategy will produce either of the above two scenarios, or anything in between or above. The strategy making 500,000 trades might not be the same as the one making 50,000 trades.

For sure, my trading strategies do “fly”. Not all of them, mind you, as should be expected. I do throw some away.

In this case, it should be noted that it took some 17.19 years of compounding to get there with progressively larger bets executed in order to achieve those results. There is math underneath to support them.

In a sense, in the end, the way I see it, you are the one to choose your portfolio's doubling time. It is all in your payoff matrix strategy design \(\mathbf{H}\).

The attached notebook is based on the same program as in my January article: Financing Your Stock Trading Strategy

Here it is tested (April 25th) and compared to its last iteration of April 6th (see previous post). The added 3 weeks, even if a small incremental time interval compared to the strategy's 17.16 years, still represent a walk-forward and an out-of-sample simulation.

[Updated June 26, 2020: added HTML version of notebook in order to properly display charts]

https://alphapowertrading.com/quantopian/Ranked_Selection_NB-2.html

My previous post demonstrated the application of a trading strategy's payoff matrix equation where the bet sizing was gradually increased to comply with equity: \(F(t) \div j\), and where \(j\) was the number of stocks in the portfolio. In the above case, \(j = 400\) stocks were used.

Any bet taken was 0.25% of the ongoing equity (\(1/400\)). It also meant that as time progressed, the bet size would increase at the same rate as the equity line. It is the compounding for 17.16 years that makes the difference. There is no secret “stuff” here. But compounding at a high rate for 17.16 years has to show up somehow.
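
A minimal sketch of that fixed-fraction sizing rule (the function name is mine; on Quantopian the actual orders would have gone through order_target_percent or the optimizer):

    def trading_unit(current_equity, j=400):
        """u(t) = F(t) / j : each bet is a fixed fraction of the ongoing
        equity (here 1/400 = 0.25%), so bet size compounds with the portfolio."""
        return current_equity / j

    print(trading_unit(10_000_000))  # $25,000 per position at $10M equity
    print(trading_unit(20_000_000))  # $50,000 once equity has doubled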

To achieve such numbers, the trading strategy had to supply all the needed funds internally. All trading profits were continuously reinvested, over and over again. But even this internally added funding generation would have been insufficient to produce the observed outcome. The strategy uses some leverage. In all three cases since the January article, the leverage has hovered around 1.55 to 1.57 while maintaining about the same level of drawdown and volatility.

I understand that some do not like leveraging. It is their choice. But, in many cases, it can be an accelerator. It is why I do make an estimate of its costs and have that cost printed on the equity chart.

Leveraging has a cost, evidently; that is not in question. In the program, I used 4% leveraging fees whereas IB charges 1.55%. But that is not the point. The point is that leveraging can help productive strategies do even more. In my program version, leveraging was modulated; it was not a constant. In periods of market turmoil, it was pushed way down, while it was increased in rising markets, but only up to a limit. Raising the leverage to 1.60, for instance, would translate to even more profits, even if it is a relatively small incremental change. The payoff matrix equation would gradually, slightly increase its bet sizing as it went along.
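
The modulation logic itself was not published; the sketch below only illustrates the idea, with the trend signal and thresholds purely hypothetical:

    def target_leverage(market_trend, base=1.00, max_lev=1.60):
        """Scale gross leverage with a trend signal in [-1, 1]: pushed way
        down in market turmoil, raised in rising markets, capped at max_lev."""
        if market_trend <= 0.0:
            return base  # de-lever in declining markets
        return min(base + 0.60 * market_trend, max_lev)

    print(target_leverage(-0.5))  # 1.00 in turmoil
    print(target_leverage(0.9))   # 1.54 in a strong uptrend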

Whatever the payoff matrix you design, you can have your trading strategy do nothing more than the other guy, or you can force your trading strategy to do what you want or find ways for it to internally generate its own funding (using its ongoing profits) in order to accelerate your performance to higher levels than just achieving market averages or below.

In my previous post, I stated:

“Raising the leverage to 1.60, for instance, would translate to even more profits, even if it is a relatively small incremental change. The payoff matrix equation would gradually, slightly increase its bet sizing as it went along.”

I do not like to say things and not corroborate them. Therefore, I did the simulation, raising the leverage by 3% to 1.61. Evidently, this would translate into higher leveraging fees: small amounts in the beginning, but growing larger in parallel with equity.

Overall, it would have cost the equivalent of about 5% of the portfolio's ending equity; look at it more as an added cost of doing business. But those added expenses were more than compensated for by the added profits generated. In that department, profits did increase, going above the results presented in my prior post by $2,129,355,839. I should re-emphasize that it took 17.16 years to get there, and compounding plays a major role in the payoff matrix equation.

Raising the leverage, even by a low 3%, had the effect of increasing the bet size of all trades, redistributing the available equity differently across the 400 stocks in this portfolio.

By increasing the bet size, you also tended to increase the average profit per trade. By comparison, the last test had an average net profit per trade of $74,341, while this new simulation with its 1.61 leverage had an average net profit per trade of $89,453, an increase of $15,112 per trade. And since the strategy made 143,610 trades, it all added up.

You only have 3 numbers to take care of: \(n\), \(u\), and \(PT\). Might as well make the most of them. And \(u\), the trading unit, is a major part of it.

Increasing the leverage by 3% is not a curve-fitting operation; it is a structural thing, a choice one can make independent of what the trading strategy does or how it does it. Leveraging allows a larger bet size; it is like putting more capital on the table. It is worth it ONLY if the added profits exceed the leveraging cost. In this case, that was effectively demonstrated.

On the other hand, say you do not want to increase the leverage: you could always accept the results of the previous simulation, or push it down to whatever level you feel comfortable with, including no leveraging at all. Evidently, going for no leveraging, you should not expect the numbers presented in the above simulations.

[Updated June 26, 2020: added HTML version of notebook in order to properly display charts]

https://alphapowertrading.com/quantopian/Ranked_Selection_NB-3.html

This is something like the 4th walk-forward for this trading strategy, adding 6 more weeks to the last simulation while maintaining a gross average leverage of 1.56x. The simulation was done only to show what was possible. It does compare favorably, profit-wise, to the prior 17-year test. Overall, the added return did not even increase the drawdown, or the volatility, for that matter. But then, who's counting?

[Added] June 22. The attached notebook does not display any of the charts. So, here is an HTML version of the same. This way you should at least see the charts.

https://alphapowertrading.com/quantopian/Ranked_Selection_NB-4.html

This is an added test to the last walk-forward post. It uses the same strategy, but asked to use 1.60x leverage compared to 1.56x in the previous post. That is about a 3% increase in gross leverage. It is not a major change to the trading strategy. Nonetheless, using higher leverage will increase leveraging costs.

The simulation is done to demonstrate that the increased leverage is sustainable over the entire trading interval. If you do not do the test, how on earth would you know that your trading strategy can handle it? So, technically, this should be part of your arsenal of acid tests for your own trading strategies. No one is forcing anyone to do those tests. However, this is where a mere opinion that your trading strategy can support an increase in leverage is not enough.

Since putting charts in an attached notebook does not seem to work, I will skip that process and provide the HTML version generated by the notebook.

https://alphapowertrading.com/quantopian/Ranked_Selection_NB-5.html

I will let you compare the above HTML file to the one in the previous post.

What is shown in those charts is the impact of the added 3% in leverage. It increased the overall return by some 21%, which by itself would tend to justify, or at least validate, the use of the added leverage.

But, then again, no one is forced to use leverage. It is always a matter of choice, preference, and risk averseness.

The above-cited strategy is one of the most phenomenal on this site with a 56.6% CAGR over its 17.3-year simulation while trading some 400 stocks at a time. The strategy made some 144,764 trades over the period. More than enough trades to start talking about averages, trade mechanics, and general behavior while still facing uncertainty.

My strategies use equations which I have provided many times before. I think anyone applying these equations could adapt them to their own tastes, circumstances, and constraints, especially the risk-aversion part of the problem. Also, I estimate that there are gazillions of possible solutions that could adapt to one's trading preferences.

We need to comply with the provided equations. We don't have to, evidently, but that won't make them go away. In fact, if you want to trade and succeed in outperforming long-term market averages, that equation will always be in your way until you start understanding what it is and what it can do for you. When I say “what it can do for you”, it is more about how you could enhance your trading strategy by applying pressure at specific points, which might marginally increase market risks but also provide better overall returns. Most often, there are higher costs and higher risks in doing more business. This also applies to trading.

What I think is not that well understood in the way I treat a trading strategy is the set of methods used to enhance portfolio performance. First, I cannot extract blood from a rock. I cannot beat heads or tails. And should the market be considered purely random, I could not win that game except by luck. I cannot predict which stocks will be there 20 years from now or by how much. I cannot be sure which stocks will be up next week. I do not even have a probability measure for that either.

However, what I can do is design a trading strategy for what I consider a quasi-random trading environment where I can take advantage of the long-term upside bias seen in stock prices. It does not give me predictive powers, but I can design trading procedures that take advantage of that long-term upside bias when the market noise moves in my favor: when some of the stock prices go up, by no fault of my own, I can still take part of that “paper” profit to enrich the trading account.

It is what you see in the above-presented strategy. I have no predictive powers, but using the portfolio's payoff matrix, I can “inject” my own guidelines into that equation to technically, and partially, control its behavior. This can be done even from outside the program by reading a file of the ongoing controlling-variable settings, which I consider the pressure points having an impact on the strategy's final outcome. With this process, you can direct your program to do a little more of this and a little less of that on an ongoing basis, technically overriding your program as you see fit. This makes your program more than unique; it becomes your sentiment-driven, personalized version.
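
A minimal sketch of that outside-override idea; the file name and keys are entirely hypothetical, and on Quantopian itself the sanctioned route for pulling in outside data was fetch_csv rather than local file I/O:

    import json

    # pressure_points.json is a hypothetical file, edited by hand outside
    # the program, e.g.: {"max_leverage": 1.60, "bet_fraction": 0.0025}
    def load_pressure_points(path="pressure_points.json"):
        with open(path) as f:
            return json.load(f)

    def before_trading_start(context, data):
        # Re-read the controlling variables daily so the strategy can be
        # steered on an ongoing basis without touching its code.
        settings = load_pressure_points()
        context.max_leverage = settings.get("max_leverage", 1.56)
        context.bet_fraction = settings.get("bet_fraction", 1 / 400)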

It is a CAGR game: a small variation in the structure of the program, such as in the bet sizing function, can be propagated exponentially over the entire trading interval, thereby affecting all trades. In the beginning, it is not a major change, something like the above-illustrated strategy where it was “requested” to increase leveraging by 3%, going from 1.56x to 1.60x. Yes, it did increase trading expenses. But it also increased performance to such a level that the added profits more than compensated for the added expenses.

I would stress the point that I do not necessarily wait for a factor or an indicator to influence a trading decision. I inject into the payoff matrix equation my own considerations of what the strategy should do, forcing it to outperform on terms that I consider acceptable. Something like a compromise asking: can this trading strategy support a 3% increase in leverage?

I often try to push strategies to their limits and then scale back to within my own risk tolerance. This way, I at least know that the strategy can support it if needed or desired. For instance, increasing the gross leverage by an additional 4% would certainly have an impact, even though it still might not be the program's limit. Doing so raised performance with a slightly higher drawdown (going from -30% to -32%). The question would be: would you accept the added 2% of drawdown? See the HTML file below for an answer.

https://alphapowertrading.com/quantopian/Ranked_Selection_NB-6a.html

(The above HTML file was generated in its notebook. When attaching a notebook, my charts are not displayed for some reason.)

The concept to retain here is the use of the portfolio payoff matrix to inject my own “do this” instructions on top of what the market is doing, thereby generating some controlled alpha, since those “do this” instructions are more like outside administrative procedures instead of relying on the strategy to do its thing based on some factors or indicators. Operating differently from most does not make the trading procedures bad or wrong; it only makes them different. And from the above simulation, more than quite productive, to say the least.

The stock market is not there to give you money, you will have to work for it, and for a long time if you want to make it worthwhile.

Another Walk-Forward

My previous post was about my fourth walk-forward, which was also, de facto, an out-of-sample simulation. This new notebook will be the strategy's fifth walk-forward to be chronicled here. It is based on the initially posted strategy, which served as a starting point. However, that strategy was considerably modified to respond mostly to equations rather than factors or indicators. The strategy's overall objectives were also changed, using some behavioral trade-reinforcement procedures to increase its compounded returns. It used adaptive modulated leverage and an exponential bet sizing function to increase its average profit per trade while seeking higher volatility. Return degradation compensation measures were also applied, not to a great extent mind you, but sufficiently to reduce it. I am still learning Python, meaning that this can be improved even further.

It Still Deals With Market Noise

This did not stop the strategy from dealing mostly with market noise. Not in predicting market noise, but just taking action after prices had moved one way or the other for whatever reason. The strategy is designed as a trend-following system trading some 400 momentum-ranked stocks. The portfolio was rebalanced on a weekly basis over its 17-year simulations.

It is the rebalancing method used that controls the number of trades that will be taken. We can even make reasonable estimates as to the expected number of trades \(E[n]\) that will be executed over the life of the portfolio.

Compounding Bet Allocation

What makes the strategy worthwhile is the compounding of its bet allocation structure, its projected growing bet size. It was expressed as \( u(t) = F(t)/j\). This trading unit function automatically made the strategy scalable and also fully invested. Maybe a better way of expressing it would be: \( u(t) = (F(t) ∙ (1 + \gamma_{avg})^t) / j\), where \(\gamma_{avg}\) is the average rate increase placed on the betting function. Note that all these curves are erratic, chaotic, and random-like. But that should not stop you from using them as objective functions and making them do what you want them to do.
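
A minimal sketch of that boosted trading unit function; the value of \(\gamma_{avg}\) below is hypothetical:

    def trading_unit(F_t, t, j=400, gamma_avg=0.01):
        """u(t) = F(t) * (1 + gamma_avg)**t / j : the fixed-fraction bet with
        an average rate increase gamma_avg placed on the betting function."""
        return F_t * (1 + gamma_avg) ** t / j

    print(trading_unit(50_000_000, t=0))   # base bet: $125,000
    print(trading_unit(50_000_000, t=17))  # ~18% larger at the same equity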

Protective Measures

The strategy used a different method for its protective measures. Instead of going to bonds or cash in periods of market turmoil, it went short. It initiated shorts about 20 times over the 17+ years of trading. Most of the short attempts were short-lived; only 5 cases went fully short as the market declined further. And as protective measures, they did the job where it counted most: during the 2008 financial crisis and during the coronavirus pandemic.

Leverage

Leveraging was used to accelerate performance. It was considered an added expense of doing business, acceptable as long as the added alpha could cover the added leveraging expense and more. It generated higher profits which were reinvested into the system to buy even more shares as \(u(t)\) increased.

An automated trading strategy has, as its long-term mission, starting with its initial capital, to generate all the cash it needs to make its trades. In simulations, strategies are designed and forced to generate these added funds from within. It is the added net profit per trade that enables taking larger positions as profits increase. The counterpart also holds: if overall profits decline, the bet size should too, as per the above bet sizing equation.

A Walk-Forward

To do a walk-forward, you need to let some time pass. In notebook 4, the time interval was from 2003-01-03 to 2020-06-11 (17.16 years). This notebook covers from 2003-01-03 to 2020-07-24, with an added 6 weeks. I know it is not much of a walk-forward, but it is a walk-forward nonetheless. Over those 7 backtests, it still adds up to over \(3\tfrac{1}{2}\) months of walk-forward. At least, it does demonstrate that the strategy did not break down going forward.

Here is this new walk-forward, notebook 7:

https://alphapowertrading.com/quantopian/Ranked_Selection_NB-7.html

It made the point that leveraging, even in the 1.5x to 1.6x range, can generate outsized returns. Here are some of the numbers extracted from those 7 simulations.

https://alphapowertrading.com/quantopian/Rank_Sel_S400_F10_NB_Summary.png

All the notebooks followed the equations given below in their own way, with small variations on some of the variables as the total outcome was “forced” to go higher. Most of these changes were of an administrative nature: decisions that were, or could be, taken from outside the program if desired. That is why I often say it is a matter of choice what you want to do with your automated trading strategy. You are the one to determine how aggressively you want to trade. But one thing is sure: you have to be consistent with your overall goals and trading objectives. And those objectives had better comply with the underlying math of the game.

From the above table, it can be observed that the final numbers for portfolio metrics such as max drawdown, annual volatility, beta, Calmar ratio, Sharpe ratio, and daily turnover were relatively stable over the 17-odd years of simulations in those 7 notebooks. And yet, the CAGR progressively rose as more pressure was applied to some of the payoff matrix equation variables. This is kind of counterintuitive. We were all taught that to increase long-term returns, we had to accept more risks. And yet, by putting pressure on equation variables, you could increase portfolio performance through what amounts to administrative decisions.

Average Net Profit Per Trade

Of note in the above table is the gradual increase in the average net profit per trade \(x_{avg}\) as the simulations progressed from NB-1 to NB-7. This was achieved with a relatively minor increase in beta and annual volatility. As time was extended along with trade aggressiveness, the CAGR increased.

Some wonder how this can be done when you have equations governing the thing. The answer is relatively simple: you concentrate on the different variables in these equations. When broken down, these equations are just equivalent expressions for the whole strategy's payoff matrix:

\(\quad F(t) = F_0 +\$X = F_0 + \Sigma (\mathbf{H} ∙ \Delta \mathbf{P} ) = F_0 + n ∙ x_{avg} = F_0 + y ∙ rb ∙ j ∙ E[tr] ∙ u(t) ∙ E[PT] = F_0 ∙ (1 + g +\alpha - \Sigma exp_t)^t\)

Anything that does not affect these variables is of no consequence to the bottom line. One could segment some of these variables as was done with \(n\) and \(x_{avg}\) or elaborate on more sophisticated ones.

If you do the same thing as everyone else, should you not expect to get about the same results? If your trading strategy is not made to last, where is it going CAGR-wise?

If you need to innovate to control your own payoff matrix equation, should you not at least try? From my observations, whatever you do and however you do it, the above rebalancing payoff matrix equation will prevail. So why not learn to manage and control it?

Actual achieved results would be a far greater testament to the glories of this algo. Oddly enough, no one seems to have provided such testimony. Funny old world, is it not?

You do a simulation to see what your trading strategy would have done in the past. That past is not available anymore and you certainly cannot profit from it.

It would take another 10 to 17 years to see the merit of this trading strategy. This is not a game of instant gratification. Even if it takes a few minutes to run a backtest, going forward would require more than 4,300 trading days.

One thing is sure: if your trading strategy could not have survived over long-term past market data, it would have quite a hard time surviving going forward.

A simulation is just that: a simulation. Its purpose is to show what your trading logic and procedures could have done in the past.

A walk-forward is done to see if your trading strategy would have behaved well going forward, making it out-of-sample. And I think the strategy has now passed that test 5 times.

Now that the program is there, it would have to be restarted at the present date and run for some years. To compare it with its past simulation, the program would have to wait until 2037, which is not tomorrow.

It is ridiculous to consider that a program designed in 2020 could have been there in 2003. That is why you do a simulation of the thing in the first place: to see something like “it would have done this or that” over those years, using those trading procedures and that market data.

Also, a walk-forward is a continuation of what was. You cannot start trading from the end of a simulation. You have to restart the program from its future day one with its new initial capital.

Oddly enough, no one designed such a trading strategy in 2003. And not so oddly enough, no one could provide any kind of testimony that they did. Logic, at times, seems to be an elusive commodity.

Nobody in the real world has the time or inclination to mess around with such matters and hope to make a living out of it. Not unless they are selling a trading engine, books, or seeking traffic on a website. As we have seen, sadly, Quantopian has been and continues to be largely a waste of time, and the proprietors have turned instead to selling software rather than managing money.

The hedge fund world has come and gone. Its strategies have not stood the test of time. I am equally guilty and have wasted much of the past 20 years chasing the back-testing phantom.

Old men like you and I may cook up grand theories but we make no money in the process.

The answer is to find an edge that is based upon reality, not back-tested dreaming. And to milk that edge for all it is worth before it disappears. HFT is the big success story in recent times: front running is a real edge. People need real edges to exploit, not back-tested fantasy.

Some famous person said:

Theory without practice is dead, practice without theory is blind.

My Dear Vlad (the Impaler I assume?)

You are so right. Unfortunately, this forum and all other trading forums are long on theory and short on practice. There may be a small (very small) number of people who have used Quantopian to achieve absolute or risk-adjusted returns greater by far than the benchmark. In an actual real live brokerage account!

The best theories (and those which may be brought most easily into profitable practice) are probably not that complex. "Buy this stock ahead of a massive pending order you have detected". "Buy because your chum the lawyer tells you this stock is about to become a takeover target". "Buy because this or that crook is about to launch a pump and dump scheme on this penny stock". "Sell because this shark is about to dump his stock having pumped it". "Buy this IPO because it is priced by the syndicate at a small fraction of its perceived market worth".

I am sure there are other less devious edges I have missed out.

An example used to be in the Thai stock market, where people just didn't get splits and the price was often bid back up to the pre-split level.

I have traded little since I was blown out of the water by a kindly US Senator who blew up MF Global a decade back.

I am interested in edges where winners exceed losers and winning trades exceed losing trades. By a large degree. Such edges exist from time to time. They can perhaps be detected and traded by algorithm.

I have no interest whatsoever in back tested riches dreamt up with the aid of unrealistic leverage and never traded.

But there you go!

I am sure such folly keeps some of us amused.

Dear Anthony,

I am not from the theoretical camp, but I am always trying to find the theory behind my practical success.
Your trading approach

I am interested in edges where winners exceed losers and winning trades exceed losing trades by a large degree.

may be described by a simple version of mathematical expectancy

Exp_ret = (Avg_ret_winning_days*N_winning_days + Avg_ret_losing_days*N_losing_days)/(N_winning_days + N_losing_days)  
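
As a minimal sketch in Python (the day counts and averages below are hypothetical):

    def expectancy(avg_win, n_win, avg_loss, n_loss):
        """Average daily return weighted by the frequency of winning
        and losing days (avg_loss entered as a negative number)."""
        return (avg_win * n_win + avg_loss * n_loss) / (n_win + n_loss)

    # winners and winning days both dominate "by a large degree"
    print(f"{expectancy(0.012, 150, -0.006, 100):.2%}")  # 0.48% per day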

In this case, I am the opposite of the Impaler, just trying to unite the shards of sanity in this forum.

Theory without practice is dead, practice without theory is blind.

Jim Simons of Renaissance Technologies hedge fund is quoted as saying: “Past performance is the best predictor of success.” And he is right.

It is by studying what happened in the past that we can kind of “predict” what might happen in the future. This goes for stock prices as well. As a matter of fact, every listed public company is making forecasts all the time about what it wants to do going forward.

Not all public companies prosper, not all companies survive, but as Simons has also said before: “The things we are doing will not go away. We may have bad years, we may have a terrible year sometimes. But the principles we’ve discovered are valid.”

Renaissance has, for over 30 years, managed an average 39% CAGR net of expenses for its flagship fund. This is after fees of 5/44, much higher than the usual 2/20 for common hedge funds. Achieving this necessitated about a 66% compounded gross return on their automated trading strategies. Trading at Renaissance is done by computer programs. They have years and years of experience doing it and they can keep on doing it. This, at a minimum, shows that it is possible to achieve high long-term CAGRs, and Renaissance also has the track record to prove it (some classify this as fact and not as an opinion).

We do simulations for a lot of things in many domains. A jet pilot will have 100s of hours in a flight simulator before flying a real plane. Doctors will practice their surgeries on dummies before cutting open a real patient. Astronauts will simulate for months their next voyage in space. We simulate all the time. And past simulated results are the best indication we have that we can do these jobs to the best of our abilities. There are no guarantees, we all try to do the best we can with the acquired knowledge we have just like everybody else.

You either see humanity continue to better itself or you don't. Either way, you have decisions to make. Either way, those decisions can make you money. I design long-term trend-following trading strategies because just like Mr. Buffett, I do not intend to bet against the prosperity of America.

Now, if some think that everything is going down the drain, you will find that doomsday scenarios have few options. One of them is to short every stock, where you could also make it big. But note that this did not work so well over the past 200 years. Also, at the end of each day, all company shares on the planet are in someone's hands.

You could also opt NOT to participate in the investment game, and then whatever you might say about the markets is totally inconsequential. Having an “opinion” not substantiated by some facts is just an opinion, at most a guess, and its value could be considerably depreciated if not worthless.

When doing a simulation, I am not that interested in the final performance numbers per se. What I am looking for is the overall behavior of the trading procedures and whether those procedures will be applicable going forward. It is why I can definitely “predict” that future long-term market prices will be higher, and I will continue to design future strategies with that background upside bias in mind.

Any Renaissance fund other than Medallion has had a terrible year. Medallion is HFT. Go figure!

Guy's observations (I only read the first post at the top before sliding off my chair!) are valid: the focus on looking for alpha through non-market-price data (e.g. earnings) is reactive, and any backtested performance of strategies attempting to profit from them will be vulnerable to over-fitting, which seems to be Zeno's concern (data mining). I also think that Zeno has a profound observation that simple small "edges" need to be exploited systematically. These "edges" are glimpses of the underlying structure in the markets and not to be confused with the "laggy" alpha factors that Guy is critical of; you can be sure this is not the approach Jim Simons is taking. The focus on this site of looking to combine large numbers of small "alpha factors", each of which is laggy, is I believe a mistake. I started to sense this in the competition, when one of the requirements was to not have too much of a dependence on mean reversion.
What Quantopian fails to realise is that the Renaissance models are so good because they try to project deterministic structure and are predictive rather than reactive. Quantopian's defeatist position seems to deny that such structure exists. Quantopian does have awesome infrastructure and programmers (like the brilliant Dan), so why not create a pure-alpha market/competition where contributors can sell their alpha to hedge funds (in the same way that QuantConnect does)? This would be very simple using Alphalens and would prevent mediocre alpha being flattered by trading-system tricks. Just imagine if you had 300 Alphalens tear sheets for the 300 top submissions from the community of pure market-structure alpha factors; that would be valuable even to Renaissance.

Data mining is only one of my concerns, albeit a major one. An equally great concern is the uselessness and unreliability of many of the so-called "edges" which people are relying on, and which inevitably disappoint.

Most of what is written about, most of what is back-tested is old, tedious, repetitive nonsense. Even much of the machine learning applications for finance use the same tired old stuff.

An esteemed colleague pointed me in the direction of a machine learning library for finance on GitHub. I took a look at the documentation... and yawned. Perhaps there is some good stuff among the locked-up features, but the publicly available stuff covered (drum roll!) momentum, mean reversion, mean-variance optimization, and various tools to explore such features via machine learning.

The only area which really interests me is sentiment analysis. At least it is still relatively unusual. It may prove as big a crock of sh*t as all the boring old lagging indicators we are so familiar with. Then again, perhaps it can sniff out takeover targets, stocks about to be manipulated... some of the real edges I describe above. Perhaps not.

My point is that the vast majority of armchair finance researchers are stuck in a rut. Even if most of them are all theory and no practice. And thus doing no real harm.

Stock Portfolio Rebalancing

You start a stock portfolio with the intention of using scheduled rebalancing, meaning that the stocks in your portfolio are readjusted to a fixed weight on a yearly, monthly, or weekly basis. This portfolio management decision is simple; however, it does have ramifications.

An equal weight is easy to determine; it can be made proportional to the number of stocks (\(j\)) in the portfolio: \( w = 1 / j\). It does not say which stocks will be in your portfolio, only that the actual number of stocks will tend to \(j\) or less (\(\to \le j\)). Fixing the number of stocks to be traded will also set the initial bet size, which will depend on the available initial trading capital.

Another decision, part of the portfolio's setup phase, is setting the strategy's initial capital (\(F_0\)). Just having fixed the initial capital and decided to trade \(j\) stocks has also set the initial trade allocation. These portfolio management decisions are made prior to running any simulation or even going live.

The Number Of Stocks Traded

The fixed number of stocks in the rebalancing portfolio is independent of the strategy's trading logic or procedures. It has nothing to do with the actual stock selection process even though it will matter.

You cannot be biased on the number of stocks your trading strategy will deal with. There is no data mining in that strategy design decision, nor any kind of artificial intelligence or deep learning in selecting the “number” of stocks to trade. Nonetheless, there are some considerations and common sense to apply.

The number of stocks that will be treated in your portfolio is not the strategy's choice; it is your own, whether you want 5 stocks, 400, or 1,000+. It is not the strategy's code that is making that choice, but it will be important regardless.

Trading 1,000 stocks on $10,000 of initial capital should be considered ridiculous. The initial bets would be $10 per position, and a 2% average net profit on such positions would be $0.20. You would most certainly need fractional shares and zero fees for such a scenario. Also, to make $1M in profit would require 5,000,000 trades. I think the point should be clear. The number of stocks in your portfolio should be related to your initial capital in order to get a reasonable initial bet size. The formula for this is also easy: \((1 / j) ∙ F_0\), where \(F_0\) is again the initial trading capital.
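
A quick sanity check of that scenario, using the numbers from the paragraph above:

    def initial_bet(F0, j):
        """(1 / j) * F0 : the initial trade allocation."""
        return F0 / j

    bet = initial_bet(10_000, 1_000)  # $10.0 per position
    profit_per_trade = 0.02 * bet     # $0.20 at a 2% average net profit
    trades_needed = 1_000_000 / profit_per_trade
    print(bet, profit_per_trade, f"{trades_needed:,.0f} trades for $1M")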

A Simple Scenario

Say you want each position in your portfolio to represent a 1% risk exposure: \(w = 0.01\), then \(j = 100\), and the bet size will be \(F_0 / 100\). How big should \(F_0\) be to make it a risk-averse bet size while still making it a reasonable trading proposition?

Here is the rebalanced trading strategy equation reproduced below from my previous article: Another Walk-Forward https://alphapowertrading.com/index.php/2-uncategorised/379-another-walk-forward:

\(\quad F(t) = F_0 +\$X = F_0 + \Sigma (\mathbf{H} ∙ \Delta \mathbf{P} ) = F_0 + n ∙ x_{avg} = F_0 + y ∙ rb ∙ j ∙ E[tr] ∙ u(t) ∙ E[PT] = F_0 ∙ (1 + g +\alpha - \Sigma exp_t)^t\)

We are interested in the part \(F(t) = F_0 + y ∙ rb ∙ j ∙ E[tr] ∙ u(t) ∙ E[PT] \) representing the rebalancing portfolio, of which the variables \(y, \, rb, \, j, \, E[tr]\) are by definition positive numbers whose product equates to \(E[n]\), the expected number of trades over the entire trading interval. The number of trades is necessarily a positive number or zero: \(n \ge 0\).

Rebalancing Schedule

The rebalancing schedule determines the number of trades per rebalance. For example, over a 20-year period (\(y = 20\)), on 100 stocks (\(j = 100\)), rebalancing every month (\(rb = 12\)), we would get at most \( y ∙ rb ∙ j = 24,000\) trades. However, not all stocks are readjusted at each rebalance; only a fraction are, as given by the expected average turnover rate \(E[tr]\). The estimated turnover rate can be obtained from the very first simulation done on a strategy.

With an estimated 80% turnover, the expected number of trades would be \(E[n] = y ∙ rb ∙ j ∙ E[tr] = 19,200\) over those 20 years. For a 10-year period, the estimated number of trades would simply be cut in half. Extending the horizon to 30 years is also an easy estimate to make: \(E[n] = 28,800\) trades.
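
A minimal sketch of that estimate:

    def expected_trades(y, rb, j, e_tr):
        """E[n] = y * rb * j * E[tr] for a scheduled-rebalancing portfolio."""
        return y * rb * j * e_tr

    print(expected_trades(20, 12, 100, 0.80))  # 19,200 trades over 20 years
    print(expected_trades(30, 12, 100, 0.80))  # 28,800 trades over 30 years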

Without any trading logic or other trading procedure than the rebalancing, you already know, or have a good approximation of, how many trades will be performed over the years. You know nothing about the entry and exit points, and therefore nothing about the strategy's profitability, but you do know that those trades will occur. There is nothing mysterious in that part of the payoff matrix equation.

Furthermore, the payoff matrix equation also says the following: \(F(t) = F_0 + \Sigma (\mathbf{H} ∙ \Delta \mathbf{P} ) = F_0 ∙ (1 + g +\alpha - \Sigma exp_t)^t\), where whatever your trading strategy does, it will translate to how much you put on the table, at what growth rate you could operate, and for how long. The formula has a provision for the added trading skills (alpha) and the added incurred expenses. You could put your numbers into that formula to make an estimate of where you would want to go and see what would be required to get there.
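
For instance, a minimal goal-seek on that compounding form (the target and horizon below are hypothetical):

    def required_rate(F_target, F0, years):
        """Net compounding rate (g + alpha - expenses) needed to
        turn F0 into F_target over the given number of years."""
        return (F_target / F0) ** (1 / years) - 1

    # hypothetical goal: turn $100k into $2M over 20 years
    print(f"{required_rate(2_000_000, 100_000, 20):.2%}")  # ~16.16%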

Again, no need for anything artificial.

The rebalancing fixes the game that will be played.

The structure of the program itself dictates the number of trades that might be executed over its next 20+ years. You can reasonably “predict”, “forecast”, “guess”, “extrapolate”, or “estimate” the number of trades to be executed, \(E[n]\).

Put your numbers into the above equation based on your own rebalancing strategy. Then see how you could improve on those numbers. The numbers that can have an impact on the bottom line are part of the payoff matrix equation above. It is up to you to make those improvements; the code will not self-generate, that's for sure. You have to do the job yourself or have someone do it for you.

The Trading Unit Function

The trading unit function \(u(t)\) should be of your own design, or at least, you should strive to make it so. The bet size can at times be positive or negative, representing the amount placed on longs or shorts respectively. This function is often neglected, and yet it is one of the most important in a trading strategy.

That part of the payoff matrix equation, \(u(t) ∙ E[PT] \), is equal to \(x_{avg}\), the average net profit per trade. And this says that the more you trade, meaning the larger \(n\) might be, the more trading becomes a statistical problem. If you do 144,000+ trades (for example, see https://alphapowertrading.com/quantopian/Ranked_Selection_NB-4.html), you cannot consider those trades on an individual basis, but only as part of the average. The 144,001st trade will not have that much of an impact on the overall average net profit per trade unless it is a very large outlier.

If you rebalance 400 stocks every week for 20 years with an expected 60% turnover rate, you should make an estimated \(20 ∙ 52 ∙ 400 ∙ 0.6 = 249,600 \) trades, more or less. All you can catch is the weekly increase for any of the stocks in the portfolio. The price of any of those stocks will not move more than it otherwise would because YOU are trading them. Stocks will continue to move up and down on a quasi-unpredictable basis, whether you predict what is to come or not. Stock prices are not totally randomly distributed. They do have long-term memory, as is shown in any long-term market chart. However, they do exhibit a lot of chaotic and random-like price movements. Stocks are still slightly biased to the upside, and therefore any long-term strategy's payoff matrix should also account for this bias.

For over 200 years, the US market has had a 20-to-30-year rolling-window return close to a 10% CAGR, dividends included. You are somehow forced to look at the problem with the same kind of perspective. The market has survived and prospered for a long time; will your trading strategy do the same?

Doing little more than buying a low-cost index fund can get you there, meaning getting the average market return. Therefore, why program something that you will have to monitor for 20 to 30+ years knowing that you will not outperform a common market index?

We need to be realistic in what we do to achieve more than the market index. We need to be consistent, with a common-sense approach made to last over the long term, no matter what you want your payoff matrix to do.

No one is discussing the equations used in my strategies; no one has proven them wrong either. After much study of these equations, I came to the conclusion that if you wanted more than the other guy, you had to do more than just “try” to do better, even if it meant reengineering the methods you intended to trade with. Also, there is a multitude of solutions to the above equations.

Due to the periodic rebalancing, you get \(E[n] = y ∙ rb ∙ j ∙ E[tr]\). It gives you a pretty good approximation of the number of trades that will be executed over the life of the portfolio. Three of those variables are set from outside the strategy, as if administrative decisions. The expected turnover \(E[tr]\) will depend on the trading methods used. However, your first simulation will give you an approximation of that number, and therefore it is not so hard to get.

The effort should be put on \(x_{avg} = \frac{\Sigma (\mathbf{H} ∙ \Delta \mathbf{P} )}{n}\), the strategy's real trading edge, and this means concentrating on \(u(t) ∙ E[PT] \). It is why \(u(t)\), the trading unit function, is so critical to your trading strategy. The expected profit target \(E[PT] \) could originate from a generalized stop-profit function, a profit target, some kind of trailing stop loss, or some hybrid combination thereof.

However, due to the constant weekly rebalancing, resulting in mostly trading on market noise, we should not expect this profit margin to be that high. Nonetheless, we can still find ways to have it slowly increase with time. That was the challenge. And I think that is what was demonstrated in my extensively modified version of the above-cited program, where the trading unit function \(u(t)\) was put on steroids.

My latest article deals with the need to make an estimate of the average net profit per trade for a rebalancing portfolio such as the one highlighted here.

In previous posts, the case was made for making a reasonable estimate of the number of trades that will be executed over the long term for a rebalancing portfolio. This new article gives a preliminary understanding of the problem, hopefully leading to a reasonable long-term estimate for the outcome of a trading strategy as expressed in a rebalancing portfolio's payoff matrix:

\( F(t) = F_0 + \$X = F_0 + \Sigma (H ∙ \Delta P) = F_0 + E[n] ∙ x_{avg} \)

\( F(t) = F_0 + E[n] ∙ x_{avg} = F_0 + E[n] ∙ E\left[ \frac {\Sigma (H ∙ \Delta P)}{n}\right] = F_0 + y ∙ rb ∙ j ∙ E[tr] ∙ u(t) ∙ E[PT] \)

Part of that equation is predetermined by purely administrative decisions related to the rebalancing procedures. For example, even before running a simulation, we set the initial capital \( F_0 \), the rebalancing interval \(rb\), the number of stocks to be traded \(j\), and the number of rebalances \(y ∙ rb\). The case is made that it is by better understanding the above equations that we can design more productive long-term trading strategies.

This kind of trading strategy could be operated by hand. It is rebalanced only \(rb\) times during the year. A simulation is done to illustrate that it was at least possible to do so over an extended time period using historical data. The simulation is there to confirm that the trading procedures used could have worked in the past; and since the same principles would apply going forward, although giving different results, you could feel confident that your trading strategy could survive over the long term and help you prosper.

Link to article: https://alphapowertrading.com/index.php/2-uncategorised/382-average-net-profit-per-trade