Reengineering For More

I stated previously, in the A Cloud & AI Strategy thread, that if you wanted more you could add a little more leverage, and since the leveraging compounds, it would have a direct impact on overall performance. Evidently, it would also have an impact on the portfolio metrics.

I do not like to make such statements and then not show that it is, in fact, what the program would do. The following should be compared to the 264,842% scenario presented in my second-to-last post in the cited thread:

By simply presenting something like this you hit what I would call the credibility factor. Can this really be done? To which I have to answer: yes. The program has the same architecture as Stefan's originally presented program. What I added to his “template” are my own reengineered long-term trading functions, procedures, and protection measures in order to allow the strategy to trade more and expand its average profit margins.

I dealt with the whole payoff matrix as a block, from start to finish, and not just over its original moving lookback period, giving the strategy a long-term outlook.

It is not the first time I have taken a published trading strategy and literally made it fly. Sure, there is the use of leveraging, which should be considered, in certain scenarios, a double-edged sword. It works fine if your strategy has a high enough positive return to cover the added costs associated with the leveraging; that is, the added return has to more than compensate for the added cost of leveraging. Otherwise, you are shooting yourself in the foot, almost by definition.

Here are the portfolio metrics with the added leverage:

The max drawdown increased from -31.4% to -33.7%. It occurred mostly during the financial crisis and its immediate aftermath, where the actual average market drawdown exceeded -50%. For the added “pain” (-2.3% of added drawdown), the portfolio generated some $3.8B more.

The smallest positive return occurred in 2008 during the financial crisis where short positions were taken as a protective measure instead of riding it out or hedging positions. Note that all years had positive returns, with a remarkable positive push in the 2009 recovery.

Increasing performance did not require that much of an effort. Gross leveraging went from an average of 1.62 to 1.63, and all the while, the overall strategy's stability remained at 1.00. It required accepting a slightly higher volatility measure, which went from 30.7% to 30.9%. Remarkably, the beta slightly improved, going on average from 0.53 to 0.52. These moves are small enough to be attributed to the randomness of the draw and to portfolio variance.

The portfolio statistics generated:

The increase in CAGR terms is not that much either. However, this is compounding, and the impact over time can be considerable, which is why one should have a long-term vision in the development of his/her trading strategy. The CAGR came in at 70.5% compared to 68.9%: a difference of 1.6% return-wise, and it produced $3.8B more.

The strategy can be pushed even further. I have tried it, and it can be. However, this is where the credibility factor comes back in. Already, performance is at such a level that few would even consider such results achievable when, in fact, with the change in perspective, it is relatively easy. But then, we all have to make choices.

27 responses

You might want to keep this code IP close to the chest, but going back in time through some of your older posts, is there a backtest you'd be willing to share and talk about? Perhaps in another new thread? It could maybe also be used as a tutorial on some advanced math?

@Blue, over the past 2 years, I have covered a lot of the inner workings of my trading methodology here and on my website. I find it relatively simple and hope that from what has been presented, anyone could reengineer their own strategies to make them fly. This way everyone would be responsible for whatever they do.

What I do is based on a different understanding of the game since I do accept a lot of randomness and its implications in the evolution of stock prices. I would recommend you look especially at my recent work, but you will find the same approach covered in different ways since 2011 and prior in my free papers.

I consider that I only scratched the surface of possibilities and that there is a multitude of ways of finding acceptable compromises in this profit generating endeavor. Looking for the ultimate trading strategy, for me, is simply utopian. Just find one you like, or a dozen if you wish, and then live with it. You will not be able to know which would have been the best until you reach the end-game anyway, and that is 20 or 30+ years from now.

All we can do is design decent trading strategies, do our homework the best we can by testing under the most realistic market conditions, assume all frictional costs and try, in some way, to forecast for the long-term.

It is why I got interested in the above strategy in the first place. It dealt with a niche market that is poised to continue to prosper over the coming years. It is like trying to answer the question: will we need more computing power in the years to come? Having been in computing and software since the '70s, I definitely answer yes. We will need a tremendous amount of it, and the companies involved in supplying us that power will benefit, just as we will from using more powerful machines distributed all over the world, especially with the advent of 5G and autonomous almost everything.

De Prado in a recent lecture for Quantopian said: “we look at past stock prices, but we also erase their history when doing so” or an equivalent. I think he is right. Just as when he showed that the Sharpe ratio would tend to increase the more we did strategy simulations, and therefore, would become unreliable as a forecaster of what is coming our way.

But the reasons he gives for the phenomenon might not be the right ones. The equation for an asset's Sharpe ratio is: \(SR_i = \frac{E[r_i - r_f]}{\sigma_i} \), which has a long-term historical value of about 0.40 when considering the average for the whole market of traded stocks.

An average is just that, an average. But the problem changes when you add some alpha into that picture as in: \(SR_i = \frac{E[r_i - r_f] + \alpha_i}{\sigma_i} \). It transforms the return equation into: \( F(t)_i = p_0 \cdot (1 + E[r_i - r_f] + \alpha_i)^t \) or \(F(t)_i = p_0 \cdot (1 + \beta \cdot E[r_m] + \alpha_i)^t \). If you play the market average, the beta will be one.
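As a quick numeric illustration (a minimal sketch; the market return, risk-free rate, volatility, alpha, and horizon below are assumed values, not measurements):

```python
# Sharpe ratio with and without added alpha, and the compounded outcome F(t).
# All inputs are illustrative assumptions.
r_m, r_f, sigma = 0.10, 0.02, 0.20   # assumed market return, risk-free rate, volatility
alpha, p0, t = 0.05, 1.0, 30         # assumed alpha, initial price, years

sr_no_alpha = (r_m - r_f) / sigma               # ~0.40, the long-term historical average
sr_with_alpha = (r_m - r_f + alpha) / sigma     # the added alpha shifts the ratio upward

f_no_alpha = p0 * (1 + r_m - r_f) ** t
f_with_alpha = p0 * (1 + r_m - r_f + alpha) ** t  # compounding widens the gap with time

print(sr_no_alpha, sr_with_alpha)               # about 0.40 vs 0.65
print(round(f_no_alpha, 2), round(f_with_alpha, 2))
```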

The consequence is that the Sharpe ratio is not rising, almost as a sigmoid, because you are making more Monte Carlo simulations; it is due to the very nature of the average price movement itself. It is a compounding return game, and the general market does have this long-term upward bias.

De Prado demonstrated in one chart the behavior of the Sharpe ratio as the number of tests increased. I accept the curve but not necessarily the reason. The Sharpe ratio should look more like a sigmoid function due to the Law of diminishing returns.

But, because we erase the long-term history (price memory) in our short-term calculations, we are erasing one of the components of another representation for security prices as viewed in a stochastic equation, for instance: \(dp_t = \mu \cdot p_t \cdot dt + \sigma \cdot p_t \cdot dW_t \), where we technically and practically annihilate the drift component \( \mu \cdot p_t \cdot dt \). Doing this removes the underlying long-term trend and leaves us with a Wiener process.
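To make the point concrete, here is a minimal simulation sketch of \(dp_t = \mu \cdot p_t \cdot dt + \sigma \cdot p_t \cdot dW_t \) with and without its drift term (the drift and volatility values are assumed, and the same random shocks are reused for both paths):

```python
import numpy as np

# Geometric Brownian motion: dp = mu*p*dt + sigma*p*dW (simple Euler discretization).
# Setting mu = 0 "annihilates" the drift and leaves only the Wiener part.
np.random.seed(42)
mu, sigma = 0.08, 0.20                  # assumed annual drift and volatility
steps, dt = 30 * 252, 1.0 / 252         # ~30 years of daily steps
dW = np.random.normal(0.0, np.sqrt(dt), steps)   # same shocks for both paths

p_with_drift = 100 * np.cumprod(1 + mu * dt + sigma * dW)
p_no_drift   = 100 * np.cumprod(1 + sigma * dW)  # drift removed

# The long-term upward bias comes from the drift term, not from the noise.
print(round(p_with_drift[-1], 2), round(p_no_drift[-1], 2))
```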

This can be viewed in a simple chart too. I took the Sharpe ratio chart from the last test presented.

What the above chart shows is that the Sharpe ratio was relatively contained. Not knowing better, we might assume that it operated within boundaries as an erratic cyclical function to which was superimposed a random and noisy signal, when it should have been increasing with time as described. Where did the expected rise go since there was alpha generation?

We need to better understand the math of the game, and if necessary, even question old premises that might have applied to Buy & Hold scenarios and the efficient frontier but are ill-fitted to describe dynamic trading systems having compounding alpha generation.

@Guy,

The maths is very impressive, would it be possible for you to show a use case on a simple algorithm for those less mathematically inclined? I know I for one can understand maths far better if it's in code form.

+1
Guy,
You already have at least 3 not so bad students.
Maybe it's time to open Guy Fleury's school "Quantopian Practical Reengineering for More"?

Agree with Jamie. If there is a simple example showing how to do 're-engineering for more', it would be far easier to understand the concept behind it.

@Jamie, I have absolutely no obligation to post anything here, just like anyone else. However, if I post something, I stand ready to explain and discuss within my own IP disclosure limits what a trading strategy does and for what reasons it does it.

This way, anyone could design their own variations on the same themes. It might even help someone design something new. For me, the advantage is that you become the designer of your own thing and therefore become responsible for your own stuff. I find that acceptable. For anyone sharing snippets of code, templates, tearsheets, and trading ideas, as I have said before: thanks. Of note however, not many of those shared things have value. Am I explicit enough?

I think the best way to understand this trading strategy is to look back at another one which is based on about the same trade mechanics. Please, read again my comments from start to finish in the Robo Advisor thread where the CVXOPT optimizer controlled trading activity. Ignore the charts and other comments. Just follow the linear strategy description given, and at the end, ask the question: how could I, in my own way, reconcile into code what was said? Because that is the question.

Both this strategy and the Robo Advisor thing have different architectures. Yet, both were able to generate in excess of 200,000%+ over 14 and 15 years respectively. All of it due to their respective trading mechanics. In both cases, I used a long-term perspective in my strategy design, and looked at the strategy's payoff matrix as this big block of data (H∙ΔP) over the entire trading interval. I cannot do anything with the price matrix P, it is recorded history. However, I can add some elaborate and sophisticated function(s) to control the inventory part of the equation in any which way I want: Σ(H∙g(t)∙ΔP). It is entirely at my discretion as to what I want to put into it.
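In code, that block view of the payoff matrix reduces to a couple of array operations. A minimal sketch on random data (the holdings matrix and the g(t) boost below are hypothetical placeholders, not the strategy's actual functions):

```python
import numpy as np

np.random.seed(1)
days, stocks = 3500, 25                        # ~14 years of daily bars, 25 stocks (illustrative)
P = 100 * np.cumprod(1 + np.random.normal(0.0004, 0.02, (days, stocks)), axis=0)
dP = np.diff(P, axis=0)                        # price difference matrix, ΔP

H = np.full((days - 1, stocks), 100.0)         # holdings matrix H (hypothetical constant inventory)
base_payoff = np.sum(H * dP)                   # Σ(H∙ΔP): total profit over the whole interval

g = 0.0002                                     # hypothetical per-bar inventory growth factor
g_t = (1 + g) ** np.arange(days - 1)[:, None]  # the time function applied to the inventory
boosted_payoff = np.sum(H * g_t * dP)          # Σ(H∙g(t)∙ΔP): same prices, reengineered inventory

print(round(base_payoff), round(boosted_payoff))
```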

I am a big fan of Mr. Buffett's long-term investment methods which I try to incorporate into my strategies as much as possible. However, on top of this long-term view of the market, I use trading as an additional funding mechanism which enables raising performance by recycling generated profits in new trades, a positive reinvestment feedback loop.

All of it is governed by equations, of which the most important, as described in my latest book is:

\( \displaystyle{q_{i, j} \cdot p_{i, j} = \frac{\bar \gamma_j \cdot F_0 \cdot \kappa_j \cdot (1 + \bar \gamma_j \cdot (\bar r_m + \bar \alpha_j + \bar \alpha_r + \bar \psi_j + \bar \varphi_j) )^t}{max(j)}} \)

where the section \( (1 + \bar \gamma_j \cdot (\bar r_m + \bar \alpha_j + \bar \alpha_r + \bar \psi_j + \bar \varphi_j) )^t \) holds the set of controlling parameters.

Notice that the bet size \(q_{i, j} \cdot p_{i, j} \) is not only an exponential time function, but is also modulated by these compounding parameters. This way the system remains trade agnostic, scalable, and can compensate, as the portfolio grows, for return degradation due to the Law of diminishing returns. It also gives it its exponential equity curve.
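As a bare-bones translation of that bet-size equation into code (every parameter value below is a hypothetical placeholder; the book defines what each of the barred terms actually measures):

```python
def bet_size(t, F0=10_000_000, max_j=25, gamma=1.3, kappa=1.0,
             r_m=0.10, alpha_j=0.04, alpha_r=0.02, psi=0.01, phi=0.01):
    """Ongoing bet size q_ij * p_ij per the controlling equation.
    Every default value is an illustrative placeholder, not a calibrated setting."""
    growth = (1 + gamma * (r_m + alpha_j + alpha_r + psi + phi)) ** t
    return gamma * F0 * kappa * growth / max_j

# The bet size itself compounds with time t (in years here):
for t in (0, 5, 10, 15):
    print(t, round(bet_size(t)))
```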

In the strategy's development process, it is at the end that I added protective measures (for instance, for periods of market turmoil or for the financial crisis), and not at the start. There is more than one way to do about the same thing, even if we choose different methods to get there. We could all design variations of those protective measures, and they might span more or less the same time intervals. The point is just to have some in, somehow.

Ending with the protective measures, instead of starting with them, enabled me to know how far the trading strategy could go. This way, I could see not only how high it could go but also how low it could get if I did not put in the downside protection. Doing this also justified why it was wiser to put in these protective measures. Adding protective measures at the end also let me see how high the strategy could soar in periods where market turmoil was more subdued and where more upward pressure could be applied, knowing that the strategy's controlling functions could support it.

In the Robo Advisor trading script, the trading activity is delegated to the CVXOPT optimizer. Some stated that the program, in that case, was over-fitted, and even issued disclaimers.

I understand that the optimizer, by definition, has for its mathematical function to optimize whatever it is presented with. But it is like a black box: you supply it some data and it gives out its feasible answer, whether you like the answer or not. Therefore, the over-fitting accusation might be hard to accept since the optimizer did not care what it was fed. Neither could you know, in any way, what it would do next: buy, sell, short, hold positions, at what times, at what prices, in which stocks, and in what quantities. How can you optimize, or curve-fit, what you cannot see?

How can you over-optimize an optimizer? Does it not do its job the first time around? Shouldn't there be nothing left to optimize after you optimized? BTW, should an optimizer over-optimize its job, then it would put in question any trading strategy using one, including all strategies participating in the contest, and might even render all those strategies useless.

The optimizer cannot discriminate or single out a specific strategy and declare it bad! Only people do that without seeing the code, without understanding what a strategy is really doing, and without having supporting data of any kind to justify their “opinions”. But, they will give them out anyway.

You can over-fit if you are deliberately seeking the best parameter values for your trading strategy. But here, trading decision control was passed on to the optimizer to do whatever it does.

With the current strategy, trading its 25 stocks, the controls are in the program, no trading decision delegation. And yet, you also get interesting results. Here, you control the equations and the degree of trade aggressiveness as demonstrated. Nonetheless, you can still push for higher performance even if you do not use an optimizer.

Two very different trading methods, each adapted to their respective designed objectives and controlled by equations. This is not how people design their trading strategies here. It is what makes it different and innovative.

It also shows that there are other ways to design strategies. It does not say that those designs are bad, only that they are different, and in these two cases, more than just productive, even if I have to say so myself.

Hi Guy,

I completely understand you not wanting to share an algorithm and appreciate the time you've taken to respond. If you're sharing equations though, would it be possible for you to define the variables?

I'm looking at the equations you're sharing with no idea what F0, kj, j or any of these variables are. Are they stock related? Are they based on the movement of the stars? Is it to do with the price of eggs in the supermarket near where I live? The equation is admittedly pretty, but completely meaningless to me (or to even those with the high level of maths talent you obviously possess) without insight into what the variables are.

@Jamie, those variable names express averaged-out functions: dampers, boosters, accelerators, amplifiers, and controllers. As their names imply, they are made to increase or decrease the impact of the controlling functions as the strategy moves along, each playing its part somewhere in the program with the meaning you would give to those names.

The objective is to gain some control over where your strategy is going and at what speeds. You want to push to the upside when the market is going up and evidently to the downside when it is going down. Nonetheless, you put in added precautions to the downside using dampers subject to controllers. However, to the upside, you can give amplifiers, accelerators, and boosters a little more room.

The ongoing bet size \(q_{i, j} \cdot p_{i, j}\) reads: quantity on trade \(i\) for stock \(j\) at price \(p_{i, j}\). All those bets are part of the cost matrix (H∙P), which has for elements \((q_{i, j}, p_{i, j})\) on the recorded trade date. Your cost matrix holds all the bets from \(i = 1\) to \(n\) taken over the life of the portfolio. Another expression for all these bets would be their sum as a vector: \( \displaystyle{\sum_i^n (q_{i, j} \cdot p_{i, j})} \).

It took me 197 pages to explain those variables. It would be difficult to describe what they do in a few sentences and out of context. They were added one by one as the development cycle progressed and as it was chronicled in the forums. Nonetheless, the objective is to maximize the payoff matrix: \(\sum (H∙g(t)∙ΔP)\) where \(g(t)\) is this generalized function which we try to master.

I go for \(g(t)\) as an exponential function, giving for payoff matrix: \(\sum (H \cdot (1 + \bar r(t))^t \cdot \Delta P) \), which should, at least, outperform market averages over the long term: \(\sum (H \cdot (1 + r(t))^t \cdot \Delta P) > \sum (H_{spy} \cdot \Delta P) \). But that is my choice. Someone else might like it more subdued or be even more aggressive. That would be their choice.

The point is: trying to control where you want to go by managing the ongoing inventory while trading over some core positions, and using the generated profits to increase the number of trades and the bet size. You create this positive feedback loop which will have an impact from start to finish on your portfolio since every penny you make will be reinvested. And because these variables are compounding, it will accelerate the performance of your portfolio to the point that your equity line will come to display its constructed, or reengineered, exponential nature.

You could already design your objectives using the portfolio equation: \(F(t) = F_0 \cdot (1 + \bar g(t))^t \), from which you can extract the needed CAGR to get there: \( CAGR = \left( \frac{F(t)}{F_0} \right)^{1/t} - 1 \). It is up to you to design your trading strategy to get there.
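For instance, a couple of lines recover the CAGR needed to hit a chosen objective (the target value and horizon below are assumed, purely for illustration):

```python
# CAGR = (F(t) / F0)^(1/t) - 1, solved for an assumed target and horizon.
F0, Ft, t = 10_000_000, 500_000_000, 20   # assumed initial capital, target, years
cagr = (Ft / F0) ** (1 / t) - 1
print(round(cagr, 4))                     # ~0.2161, i.e. a ~21.6% CAGR would be required
```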

You need to design the functions in between, modulate them to market swings and make sure that your bet size is growing as your portfolio grows. The long equation in my last post should help give you ideas.

Regardless, you could start where I did, that is without those controlling functions. This would reduce the equation to:

\( \displaystyle{q_{i, j} \cdot p_{i, j} = \frac{F_0 \cdot (1 + \bar r_m )^t}{max(j)}} \)

where \(F_0 \) is your initial capital, \(\bar r_m\) is the average market return, and \(max(j)\) is the number of stocks in your portfolio. Even as a reduced equation, it still suggests making your bet size grow at the same rate as the average market. And because of \(max(j)\), you are in a fixed-fraction-of-portfolio trading environment, which assures scalability.
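The reduced equation in code (the initial capital, market return, and stock count are the usual illustrative values):

```python
def reduced_bet_size(t, F0=10_000_000, r_m=0.10, max_j=25):
    """Bet size with all controlling functions stripped out:
    q_ij * p_ij = F0 * (1 + r_m)^t / max(j).  Inputs are illustrative."""
    return F0 * (1 + r_m) ** t / max_j

print([round(reduced_bet_size(t)) for t in (0, 10, 20)])
# [400000, 1037497, 2691000] -> the bet grows at the average market rate
```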

The above described trading strategy uses the CVXOPT optimizer.

First, let it be said: it is extremely difficult to extract decent alpha using an optimizer.

The optimizer can only give you what it sees, and you have no control over how it will trade.

It will simply do its thing. That is: determine when to buy and sell, at what prices and in what quantities.

In fact, in using an optimizer, you delegate the whole trading process to this mathematical contraption. I wrote a book on the above-cited trading strategy as a demonstration that it was feasible. It concluded with the above-displayed results. Yet, it seems as if no one is exploring these extraordinary possibilities.

I am very critical of my work. I do not let anything pass. It is only with a backtest over some long-term trading interval and under harsh trading conditions (that is including all frictional costs) that I might accept the outcome of a trading strategy as something that might be worthwhile.

However, from my observations, it goes like this: if you want more than your peers, then you will have to do more. It is not by looking at the problem the same way as everyone else and then trying to duplicate what they did that you will get different results. It can only lead to about the same results or a variation on a theme.

A simulation should be designed to answer: what would have happened IF... You have no access to future data so you use past data as simulation ground trying to extract some justifiable alpha out of this big blob of market data subject to the whims of quasi-unpredictable variance.

However, the trading rules and procedures you set in your backtest will nonetheless be hard-coded in your program and therefore will also do the same thing they are programmed for going forward. This does not mean you will get the same answer or the same performance level, only that whatever triggered a trade in the past will also be triggered going forward should the same conditions be met.

A trading program is like a bounded obstacle course where a trade is triggered each time the price hits a wall or crosses a preset barrier. You do not know when such a parameter crossing will occur, but you do know that if it is crossed, a trade will be triggered and thereafter generate an unspecified profit or loss.

Some think that because they cannot achieve very high returns on their own that it is impossible for anyone else to do so.

Well, the results above, using the CVXOPT optimizer, definitely disagree with that statement. You can do more, but it will require that you actually do more, even if it is with what I would call amazingly simple math wizardry.

Improving overall portfolio performance over the long term is not that hard to do. However, you will need a long-term vision of things to do so.

We all know the future compounding value formula: \(Cap. \cdot (1+r)^t\).

Say you want your long-term portfolio performance to produce twice as much as it could and wonder how much more return, or effort, would be needed to accomplish the task.

This is requesting: \(2 \cdot Cap. \cdot (1+r)^t = Cap. \cdot (1+r+g)^t\). And the question is what is the value of \(g\) over the trading interval \(t\)?

For instance, on \(Cap. = 1,000,000\), after one year we would be requesting with \(r=0.10\) that the outcome be: \(2,200,000\). This is not just doubling \(r\), it is doubling the outcome. To do this, you would need \(r+g = 120\%\) or \(g = 110\%\) !

Understandably, our trading systems do not generate that kind of performance at will. However, if you give it time, the problem gets trivialized, meaning that the added effort to double one's performance might not require that high a \(g\) number.

The formula, for \(t = 20\), would require adding \(3.88\%\) to the \(10\%\) base scenario, meaning that a CAGR of \(13.88\%\) would be sufficient to double one's outcome compared to the base \(10\%\) return.

If you add more time, like going for \(t = 30\) years, the added push will drop to \(2.57\%\). And going for 40 years, a \(g = 1.92\%\) would have done the job. This added \(g\) is acting over the portfolio's entire trading interval. To put this in perspective, it is like requesting an added 2% to your average profit target. On a 100-dollar stock, that is requesting 2 dollars more for an exit. Doing simulations on this, you should observe that most of those requests would be honored. It is where a long-term vision of things can leave its mark.
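Those numbers follow directly from solving \(2 \cdot Cap. \cdot (1+r)^t = Cap. \cdot (1+r+g)^t\) for \(g\), which gives \(g = (1+r) \cdot (2^{1/t} - 1)\). A quick check:

```python
def g_to_double(r, t):
    """Extra return g needed so that (1 + r + g)^t = 2 * (1 + r)^t."""
    return (1 + r) * (2 ** (1 / t) - 1)

for t in (1, 20, 30, 40):
    print(t, round(g_to_double(0.10, t), 4))
# 1: 1.1 (110%)   20: 0.0388   30: 0.0257   40: 0.0192
```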

You know, even before you start, that the added effort in reaching for more might not be that elusive or that hard to get. But, you will need to plan for it, and most probably, need to reengineer your trading strategy.

Adding \(g\) is comparable to adding some long-term \(\alpha\), and it is generated by your own program routines. You have an example of applying such procedures in this thread (see my posts above) where I reengineered a trading strategy to produce more.

BTW, it is the same problem whether you have \(Cap. = 10,000,000\), \(Cap. = 100,000\), or \(Cap. = 1,000,000,000\) to manage; \(g\) is the same.

The above trading strategy was reengineered to be controllable. We can be more aggressive by adding more pressure to its controlling functions, or slow it down at will if we consider it too much or feel it is going too fast. It is part of the advantage of having controllable portfolio-level functions rather than having adaptive or fixed trading parameters. It remains a compromise between individual preferences and maximizing long-term objectives.

To see more on how this trading strategy evolved (its development was chronicled live) see my posts in the following thread: https://www.quantopian.com/posts/built-robo-advisor where, step by step, this strategy was modified and enhanced from underperforming its own Buy & Hold scenario to the point of making it an awesome long-term performer. There is no magic trick in there, but a lot of math was applied in its reengineering.

When we look at the trading problem with a long-term perspective, we start to see things that are entirely related to how our trading strategies are structured and how they will behave over time.

It was shown in the last post that to double overall portfolio performance only required adding peanuts, alpha-wise, to a strategy on the condition we were ready to give it time. It is a compounding game with the end-game formula: \(Cap. \cdot (1 + r)^t\). Therefore, all the emphasis should be on r, t, and the initial capital.

However, this basic scenario can be enhanced by adding some alpha (here given as g since it can be self-engineered): \(Cap. \cdot (1 + r + g)^t\).

As an example, the 30-year scenario starting from a 10% CAGR base only needed g = 2.57% in added return to double its outcome. The same base scenario would need g = 4.10% to triple it over those same 30 years. Only 1.53% more alpha would enable one to triple the overall outcome compared to just doubling it.

The following chart shows the value of g needed, for a given number of years, to generate twice the outcome obtained without it.

Technically, because we are giving it time, it becomes like adding peanuts to the trading strategy. And from it, we could greatly improve performance. The chart shows that if you spread the task over a longer trading interval, it becomes easier and easier to achieve.

In numbers, starting with the usual $10,000,000 cap. in Quantopian scenarios with a 10% CAGR as base, the end value would be $174,494,023 after 30 years. Adding g = 2.57% would raise the total to $348,988,045, while using g = 4.10%, it would produce as outcome $523,482,068. Money-wise, this is not peanuts anymore!

You could take the above scenario and multiply it by 10. Simply add a zero to the initial capital and the outcome will be in billions, r and g would remain the same. This shows how valuable the added g can be to a trading portfolio.

If you prefer to scale it way down by a factor of 1,000, it would still work; g and r would remain the same. But then, be prepared to drop the last three digits and literally play with peanuts and for peanuts, as if throwing away its tremendous potential due only to a lack of capital.

What the governing portfolio strategy equation presented in a prior post said is that we are not restricted to using only one bag of peanuts; we could throw several of them in at a time to make it more interesting, giving: \(Cap. \cdot (1 + r + g_1 + g_2 + g_3 + g_4)^t\).

A little nudge from routine #1 when and where applicable, another from routine #2 for its supportive presence, a little push here and there for the other enhancers, amplifiers, and dampers you put in the trading strategy. All those bits and pieces have as their purpose to gradually elevate overall portfolio performance over the long term, where it counts.

Yet, their individual contributions might be barely visible over the short term. Their power resides in having been continuously applied in a compounding environment for a long time. A few peanuts here and a few more there; it all adds up. All those small profits we feed back into our trading strategy will keep compounding, enabling the strategy to trade more and thereby profit even more.

We can design our stock trading strategies to do whatever we want. However, most often it just turns out to be whatever we can. These strategies could be based on about anything as long as they remain relevant to our intended objectives. Also, they actually have to be feasible in the real world and be able to survive over the long term.

What is the use of a stock trading strategy that will blow up in your face at some time in its near future and completely destroy your portfolio? How about if it is not even designed to outperform market averages?

In my previous post, it was shown that if the engineered alpha is spread out over time, it might not take that much to double or even triple a strategy's outcome. The formula is simple enough: \(2 \cdot Cap. \cdot (1 + r)^t = Cap. \cdot (1 + r + g)^t\). Even if the base return (r) was 20%, as with Mr. Buffett's long-term CAGR, for instance, an added g = 2.80% would be sufficient to double the outcome while adding g = 4.48% would triple it. It is not a question of doubling the 20% CAGR, it is simply increasing it by 14% to double the outcome, or by about 22% to triple it over those same 30 years.

To keep (r) as the market's average return and secular trend, I will use alpha for the above-average return. And from there add this manufactured g to the formula: \(2 \cdot Cap. \cdot (1 + r + \alpha )^t = Cap. \cdot (1 + r + \alpha + g)^t\). In Mr. Buffett's case, both the alpha (α) and the average market return (r) would be at 10%. And therefore, \(r + \alpha = 0.20\).

Money-Wise, It Makes Quite A Difference

The $10,000,000 Quantopian initial capital scenario would grow to $2,373,763,138 without the added g. Doubling would generate $4,747,526,276 while tripling the outcome would produce $7,121,289,414. Almost $5 billion more from the added and self-engineered g = 4.48% (refer to A Long-Term Perspective II for the r = 10% base return scenario).

This is on the assumption of having a long-term view of the portfolio management problem. First, you must have the objective of lasting that long, meaning that your trading strategy will not blow up before reaching its destination. And second, you need to achieve this 20% \(( r + \alpha )\) long-term average CAGR over the period before adding g.

Let's start by saying that you might not be Mr. Buffett. On the other hand, adding some g to Mr. Buffett's alpha scenario would definitely produce more.

This growth factor g is just another way of expressing some added alpha. Here, this alpha is being engineered from within based on the structure of the trading strategy and its trading behavior. We are trying to extract manufactured alpha from the trading mechanics, the very process by which we are building our long-term portfolio.

This is to show how little is required to jack up performance. It might not even require being better at predicting what is coming next. You could set up functions designed to control what you want to see, and how your trading strategy should behave depending on what is thrown at it.

The CVXOPT Optimizer

The above trading strategy relies on the CVXOPT optimizer to make its trades. It is a “black box” you feed with your portfolio weights. After CVXOPT's optimization routine, it will execute its trades and rebalance as closely as possible to your provided weights.

Should the optimizer see nothing that is actionable, it will do nothing. In fact, should all price series provided be random, it will answer with a zero-return portfolio (this has been demonstrated in detail in other notes and in my previous book).

We would have to conclude that price series are not totally random (they do have trends), but nonetheless, remain random-like to a great extent. The optimizer will detect and rebalance its stock weights on any trend of some duration within a price series no matter its origin.
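For those who have never used it, here is a minimal, generic sketch of handing a portfolio problem to CVXOPT (a plain long-only mean-variance rebalance on made-up data; it is not the strategy's actual setup, only an illustration of the black-box interface):

```python
import numpy as np
from cvxopt import matrix, solvers

# Feed the "black box" a covariance matrix and expected returns; it hands back weights.
# Everything below (returns, risk aversion) is illustrative data.
np.random.seed(0)
n = 4
rets = np.random.normal(0.0004, 0.01, (250, n))    # hypothetical daily returns, 4 stocks
Sigma = np.cov(rets.T)
mu = rets.mean(axis=0)

risk_aversion = 1.0
P = matrix(Sigma)                                  # quadratic (risk) term
q = matrix(-risk_aversion * mu)                    # linear (return) term
G = matrix(-np.eye(n)); h = matrix(np.zeros(n))    # long-only: w >= 0
A = matrix(np.ones((1, n))); b = matrix(1.0)       # fully invested: sum(w) = 1

solvers.options['show_progress'] = False
sol = solvers.qp(P, q, G, h, A, b)
weights = np.array(sol['x']).ravel()
print(weights.round(3))      # whatever it answers, you then rebalance to these weights
```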

So, Where Do You Get This Extra g?

Evidently, from the trading strategy itself.

In this case, I opted to literally force-feed the optimizer with my special diet of mathematical functions (refer to Reengineering For More III for a description).

This is not optimizing the optimizer, or trying to make better predictions, or trying to find better factors or alphas. It does not fit the over-fitting or curve-fitting conundrum either. It is force-feeding the optimizer with what you want to see, using mathematical functions based on administrative procedures that can be independent of technical indicators, fundamentals, pattern recognition, or alpha-generation factor analysis.

You simply tell the optimizer: follow this mathematically fabricated trend the best you can! That is: follow this new payoff matrix: \(\sum (H \cdot (1 + g(t))^t \cdot \Delta P) \), where g(t) is this intricate function applied over the life of the portfolio (again see Reengineering For More III for its description).

Having g(t) positive is sufficient to elevate the outcome of the whole payoff matrix from start to finish. And this outcome will depend on how strong you want g(t) to be and how much capital is at your disposal over the life of your portfolio. The above expression will also force an exponential equity line, as illustrated in the 15-year equity line snapshot in Reengineering For More.
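Reduced to its simplest schematic form, the force-feeding amounts to scaling the targets handed to the optimizer by the fabricated trend (the constant g, the capital, and the equal weights below are hypothetical stand-ins for the multi-parameter g(t) described earlier):

```python
import numpy as np

def force_fed_targets(base_weights, t, capital0=10_000_000, g=0.03):
    """Turn raw weights into dollar targets that follow a fabricated exponential
    trend (1 + g)^t instead of a flat capital base.  g and capital0 are illustrative."""
    w = np.asarray(base_weights, dtype=float)
    target_gross = capital0 * (1 + g) ** t     # the trend the optimizer is told to follow
    return w * target_gross                    # per-stock dollar targets fed to the optimizer

raw = np.full(25, 1 / 25)                      # equal weights on 25 stocks
print(force_fed_targets(raw, t=0)[:3])         # ~$400k per stock at the start
print(force_fed_targets(raw, t=14)[:3])        # noticeably larger targets 14 years in
```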

Some think that the above trading strategy is total BS. Well, it is not. It is, however, an innovative trading methodology with a long-term perspective. It is simply different from what you usually see.

Force-Feeding The Optimizer

This trading strategy is just the outcome of force-feeding equations to the mathematical contraption called the CVXOPT optimizer. The equations, being something you designed, can evidently be put under your control. This is what makes this trading strategy so remarkable. It uses the same tools as everybody else designing trading systems on Quantopian, yet it displays a much higher payoff matrix.

At the very least, it is a demonstration that it can be done.

BTW, what was shown is just the preliminary phase, the exploration phase of the possibilities and potential of having this long-term vision of the portfolio management problem. It leads to designing even better systems made to respond to other considerations if need be (read by adding more equations).

I see this as just the beginning of a different and innovative approach to building a long-term portfolio.

Related articles:

A Long-Term Perspective

A Long-Term Perspective II

The protective measures used in the above trading strategy were aimed at reducing market involvement during periods of market turmoil. This resulted in either being partially short the market or simply out of it during significant market downturns.

The method used is a simple variation on a theme. The same as the execution of a trailing stop loss at the portfolio level. And when looking at it closely, the intervals affected are about the same as other stop loss methods one might want to use, or at least, should use for that type of strategy. The triggering boundaries will be different, as if having fuzzy timing, but the overall objective would be about the same: trying to avoid major drawdowns which, btw, occur over about the same time intervals for everyone.

The purpose is not only to avoid major drawdowns. It is to have code in place for the next time something similar will happen. We can learn from the dynamics of a crisis, like the 2008-2009 financial crisis, to make our programs more resilient and be able to protect us should it happen again.

If there were no such protective code, the program would not auto-generate it when needed, nor would the market be so kind as to spare us for not doing our job.

In my own program, I got about the same switching periods as in the following @Vladimir notebook: https://www.quantopian.com/posts/quantopian-based-paper-on-momentum-with-volatility-timing#5d50fff1ff1ca805a8e84a8f. Avoiding the 2008-2009 financial crisis was the main protective measure while the others, even though minor, were still needed.

In @Vladimir's notebook, we can see a progression as he changes some aspects of the trading methods, each designed to generate a larger payoff matrix. As if to say: if you want more, then you have to do more.

For instance, at one point, instead of doing nothing during periods of market turmoil, he switched to buying bonds which turns out to be a better choice than doing nothing. Also, as the switching method became more refined, his strategy responded sooner to market changes which in turn improved his overall performance while reducing volatility.
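The switching idea itself is only a few lines of logic. A generic sketch (the moving-average rule, the lookback, and the all-or-nothing allocation are placeholders, not @Vladimir's actual timing method):

```python
import numpy as np

def allocation(index_prices, lookback=200):
    """Crude regime switch: stay in stocks while the index sits above its moving
    average, otherwise park the capital in bonds.  Rule and lookback are illustrative."""
    ma = np.mean(index_prices[-lookback:])
    if index_prices[-1] > ma:
        return {"stocks": 1.0, "bonds": 0.0}
    return {"stocks": 0.0, "bonds": 1.0}

np.random.seed(3)
spy_like = 100 * np.cumprod(1 + np.random.normal(0.0003, 0.01, 2000))  # simulated index
print(allocation(spy_like))
```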

Developing an automated stock trading strategy is an iterative process. You do not program it all in one shot. You do it one step at a time, feature by feature. And at each step, you expect to have bugs which will be fixed before you proceed to the next feature.

Those methods of thinking are available to all. Looking at the overall problem, you should get to about the same conclusions as others. You can always improve your trading strategy by changing its trading methods for the better. But, evidently, it is up to you to make those modifications or not. It is always a matter of choice, and you are at the center of it all.

I do things differently than most, and I should expect to get different answers. If you have to innovate or reengineer your trading strategy to reach higher levels, then, simply do it.

The Origin Of Stock Profits

When designing automated stock trading strategies, the aim is mainly to outperform other available methods of portfolio management, including other automated strategies. You can aim to outperform over the short term, where you will find a lot of what should be considered market noise (unpredictability, volatility, randomness, or whatever you want to call it), or go for the longer term, where the prevailing long-term market trend will be more visible.

Most of those trading strategies try to find corroborating evidence from past market data to extract some predictability from what could often simply be parts of this underlying long-term trend, the market's secular trend as can be seen in long-term market averages (for example say the SPY over a 30-year time span).

You can break down the underlying price movement into components or factors or profit-sources or whatever you want to call them. But what you could find might be just some fractions of this underlying long-term market trend.

For instance, using principal component analysis, one could get the following chart where each factor contributes to the overall performance. However, when you add all the factors together, they might explain most of the variance, but they do not exceed the old standby: the market average benchmark.

Therefore, what have you detected using those factors, if not just pieces of the underlying market trend? But this market trend was available with no trading at all, just by buying SPY. So, why go to all the trouble of trading so much to get less than doing nothing at all, except for initially buying SPY and then sitting on your hands?

One should not consider those factors as alpha-sources, but merely as profit-sources or components of the underlying secular trend, where factor 1 explains the larger part of price variance, followed by factor 2, and so on. It is like looking at the Fama-French factors, which lead to the efficient market portfolio residing on the efficient frontier, which turns out to be the actual market average, which again could be expressed by some benchmark like SPY.

To exceed market average performance, you need some real alpha which should be measured as the excess over and above the market average. The following chart expresses this point:

The above chart shows alpha factors by order of desirability (the higher the better, evidently). For example, alpha 3 represents a 20% CAGR over the 30-year period. It also corresponds to Mr. Buffett's average long-term CAGR. Alpha 2 is 2% higher (22%) and alpha 1 is 4% higher (24%). Alpha 1 and alpha 2 show how much of a difference those added 2% can make in the overall scheme of things. They are more desirable, but also much harder to obtain.

To get there, you will have to do more than just sit on your hands. You will have to be actively trading, or have better predictive tools than your peers. Because this is a compounding return game, giving more time or getting a higher alpha factor above alpha 1 will push overall performance even higher.

But time also gets to be a major player in this game. Keep everything the same and just add 10 more years to the above scenarios. The first graph (the factor chart above) would now look like the following where the time horizon has been increased to 40 years:

The picture did not get any better, in fact, the sum of the factors still did not reach the expected average market return. A return that could be achieved by doing absolutely nothing other than sitting tight and waiting. The spread between the average market return (SPY) and the best of the factors (factor 1) is increasing. The same with all the other factors showing that even with positive results their sum is much less than the average benchmark. In fact, the total for all 7 factors is only 56.5% of what SPY could have paid off while doing no work at all.

So, when you see some factor analysis of some kind, what is it that they are grabbing? Is it part of this secular trend which is like given away for free, or is it just the remnants of this secular trend? Are they not nonetheless underperforming the available averages?

In a Quantopian simulation, this type of chart is often seen where all factors are combined into a single one resulting in the following look:

There is nothing wrong with such a chart as long as it is what you were looking for subject to whatever constraints you wanted your portfolio to adhere to. However, it should be noted that those constraints come at a cost. And it is the difference between the market averages and the sum of the factors as illustrated in the above chart. The final result remains, it is a trading strategy that is underperforming its benchmark.

However, what I think is more “important” is adding time to the alpha scenario. It is where we see the power of compounding. For instance, alpha 1 is 120.5 times larger than the market average (SPY). And yet, it is only at a 24% compounding rate of return. 4 alpha points above Mr. Buffett's long-term CAGR. Imagine if your alpha 1 was at an even higher setting.

As a basic tenet of any automated stock trading strategy, the main objective should always be to outperform the market averages, no matter what or how you want to do it. Otherwise, you need pretty compelling reasons to adopt the factor-analysis counterpart, where even if your objective is having low volatility, you might find that the opportunity cost of that particular objective can be quite high. Simply compare the alpha 1 in the last chart to the sum of all factors in the 40-year factor chart. One can execute either chart.

I think that all the effort in designing automated trading strategies should be concentrated in extracting the highest alpha you possibly can over the longest time interval. And this can be done by looking at the problem from an alpha perspective and not necessarily from a profit-factor perspective.

It is always a matter of choices. We remain the designers of our own automated trading strategies.

Hi @Guy,

The second to last diagram above looks like one of mine (from the QUALITY factor composite I believe). If it is, I think it's important to highlight that this is before any leverage applied, and that you're only looking at one side of the coin, namely (absolute) returns. If we can agree that volatility is a reasonable measure of risk, the chart directly above it in the tear sheet would be a better apples-to-apples comparison. In this case, per unit of risk, my strategy/factor-composite is not doing too poorly, right? :)

@Joakim, the chart is indeed from your notebook. It was taken as a general example of what other similar charts look like when compared to a benchmark like SPY.

However, I do understand your point of view. Low volatility can be of some use in some positive trading strategies especially in an environment of low or negative interest rates. Such strategies have their importance. But, I want more...

We can build portfolios to do whatever we want with the tools we have. Evidently, there is a limit to it, but nonetheless, the prime objective should not be to have the lowest possible volatility. That should come after a higher priority.

But even under those low-volatility conditions, whatever trading methods are used, the objective should still be to outperform the averages, here presented as the SPY benchmark.

In my own strategy design, I use over-compensation extensively as described in another post in order to remedy CAGR degradation over time which is mainly due to the Law of diminishing returns. I also use other return enhancer techniques to increase overall performance. In doing so, I do accept more volatility, higher drawdowns, lower turnover, and some leveraging. However, in seeking this volatility, I can extract higher long-term profits and force an exponential equity curve due to the mechanics of the trading system.

The following chart is the same chart as yours from the first simulation in this thread.

It looks pretty much like yours except for the scale. But, as I said, my program is seeking volatility and not avoiding it. It is why it has protective measures in order to alleviate the impact of drawdowns. Note that for me, it is still a work in progress. I think I could do even better just as I think that someone else could do even better than I can.

The cumulative returns on a logarithmic scale gave the following:

Again, one should look at the scale. The spread between the backtest and SPY is a measure of the alpha generation, the return over and above the benchmark. In all, both charts are quite impressive, especially having lasted over 14 years producing rather stable equity lines considering the trading environment. Note that the highest drawdown was during the financial crisis, but on the chart, it is barely visible.

As you can observe, I am not designing for the contest. The only constraint my program would adhere to is the one requiring positive profits. But, as I often say: we all have to make choices.

Note: "The following post has been moved here to make it easier to follow the next one, since it will reference some of the math expressed here."

As I understand it, Quantopian's objectives can all be expressed in broad lines of thought. Their prime objective is to maximize their multi-strategy portfolio(s). I view it as a short-term operation, like trying to predict some alpha over the short term (a few weeks), where long-term visibility is greatly reduced, adopting a "we will see what turns out" attitude with a high probability of some long-term uncertainty.

I would prefer that this portfolio optimization problem be viewed as a long-term endeavor where their portfolio(s) will have to contend with the Law of diminishing returns (alpha decay), whether the portfolios compensate for it or not. It is a matter of finding whatever trading techniques are needed, or could be found, to sustain the exponential growth of their growing portfolio of strategies.

A trading portfolio can be expressed by its outcome: \( \textrm{Profits} = \displaystyle{\int_{0}^{T}H(t) \cdot dP}\)

The integral of this payoff matrix gives the total profit generated over the trading interval (up to terminal time T) whatever its trading methods and whatever its size or depth. Saying that it will give the proper answer no matter the number of stocks considered and over whatever trading interval no matter how long it might be (read over years and years even if the trading itself might be done daily, weekly, minutely, or whatever).

The strategy \(H_{mine}\) becomes the major concern since \(\Delta P\) is not something you can control; it is just part of the historical record. However, \(H_{mine}\) will fix the prices at which the trades are recorded, all those trading prices becoming part of the recorded price matrix \(P\).

You can identify any strategy as \(H_{k}\) for \(k \in \{1, \dots, K\} \). And if you want to treat multiple strategies at the same time, you can use the first equation as a 3-dimensional array where \(H_{k}\) is the first axis. Knowing the state of this 3-dimensional payoff matrix is easy: any entry is time-stamped and identified by \(h_{k,d,j}\), thereby giving the quantity held in each traded stock \(j\) within each strategy \(k\) on day \(d\).

How much a strategy \(H_{k}\) contributed to the overall portfolio is also easy to answer:

\(\quad \quad \displaystyle{w_k = \frac{\int_{0}^{T} H_{k} \cdot dP}{ \int_{0}^{T}H(t) \cdot dP}}\)

And evidently, since \(H(t)\) is a time function that can be evaluated at any time over its past history, the weight \(w_{k}\) of strategy \(k\) will also vary with time.

Nothing in there says that \(w_{k}\) will be positive. Note that within Quantopian's contest procedures, a non-performing strategy (\(w_{k} < 0 \)) is simply thrown out.

Understandably, each strategy \(H_{k}\) can be unique or some variation on whatever theme. You can force your trading strategy to be whatever you want within the limits of the possible, evidently. But, nonetheless, whatever you want your trading strategy to do, you can make it do it. And that is where your strategy design skills need to shine.

Quantopian can re-order the strategy weights \(w_{k}\) by re-weighting them on whatever criteria they like, just as in the contest with their scoring mechanism, and declare these new weights as some alpha-generation "factor": \(\sum_{k} a_k \cdot w_{k}\). And this will hold within their positive-strategies contest rules: \( \forall \, w_k > 0\).

Again, under the restriction of \(\, w_k > 0\), they could add leveraging scalers based on other criteria and still have an operational multi-strategy portfolio: \(\sum_{k} l_k \cdot a_k \cdot w_{k}\). The leveraging might have more impact if ordered by their expected weighting and leveraging mechanism: \(\; \mathsf{E} \left [ l_k \cdot a_k \cdot w_{k} \right ] \succ l_{k-1} \cdot a_{k-1} \cdot w_{k-1} \). But this might require that their own weighting factors \(\, a_k \) offer some predictability. However, I am not the one making that choice, having no data on their weighting mechanism.
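In array form, this layering is just elementwise products over the per-strategy contributions. A small sketch with made-up numbers (the profits, the scores \(a_k\), and the leverage scalers \(l_k\) are all hypothetical):

```python
import numpy as np

# Per-strategy profits over the interval: the numerators of w_k.
strategy_profits = np.array([1.2e6, -0.3e6, 0.8e6, 2.1e6])   # hypothetical
w = strategy_profits / strategy_profits.sum()                 # w_k, contribution weights

keep = w > 0                        # non-performing strategies (w_k < 0) are thrown out
a = np.array([0.5, 0.0, 0.3, 0.2])  # hypothetical scoring re-weights a_k
l = np.array([2.0, 0.0, 1.5, 3.0])  # hypothetical leverage scalers l_k

combined = np.where(keep, l * a * w, 0.0)   # Σ l_k · a_k · w_k over the retained strategies
print(w.round(3), round(combined.sum(), 3))
```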

Naturally, any strategy \(H_{k}\) can use as many internal factors as it wants or needs. It does not change the overall objective which is having \(\, w_k > 0\) to be considered not only in the contest but to have it high enough in the rankings to be considered for an allocation.

Evidently, Quantopian can add any criteria it wants to its list, including operational restrictions like market-neutrality or whatever. These become added conditions that strategy \(H_{k}\) needs to comply with; otherwise, again, it might not be considered for an allocation.

The allocation is the real prize; the contest reward tokens should be viewed as such, a small "while waiting" reward for the best 10 strategies in the rankings: \( H_{k=1, \dots, 10}\) out of the \( H_{k=1, \dots, \approx 300}\) participating.

From what preceded, all the attention should be put on strategy \(H_{mine}\) or \(H_{yours}\) depending. I will use its generic format \(H_k\) for whatever it might be in the gazillions of choices. The task is to design a trading strategy that will exceed average long-term market expectation and then some as an added reward for all the work done and the skills brought to the game.

What should be the nature of this trading strategy since it can be anything we want? The ultimate goal or objective should be to have this strategy \(H_k\) outperform its benchmark by a wide margin: \(\int_0^T H_k \cdot dP \gg \int_0^T H_{spy} \cdot dP \). It will not be instantaneous, evidently. Building a portfolio is really a long-term endeavor. One thing you do not want to see is "crash and burn".

A Black Box

A major constraining strategy design element might be the requirement of using an optimizer to do the trading. Whether it be Quantopian's Optimizer API or the CVXOPT optimizer, both present the same kind of environment: the trading activity is delegated to a "black box".

The more constraints we will put on this black box, the more it will fail to produce high returns. As if saying that because we are using an optimizer, our trading strategy might produce lower returns over the long haul as if by default, or more appropriately, viewing it as: it is all it could do. A trading strategy has a structure, even if it is fuzzy and chaotic. Its behavior can be averaged out when you have a high number of trades.

What Can an Optimizer See?

Either of the cited optimizers will only detect what it can see. Should there be no trends (short or mid-term), the optimizer will answer with a flat zero. And that is not a way to generate profits, no matter what anyone might say. Also, the optimizers will not see beyond the lookback window under consideration. That mathematical contraption cannot extract blood from a stone either, just as I can't.

Nonetheless, these two optimizers, especially if trading targets are used, can be shoved or pushed around. By feeding them a special weighing diet of your concoction, you can force them to behave differently. In a way, forcing down their throats your own objectives, your own agenda, even in an uncertain stochastic and chaotic trading environment.

Forcing Your Optimizer

For instance, one of my high-flying strategies highlighted in my latest book used the CVXOPT optimizer for its trading. The strategy demonstrated returns way above market averages, not only over a couple of years, but over a 14-year period. The outcome was not a random occurrence or some luck factor. It was simply feeding the optimizer with prepackaged weights designed by yours truly. I used the word "simply" because it was exactly that. An overview of this can be seen in this thread.

There were no real factors per se in that trading strategy, but price was at the center of it all. What I wanted my equations to do was follow the given directive: \(\int_0^T H_k \cdot (1+g)^{t} \cdot dP\). This implied that the bet size would grow exponentially, and as a "side effect", it would compensate for the alpha decay due to the Law of diminishing returns. In fact, I over-compensated for the alpha decay by doing more than illustrated in the graph in the following post: https://www.quantopian.com/posts/quality-factors-composite-feedback-requested-please#5d67dad066ea457e47eb5342

What was "required" was finding some "excuse" that would trigger more trades more often and increase the average net profit per trade as the portfolio grew in size. Both these tasks were relatively easy. It was all part of the inner workings of the above equation.

The NET Average Profit Thing

Your trading strategy \(H_k\), whatever it is, will have an average net profit per trade at termination time. This is illustrated in the following chart (from one of my books) as a normal distribution (blue line) with its average return \(\mu\). In reality, it is not a normal distribution (it has fat tails, high kurtosis, and skewness) but for illustrative purposes, it is close enough.

The task is to move the whole distribution to the right as a block by changing its center of mass, and at the same time give it a higher density. This means having the trading strategy do more trades with a higher average net profit per trade. Simply doing that will compensate for return degradation. The farther right you move the distribution the better, evidently, within all the portfolio constraints. BTW, by my estimates, the Law of diminishing returns will slowly catch up and kick back in after some 20 to 25 years. It gives me ample time to figure out ways to give the strategy another boost upward.
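In its simplest form, the point rests on the identity: total profit ≈ number of trades × average net profit per trade. Push both up and the distribution's center of mass moves right (the numbers below are purely illustrative):

```python
# Moving the trade-PnL distribution to the right while increasing its density.
n_trades, avg_net_profit = 10_000, 120.0      # hypothetical baseline
boosted_trades, boosted_avg = 14_000, 150.0   # trade more, for a higher average net profit

print(n_trades * avg_net_profit)              # 1,200,000  baseline total profit
print(boosted_trades * boosted_avg)           # 2,100,000  after the shift
```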

It is not that you are predicting where the market is going (except maybe in a general sense); it is predicting what your behavior in response to market changes will be. And putting your strategy on steroids: \(\;(1+g_t)^t \cdot \int_0^T H_k \cdot dP\;\) will do just that. Expressed in this fashion, it makes explicit that you are the one providing the upward thrust (the steroids) to your existing trading strategy by making it trade more for a higher average profit. Slowly at first, but increasing the pressure over time. If you want to outperform your peers, then you will definitely have to do more than they do. It is not by doing the same thing, or some variant thereof, that you will do better. Note that all this might require that the trading problem be looked at with a different mindset.

Trading on an Excuse

In Reengineering For More, the strategy used the AverageDollarVolume over the past 4 months as a factor. The rationale appears simple enough: if a stock trades a lot with a high AverageDollarVolume, then it is liquid and most probably part of the higher-capitalization stocks. However, the AverageDollarVolume has very little predictive power. First, it was at least 2 months out of date, meaning that trades would be taken based on what was a baseline average 2 months prior. Under these circumstances, the AverageDollarVolume should be considered as almost a random number. It comes down to this: how much can the AverageDollarVolume of 2 months ago tell you about what the price of a stock will be tomorrow, next week, or next month for that matter?
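For reference, this is roughly how such a 4-month liquidity factor is set up with Quantopian's Pipeline API. The 84-day window (about 4 months of trading days) and the top-100 screen are assumptions for illustration, not the original strategy's exact settings.

```python
# Sketch of a 4-month AverageDollarVolume liquidity screen using Quantopian's
# Pipeline API. The window_length of 84 trading days (~4 months) and the
# top-100 cut are illustrative assumptions.
from quantopian.pipeline import Pipeline
from quantopian.pipeline.factors import AverageDollarVolume

def make_pipeline():
    # A trailing liquidity proxy, not a return predictor.
    adv_4m = AverageDollarVolume(window_length=84)
    return Pipeline(
        columns={'adv_4m': adv_4m},
        screen=adv_4m.top(100),   # keep only the 100 most liquid names
    )
```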

The last expression has moved the responsibility of enhancing your trading strategy directly into your hands. I use pressure points to enhance performance, but there are a multitude of other techniques available to do an equivalent or better job.

Should you go that route too? I am not the one that should answer that question. We all have to make choices.

The Automated Stock Selection Process

The stock investment universe is huge. However, for an automated trading strategy, it is even bigger since every day offers this entire universe to choose from. The buy & holder will make one choice on a number of stocks and stick to it. Whereas for the automated trader, each day requires a trading decision for each stock in the portfolio, be it to hold, buy, or sell from this immense opportunity set. There is a need to reduce it to a more manageable size.

Say, you buy all the stocks in \( \mathbf{SPY} \) equally (bet size \( \$ \)20k per trade on a \( \$ \)10M portfolio). That strategy is \( \sum_1^{500}(h_{0j} \cdot \Delta P) \) where \( h_{0j} \) is the initial quantity bought in each stock \( j=1, \dots, 500 \).

Evidently, the outcome should tend towards the same as having bought \( \mathbf{SPY} \) alone: \( \sum_1^{500}( h_{0j} \cdot \Delta P) \to \sum (H_{spy} \cdot \Delta P) \). At least theoretically, that should be the long-term expectation, whether looking backward or forward for that matter.

\(\quad \quad \displaystyle{\mathsf{E} [ \sum_1^{500}(h_{0j} \cdot \Delta P) ] \to \sum (H_{spy} \cdot \Delta P)} \)

Saying that the expected long-term profit should tend to be the same as if you had bought \( \$ \)10M of \( \mathbf{SPY} \) and held on. Nothing more than a simple Buy & Hold scenario.

However, you decide to trade your way over this decade-long interval with the expectation that your kind of expertise will give you an edge and help you generate more than having held on to \(\mathbf{SPY}\). But the conundrum you have to tackle is this long-term notion which says that the longer you play, the more your outcome will tend to market averages.

Your Game Expectations

To obtain more, you change your "own" expectation for the game you intend to play to: \( \mathsf{E} [ \sum (H_k \cdot \Delta P) ] > \sum (H_{spy} \cdot \Delta P) \) by designing your own trading strategy \( H_k \). I would anticipate that you want more than just outperforming the averages, much more, as in: \( \mathsf{E} [ \sum (H_k \cdot \Delta P) ] \gg \sum (H_{spy} \cdot \Delta P) \).

It is not the market that is changing in this quest of yours, it is you by considering your opportunity set of available methods of play. You opt to reconfigure the game to something you think you can do.

You have analyzed the data and determined that if you had done this or that in a simulation, it would have been more profitable than a Buy & Hold scenario.

Fortunately, you also realized soon enough that past data is just that: past data, and future data has no obligation to follow "your" expectations. Nevertheless, you can study the past, observe what worked and what did not, and from there design better systems by building on the shoulders of those you appreciate the most.

Can past data have some hidden gems, anomalies? Yes. You can find a lot of academic papers on that very subject. But those past "gems" might not be available in future data, or might be such rare occurrences that you could not anticipate "when" they will or might happen again. Another way of saying that your trading methods might be fragile, to put it mildly.

A Unique Selection

The content of \( \mathbf{SPY} \) is a unique set of stocks. In itself, a sub-sample of a much larger stock universe. Taking a sub-sample of stocks from \( \mathbf{SPY} \) (say 100 of its stocks) will generate another unique selection.

There are \( C_{100}^{500} = \frac{500!}{100! \cdot 400!} = 2.04 \times 10^{107} \) combinations in taking 100 stocks out of 500. And yet, the majority of sets will tend to some average performance \( \sum (H_n \cdot \Delta P) \to \sum (H_{spy} \cdot \Delta P) \) where \( n \) could be that 1 set from the \( 2.04 \times 10^{107} \) available. Such a set from the \( \mathbf{SPY} \) would have passed other basic selection criteria such as: high market caps, liquidity, trading volume, and more.

No one is going to try testing all set samples based on whatever criteria or whatever method. It would take millions of lifetimes and a lot more than all the computing power on the planet. \( 10^{107} \) is a really, really huge number. The only choice becomes taking a sub-sub-sub-sample of what is available. So small, in fact, that whatever method is used in the stock selection process, you could not even express the notion of strict representativeness.

To be representative of the whole would require that we have some statistical measure of some kind on the \( 10^{107} \) possible choices. We cannot express a mean \( \mu \) or some standard deviation \( \sigma \) without having surveyed a statistically significant fraction of the data.

The problem gets worse if you consider 200 stocks out of the 500 in the \( \mathbf{SPY} \). There, the number of combinations would be: \( C_{200}^{500} = \frac{500!}{200! \cdot 300!} = 5.05 \times 10^{144} \)! This is not a number that is 35\( \% \) larger than the first one; it is about \( 10^{37} \) times larger.
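Those combination counts are easy to verify with a quick check in plain Python (outside Quantopian, since math.comb needs Python 3.8+):

```python
# Quick verification of the combination counts quoted above.
from math import comb

c100 = comb(500, 100)
c200 = comb(500, 200)
print(len(str(c100)))          # 108 digits -> ~2.04e107
print(len(str(c200)))          # 145 digits -> ~5.05e144
print(len(str(c200 // c100)))  # ~38 digits -> the ratio is ~1e37
```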

We are, therefore, forced to accept a very low number of stock selection sets in our simulations. Every time we make a 100-stock or 200-stock selection we should realize that that selection is just 1 in \( 2.04 \times 10^{107} \) or 1 in \( 5.05 \times 10^{144} \) respectively. But, that is not the whole story.

Portfolio Rebalancing

If you are rebalancing your 100 stocks every day, you have a new set of choices which will again result in 1 set out of \( 2.04 \times 10^{107} \). This is to say that your stock selection can change from day to day for some reason or other, and that each such selection is also very, very rare. So rare, in fact, that it should not even be considered a sample, not even a sub-sub-sub-sample. The number of combinations is simply too large for any one selection to be representative of the whole, even if in all probability it might be, since the majority of those selections will tend to the average outcome anyway.
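As a side note, that tendency toward the average outcome is easy to see on synthetic data: draw many random 100-stock subsets from a 500-stock universe and their average returns cluster tightly around the universe average (the returns below are made up, not market data).

```python
# Synthetic illustration: most random 100-stock subsets of a 500-stock
# universe land close to the universe's average return.
import numpy as np

rng = np.random.default_rng(1)
universe = rng.normal(0.08, 0.25, 500)     # one hypothetical year of returns

subset_avgs = np.array([
    rng.choice(universe, size=100, replace=False).mean()
    for _ in range(10_000)
])

print(f"universe average      : {universe.mean():.4f}")
print(f"subset averages, mean : {subset_avgs.mean():.4f}")
print(f"subset averages, std  : {subset_avgs.std():.4f}")   # tight around the average
```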

As a consequence, people simplify the problem. For instance, they sort by market capitalization and take the top 100 stocks. This makes it a unique selection too, not a sample, but 1 in \( 2.04 \times 10^{107} \). Not only that, but it will always be the same for anyone else using the same selection criterion. As such, this "sample" could not be considered as representative of the whole either, but just as a single instance, a one of a kind. It is the selection criterion used that totally determined this unique selection. It is evidently upward biased by design and will also be unique going forward.

Making such a stock selection ignores \( 2.04 \times 10^{107} - 1 \) other possible choices! Moreover, if many participants adopt the same market-capitalization sort, they too are ignoring the majority of other possible selection methods, which makes them deal with the very same set of stocks over and over again, whatever modifications they make to their trading procedures.

The notion of market diversity might not really be part of that equation. It is the trading procedures and the number of stocks used that will differentiate those strategies. But, ultimately, it leads to some curve-fitting of the data in order to outperform. And that is not the best way to go.

Reducing Volatility

If you want to reduce volatility, one of the easiest ways is to simply increase the number of stocks in the portfolio. Instead of dealing with only 100 stocks, you go for 200! Then any one stock might start by representing only 0.5\( \% \) of the total, thereby minimizing the impact of any one stock going bad. The converse also applies: those performing better will have their impact reduced too. Diversifying more by increasing the number of stocks will increase the number of possible choices to \( 5.05 \times 10^{144} \). Yet, by going the sorted market-capitalization route, you are again left with one and only one set of stocks for each trading day.

If there are \( 2.04 \times 10^{107} \) possible 100-stock portfolios to choose from, then whatever selection method is used might be less than representative. We are not making a selection based on the knowledge of the \( 2.04 \times 10^{107} - 1 \) other choices; we are just making one that has some economic rationale behind it. The largest-capitalization stocks have some advantage over the others for the simple reason that they have been around for some time and were, in fact, able to get there, meaning to reach their high-capitalization status.

Over the past 10 years, should you have taken the highest capitalization stocks by ranking, you would have found that most of the time the same stocks were jockeying for position near the top. Again, selecting by market capitalization led to the same choice for anyone using that stock selection method. Since ranking by market cap is widespread amongst portfolio managers, we should expect to see variations based on the same general theme.

Selection Consistency

Here is a notion I have not seen often, or that I consider neglected, in automated trading strategies. Automation is forcing us to consider everything as numbers: how many of this or that, what level is this or that, what are the averages of this or that; always numbers and more numbers.

If you want to express some kind of sentiment or opinion, it has to be translated into some numbers. Your program does not answer with: I think, ..., you should take that course of action. It simply computes the data it is given and takes action accordingly based on what it was programmed to do. Nothing more, but also nothing less. It is a machine, a program. You are the strategy designer, and your program will do what you tell it to do.

All this to lead to the notion that your stock selection process should be consistent with your trading methods.

For instance, if you design a trend-following system, then you should select trending stocks and not mean-reverting ones, which would tend to be counterproductive to your set objectives. Trend-following bets on the continuation of the price move, whereas mean-reversion bets on the opposite. Therefore, your stock selection method should aim to capture stocks that have demonstrated this trending behavior over their past. Otherwise, you do have a serious stock selection problem.

And if you cannot distinguish trending stocks from mean-reverting ones, then you are in even more trouble. You would be playing a game where you are making bets without following the very nature of your strategy design, all because your stock selection process was not consistent with your trading procedures. If your trading strategy cannot identify mean-reverting stocks, then why play a mean-reversion gig?
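One crude way to check that consistency, offered here only as an illustration and not as the method used in any of the strategies above, is to screen on the lag-1 autocorrelation of daily returns: persistently positive values lean toward trending behavior, negative values toward mean reversion.

```python
# Crude consistency screen (illustration only): lag-1 autocorrelation of daily
# returns. `prices` is assumed to be a pandas DataFrame of daily closes, one
# column per stock.
import pandas as pd

def lag1_autocorr(prices: pd.DataFrame) -> pd.Series:
    rets = prices.pct_change().dropna()
    return rets.apply(lambda col: col.autocorr(lag=1))

def split_by_behavior(prices: pd.DataFrame, threshold: float = 0.05):
    ac = lag1_autocorr(prices)
    trending = ac[ac > threshold].index.tolist()         # candidates for trend-following
    mean_reverting = ac[ac < -threshold].index.tolist()  # candidates for mean reversion
    return trending, mean_reverting
```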

Stock Portfolio Strategy Design

Every day you are told which stocks performed best and which did not. Your stock trading program can monitor all those stocks and sort them on whatever criteria you like. One would imagine that your stock ranking system would make you concentrate on the very best performers. However, looking at long-term portfolio results should raise more than some doubts on this matter, since most professional stock portfolio managers do not even outperform market averages.

Comparing stock trading strategies should be done over comparables, meaning over the same duration using the same amount of initial capital. It should answer the question: is strategy \( H_k \) better than \( H_{spy} \) or not?

\(\quad \quad \sum (H_k \cdot \Delta P) >, \dots, > \sum (H_{spy} \cdot \Delta P) >, \dots, > \sum (H_z \cdot \Delta P)\) ?

So, why is it that most long-term active trading strategies fail to beat the averages? Already, we should expect half of those strategies to perform below average almost by definition. But why are there so many more?

\(\quad \quad \displaystyle{ \sum (\overline{H_k} \cdot \Delta P) < \sum (H_{spy} \cdot \Delta P)}\quad \) for \(k=1 \) to some gazillion strategies out there.

It is a legitimate question since you will be faced with the same problem going forward. Will you be able to outperform the averages? Are you looking for this ultimate strategy \( H_{k=5654458935527819358914774892147856} \)? Or will this number require up to some 75 more digits...

We have no way of knowing how many trading strategies or which ones can or will surpass the average benchmark over the long term. But, we do know that over the past, some 75\( \% \), maybe even more, have not exceeded long-term market averages. This leaves some 25\( \% \) or less that have satisfied the condition of outperforming the averages. It is from that lot that we should learn what to do to improve on our own trading strategies.

We can still look at the strategies that failed in order not to follow in their footsteps. Imitating strategies that underperform over the long term is not the best starting point. It can only lead to underperforming the averages even more.

We need to study and learn from the higher class of trading strategies and know why they outperformed. If we cannot understand the rationale behind such trading strategies or if none is given, then how could we ever duplicate their performance or even enhance them?

We have this big blob of price data, the recorded price matrix \( \mathsf{P} \) for all the listed stocks. We can reduce it to a desirable portfolio size by selecting as many columns (stocks) and rows (days) as we like, need, or want. Each price is totally described by its place in the price matrix \( p_{d,j} \). And what you want to do is find common ground in all this data that might show some predictive abilities.

You have stock prices going up or down, they usually do not maintain the same value very long. So, you are faced with a game where at any one time prices are basically moving up or down. And all you have to determine is which way they are going. How hard could that be?

From the long-term outcome of professional stock portfolio managers, it does appear to be more difficult than it seems.

There is Randomness in There

If price predictability is low, that all by itself would easily explain the fact that most professionals do not outperform the averages over the long term. As a direct consequence, there should be a lot of randomness in price movements. And if, or since, that is the case, then most results would tend to some expected mean, which is the long-term market average return.

It is easy to demonstrate the near 50/50 odds of up and down price movements. Simply count the up and down days over an extended period of time on a reasonable sample. The expectation is that you will get, on average, something like 51/49 or 52/48 depending on the chosen sample. The chart below illustrates this clearly.
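Running the count yourself is a small exercise (sketch below; `closes` is assumed to be a pandas Series of daily closing prices for a stock or an index):

```python
# Count up and down days over a price history.
import pandas as pd

def up_down_split(closes: pd.Series):
    daily = closes.pct_change().dropna()
    ups = (daily > 0).sum()
    downs = (daily < 0).sum()
    total = ups + downs               # flat days ignored
    return ups / total, downs / total

# On a broad index, the split typically lands near 0.52/0.48, give or take,
# depending on the sample and period chosen.
```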

With those numbers, we have to accept that there is a lot of randomness in the making of those price series. It takes 100 trades to be ahead by 2 or 4 trades respectively. With 1,000 trades, you should be ahead by 20 or 40 trades. But you will have to execute those 1,000 trades to achieve those results. That is 2\( \% \) or 4\( \% \) of the trades taken that will account for most of your generated profits. This says that the top 50 or so trades out of the 1,000 taken will be responsible for most, if not all, of your profits, and that 950 trades out of those 1,000 could have been throwaways. Certainly, the 48% of trades you lost (480 trades), if they could have been scrapped, would definitely have helped your cause, profit-wise.

The problem you encounter is that you do not know which one is which, and thus the notion of a high degree of randomness. Fortunately, it is only a high degree of randomness and not something totally random, because then only luck could make you win the game.

Here is an interesting AAPL chart snippet (taken from my 2012 Presentation). It makes that presentation something like a 7-year walk-forward with totally out-of-sample data. The hit rate on that one is very high. It is also the kind of chart we do not see on Quantopian. It was done to answer the question: are the trades executed at reasonable places in the price cycles? A simple look at the chart can answer that question.

The chart displays the strategy's trading behavior with its distributed buys (blue arrows) and sells (red arrows) as the price swings up and down. On most swings, some shares are sold near tops and bought near bottoms. The chart is not displayed to showcase some probabilistic technique, but to show some other properties.

First, there was no prediction made in handling the trades, none whatsoever. The program does not know what a top or a bottom is, nor does it even have a notion of mean-reversion. Nonetheless, it trades as if it knew something and does make trading profits.

Second, entries and exits were performed according to the outcome of time-biased random functions. There are no factors here, no fundamental data, and no technical indicators. It operates on price alone. It does, however, have the notion of delayed gratification. An exit could be delayed by some other random function, giving a trade a time-measured probabilistic exit. Meaning that a trade could have exceeded its exit criteria, but its exit could still be ignored until a later date for no other reason than it was not its lucky exit day.

Third, trades were distributed over time in an entry or exit averaging process. The mindset here is to average out the entry or exit price near swing tops or bottoms. The program does not know where the tops or bottoms are, but nonetheless its trade positioning process will give it an average price near those swing tops and bottoms.

Fourth, the whole strategy goes on the premise of accumulating shares over the long term and trading over the process (this is DEVX03, which gradually morphed over the years into its latest iteration, DEVX10). The above chart depicts the trading process but does not show the accumulation process itself, even if it is there. To accumulate shares requires that, as time progresses, the stock inventory increases by some measure as prices rise.

Here, the proceeds of all sales, including all the profits, are reinvested in buying more shares going forward. And this share accumulation, as well as the accumulated trading profits, will be reflected in the strategy's overall long-term CAGR performance. It is all explained in the above-cited 2012 presentation.

Trading Methodology

The trading methodology itself accounts for everything. It is the method of play that determines how to make the strategy more productive. Just looking at the chart, we have to consider that there was a lot of day-to-day randomness in those price swings. Yet, without predicting, without technical or fundamental indicators, the strategy managed to prosper over its 5.8-year simulation (1,500 trading days).

Since that 2012 presentation, AAPL has quadrupled in price, and all along the strategy would have accumulated even more shares and evidently would have profited from its trading operations even more. The AMZN example, for its part, would have seen its price go from 176.27 to over 1,800 today, all the while accumulating more and more shares as they went up in price. The strategy profited from the rise in price with a rising inventory and profited from all the trading activity.

The strategy is based on old methods and does show that it can outperform the averages: \(\sum (H_k \cdot \Delta P) \gg \sum (H_{spy} \cdot \Delta P) \). The major force behind strategy \( H_k \) is time. As simple as that. It waits for its trade profit. It was easy to determine some seven years ago that AAPL and AMZN would prosper going forward. We can say the same thing today for the years to come. What that program will do is continue to accumulate shares for the long term and trade over the process, and thereby continue to outperform the averages.

Time is a critical factor in a trading strategy. Consider the question: if you waited to exit a trade, would you make a profit? To illustrate this, I made the chart below, where a red line would have shown that picking that day, out of the 427 days shown, would have resulted in a losing trade. Conversely, a green line shows that picking that trading day, out of the 427, would have ended with a profit. As can be seen, all 427 trading days could have ended with a profit. Moreover, you could have had multiple attempts at making a profit during the trading interval. Simply picking any one day would have resulted in a profit just for having waited for it. Nothing fancy needed here except giving the trade some time.

In the end, we all have to make choices. Some are easier than others. But one thing is sure, it will all be in your trading strategy \( H_k \) and what you designed it to do.

So do the best you can, since the above does say it can be done.

Stopping Times

This notion is rarely discussed in the Quantopian forums, yet it can have some dramatic effects when carried out in an automated stock trading strategy.

A stopping time in a stochastic process is when a predetermined value or limit is reached for the first time. Say you buy some stock and set a one-sigma exit from your entry price: that will be a random-like stopping time. With a timed-out exit, you would know when it would take place, whereas with a price target, you would not know when, but you would know at what price the target would be reached. Either way, the exit will be executed at the time or price specified and will signal the end of the ongoing trade.

In automated trading, the first stopping time should be considered a little short-sighted, and at times a lot. You are interested in the profit generated by your target move, and getting out of your trade before reaching that target should be viewed as underachieving, since your program would break its stopping time even though you would have profited from that early exit. But that, in general, is less productive than the alternative. The real problem I see is that most often the first stopping time is not the last.

Your trading program should be looking for the last stopping time, or at least trying to get closer to it. You must have noticed that when you set a price target and it is hit, the price, on average, keeps on going up, but without you. This says that your price target, even if it made you money, was not the best price target. It could have been higher, since most of the time your original price target will be exceeded, and all you would have had to do was wait some more.

Exceed the First Stopping Time

What could be done to improve performance is to move your original price target up as the stock price approaches it. It goes on the premise that there will be more than just one higher price above your target. In fact, you might have a series of higher highs following your exit. But if you are not there, there is no way you can profit from them.

In a lot of trading strategies on Quantopian, that is what I see: the equivalent of exiting on the first stopping time reached, and even more often below it. The below-target exit comes as a side effect of the trading methodology used, the same as for the first stopping time.

We can express a price series as follows: \(p(t) = p_0 + \sum_1^T \Delta p \,\) where \(\sum_1^T \Delta p \,\) is the sum of all price variations after its initial price \(p_0\) up to termination time \(T\). The expression could be used for any trade \(n\): \(\;p_n(t) \cdot q_{0,\,n} = q_{0,\,n} \cdot (p_{0,\,n} + \sum_1^t \Delta p_n) \) where \(t\) is somewhere between entry and exit \(0 < t \le T \). And the stopping time is the exit of that n\(^{th}\) trade.

What we are trying to predict is: \(\Delta p = p_{t + \tau} - p_t \) representing a time-limited segment of the price series for a particular trade. Here \(\tau\) represents a random time variable of undetermined length. Depending on how we slice and dice a price series, it is by adding all the pieces that we get our final result. In a timed-out exit, \(\tau\) will have a fixed value (say a fixed number of days), whereas, in a price target scenario, \(\tau\) will be a quasi-random variable of still undetermined length.

We could see trades as having a first hitting time at \(t_0 \) following whatever code triggered its entry. And some time later, a stopping time \(t_0 + \tau \) as we exit the trade.
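A small simulation can illustrate why the first stopping time is usually not the best one. The sketch below (made-up drift and volatility on a plain random walk, not any of the strategies discussed here) compares exiting at the first one-sigma-of-the-window target hit with simply waiting until the end of the window; with a positive drift, the later exit tends to capture more on average.

```python
# Illustration (made-up parameters): first-stopping-time exit vs. waiting.
import numpy as np

rng = np.random.default_rng(2)
n_paths, n_days = 10_000, 250
mu, sigma = 0.0004, 0.012                      # assumed daily drift and volatility

rets = rng.normal(mu, sigma, (n_paths, n_days))
paths = np.cumprod(1.0 + rets, axis=1)         # price relative to entry (p0 = 1)

target = 1.0 + sigma * np.sqrt(n_days)         # rough one-sigma-of-the-window target
hit = paths >= target
first_hit = np.where(hit.any(axis=1), hit.argmax(axis=1), n_days - 1)

exit_at_first_hit = paths[np.arange(n_paths), first_hit]
exit_at_end = paths[:, -1]

print(f"avg exit at first stopping time: {exit_at_first_hit.mean():.4f}")
print(f"avg exit waiting to the end    : {exit_at_end.mean():.4f}")
```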

A Critical Component

This makes it a critical component of any trading system. But, and this is a serious but, we are less than good at determining those hitting times and stopping times, especially if we trade in bulk like a hundred or more stocks at a time.

Answer the question: how many times in your last 1,000 trades have you exited a trade at its highest (long) or lowest (short) price with a profit? If you did this exercise, you might find that only a small number of trades out of that 1,000 would qualify. And since you know that you will be wrong most of the time, why not push those hitting and stopping times further out of reach in order to profit even more from your trading decisions?

Some wonder, apparently not many, how a CAGR as high as presented in the first post could be possible. The answer is relatively simple. It was done by pushing on those self-imposed hitting and stopping time delimiters: moving price targets higher as prices got closer, and at the same time increasing the number of trades while raising the average net profit per trade. Those are also two critical numbers in any trading strategy.

The task is especially difficult if you choose an optimizer like CVXOPT to do the trading, since its main task is to flatten out volatility and seek a compromise for its set of weights, thereby often accepting, involuntarily, below-target exit prices. To counterbalance this, I forced the optimizer to accept my weighting scheme, which allowed share accumulation financed by continuously reinvesting trading profits in order to raise the overall CAGR.
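For those curious about the mechanics, the sketch below shows one generic way of biasing a CVXOPT quadratic program toward a set of target weights by adding a penalty term to the usual mean-variance objective. This is a minimal illustration with made-up inputs (Sigma, mu, w_target, lam, gamma), not the actual weighting scheme used in the strategies discussed here.

```python
# Minimal sketch: pull a CVXOPT mean-variance solution toward your own target
# weights. Objective: 0.5*w'(Sigma + gamma*I)w - (lam*mu + gamma*w_target)'w,
# subject to sum(w) = 1 and w >= 0. All inputs below are made up.
import numpy as np
from cvxopt import matrix, solvers

def biased_weights(Sigma, mu, w_target, lam=1.0, gamma=5.0):
    n = len(mu)
    P = matrix(Sigma + gamma * np.eye(n))
    q = matrix(-(lam * mu + gamma * w_target))
    G = matrix(-np.eye(n))           # w >= 0 (long-only)
    h = matrix(np.zeros(n))
    A = matrix(np.ones((1, n)))      # fully invested: sum(w) == 1
    b = matrix(np.ones(1))
    solvers.options['show_progress'] = False
    sol = solvers.qp(P, q, G, h, A, b)
    return np.array(sol['x']).flatten()

# Toy usage: 4 assets, pulling the solution toward an aggressive target mix.
rng = np.random.default_rng(3)
Sigma = np.cov(rng.normal(0.0, 0.01, (250, 4)), rowvar=False)
mu = np.array([0.08, 0.10, 0.12, 0.06])
w_target = np.array([0.10, 0.20, 0.60, 0.10])   # your own agenda
print(biased_weights(Sigma, mu, w_target).round(3))
```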

The Need to Innovate

As trading strategy designers, it is our job to be innovative and force our strategies to do more, even if it requires strategy reengineering to do so. In the above-cited strategy, I pushed even further by adding delayed gratification. This pushed the equation toward: \(\Delta p = p_{t + \tau +\kappa} - p_t \) where \(\,\kappa\) represents some added time beyond the first stopping time. It means that even though your trade qualified for an exit at the first stopping time, the exit is delayed further with the average expectation that \(\Delta p\) will be larger and thereby generate more profit for that trade. Notice that the shorts in that scenario were also profitable, even though they were mostly used for protection and capital preservation.

This is the same technique I used in my DEVX8 program, which was long-only, to raise the average net profit per trade. Raising the average net profit per trade while increasing the number of round-trip trades will result in a higher payoff matrix, which was my goal from the start. It made \(\kappa\) quite valuable.

Again, we are all faced with choices. It is up to us to design the best trading strategies we can.

For those wishing to learn more about stopping times and hitting times, follow the links.

To go even further, look up Riemann sums, Itô calculus, and the Newton-Cotes formulas.

What you will find is that some of those formulas date way back to the 1850s and '60s, and earlier. Even Bachelier in his 1900 thesis (La Théorie de la Spéculation) used formulas from the 1870s. Since Bachelier's thesis, we have known that price dispersion grows with the square root of time \(\, \propto \sqrt{t}\) (equivalently, that the variance grows in proportion to time itself). Therefore, we should expect that the longer we hold some shares, the higher the variance will be. Over the short term, this might not show up that much, but over the long term, it definitely will.
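The square-root-of-time rule is easy to check on a simulated random walk (1% daily volatility assumed, no drift):

```python
# Dispersion of cumulative returns grows roughly as sigma * sqrt(t).
import numpy as np

rng = np.random.default_rng(4)
daily_sigma = 0.01
rets = rng.normal(0.0, daily_sigma, (20_000, 256))
cum = rets.cumsum(axis=1)                      # cumulative return paths

for t in (16, 64, 256):
    print(f"t={t:3d}  observed std={cum[:, t - 1].std():.4f}  "
          f"sigma*sqrt(t)={daily_sigma * np.sqrt(t):.4f}")
```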

One mathematician I admire a lot is Kiyosi Itô for his work. This does not diminish the remarkable contributions of his predecessors or successors. We all build on their shoulders anyway.

The extension of these time series formulas to applications in finance is not new either. Nor is the payoff matrix that I often use in my posts \(\displaystyle{\sum_1^n (H \cdot \Delta P) } \). It is even embedded in the fundamental theorem of calculus, and that dates way back.

It is in how we use these formulas that we can extend their applications to push the limits of what was considered a barrier for many, many years. Things like the efficient frontier, which in my research appears at most as a line in the sand, where you can simply jump over it since it was a self-imposed theoretical quadratic upper limitation to portfolio management.

It is not by figuring out the partial sum of the parts (factors, weights, or whatever) having an upper limit that you will exceed this limit. If 4 or 5 factors are sufficient to explain 95\(\%\) of a price series, that is all they can do. It is the same for the efficient portfolio on the efficient frontier: if that is the target, you can aim for it, but you will not exceed it.

What you have to do is change the portfolio management mechanics: take what you find useful in all those theories and ready-made formulas and make them do more.

The \( \sigma \)-algebra

If you design trading strategies in a \( \sigma \)-algebra space, \( \mathcal {N}(\mu , \sigma^2) \), meaning using averages and standard deviations for data analysis and trading decisions, then all you will see will be confined within that space. It implies that you will be dealing most often with a normal distribution of something of your own making. This allows setting up stochastic processes with defined properties, things like mean-reversion and quasi-martingales. But it also reduces the value of outliers in the equation: they will have been smoothed out over the chosen look-back period. It will also ignore any pre-look-back-period data, for the simple reason that none of it will be taken into account.

There is a big problem with this point of view when applied to stock prices as these do not quite stay within those boundaries. The data itself is more complicated and quite often moves out of the confines of that self-made box. For example, a Paretian distribution (which would better represent stock prices) will have fat tails (outliers) which can more than distort this \( \sigma \)-algebra.

Stock prices, just as any derivative information from them, are not that neat! So, why would we treat them as if they were? The probability density function of a normal distribution has been known for quite some time:

\(\quad \quad \displaystyle f(x\mid \mu ,\sigma ^{2})={\frac {1}{\sqrt {2\pi \sigma ^{2}}}}e^{-{\frac {(x-\mu )^{2}}{2\sigma ^{2}}}}\)

But does it really describe what we see in stock prices, where \(\mu\) is itself a stochastic process, and \(\sigma\) is another stochastic process scaling a Wiener process \(\sigma\, dW\) of its own, in an environment where skewness, kurtosis, and fat tails are prevalent? It is like wanting to model, along with everything else, some "black swans" which, by their very nature, are rare events, 10 to 20+ \(\sigma\)s away from their mean \(\mu\). Consider something like the "Flash Crash" of May 2010, for instance. There were price moves there that should not have happened in 200 million years, and yet, there they were. Those are things you do not see coming and for which you are not prepared, in the sense that your program might not have been designed to handle such situations. Knowing some \(\mu\) and \(\sigma\) does not give predictability to tomorrow's price.

Some will simply remove the outliers from their datasets, thereby building an unrealistic data environment where the variance (\(\sigma^2\)) will be more subdued and produce smoother backtest equity curves, until the real-world "black swan" comes knocking, and it will. This will screw up all those aligned \(\mu\)s and \(\sigma\)s. On the other hand, leaving all the outliers in will result in averaging up all the non-outliers, giving a false sense of their real values, and again a distorted image of the very thing being analyzed.
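To put some numbers on how much fat tails matter, here is a small comparison (the Student-t with 3 degrees of freedom is an arbitrary stand-in for a fat-tailed distribution, not a fitted model):

```python
# Tail probabilities: normal vs. a fat-tailed Student-t(3), both measured in
# standard-deviation units. The t(3) choice is arbitrary, for illustration.
import numpy as np
from scipy import stats

df = 3
t_scale = np.sqrt(df / (df - 2))     # a t(3) variable has std sqrt(3)

for k in (5, 10, 20):
    p_norm = stats.norm.sf(k)                    # P(move > k sigma), normal
    p_t = stats.t.sf(k * t_scale, df)            # same threshold, fat-tailed
    print(f"{k:2d} sigma: normal ~ {p_norm:.1e}   Student-t(3) ~ {p_t:.1e}")
```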

Randomness

Due to the quasi-random features of stock price movements, the sense of their forward probabilities will also be distorted by the stochastic nature of those same \(\mu\)s and \(\sigma\)s. And since this stochastic process is still looking at a stochastically scaled Wiener process, you enter the realm of uncertain predictability. It is like playing heads or tails with a randomly biased coin. You thought the probability was 0.50 and made your bets accordingly, but it was not; the probability was randomly distorted, allowing longer winning and losing streaks of larger magnitude (fat tails).

You compute some \(\mu\)s and \(\sigma\)s over a selected dataset, say some 200 stocks, and get some numbers. But those numbers only apply to that particular dataset, and over that particular look-back period, not necessarily its future. Therefore, the numbers you obtained might not be that representative going forward. Yet, even knowing this, why would you still use those numbers to make predictions as to what is coming next and have any kind of confidence in those expected probabilities?

The structure of the data itself is made to help you win the game. And if you do not see the inner workings of these huge, seemingly random-like data matrices, how will you be able to design trading systems on fat-tailed quasi-martingale or semi-martingale structures?

There is no single factor that will be universal for all stocks. Period. Continuing to search for one might be just a waste of time. If there ever was one, it was arbitraged away a long time ago, even prior to the computer age. Why is it that studies show results getting worse when you go beyond linear regressions? Or that adding factors beyond 5 or 6 does not seem to improve future results that much? The real question becomes: why keep on doing it if it does not work that well? Are we supposed to limit ourselves to the point of not even beating the expected returns of long-term index funds?

The Game

The game itself should help you beat the game. But you need to know how the data is organized and what you can do with it. To know that, you need to know what the data is, and there is a lot of it. It is not just the price matrix \(P\) that you have to deal with, it is also all the information related to it. And that information matrix \(\mathcal {I}\) is a much larger matrix. It includes all the information you can gather about all the stocks in your \(P\) matrix. So, if you analyze some 30 factors in your score ranking composite, you get an information matrix \(\mathcal {I}\) that is 30 times the size of \(P\).

But that is not all. Having the information is only part of the game. You now need to interpret all that data, decide what is valuable, and make projections on where that data is leading you, even within these randomly changing biases and expectations. All this makes for even larger matrices to analyze.

You are also faced with the problem that all those data interpretations need to be quantifiable by conditionals and equations. Our programs do not have partial or scalable opinions, feelings, or prejudices. They just execute the code they were given, oftentimes to the 12\(^{th}\) decimal digit. This in itself should raise other problems. When you need the 12\(^{th}\) decimal digit to differentiate z-scores, those last digits start to be close to random numbers. The ranking you assign to those stocks then also starts to exhibit rank-positioning randomness, which will affect their portfolio weights and reverberate in your overall results.
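A tiny synthetic example of that rank randomness: when composite scores agree down to many decimals, noise in the last digits is what decides the ordering.

```python
# Synthetic illustration: near-identical composite scores, ranked twice with
# noise at the 12th decimal. The resulting ranks differ almost everywhere.
import numpy as np

rng = np.random.default_rng(5)
base = np.full(200, 0.731528946)              # 200 stocks, near-identical scores
run_a = base + rng.normal(0.0, 1e-12, 200)    # two "computations" of the same
run_b = base + rng.normal(0.0, 1e-12, 200)    # composite, differing in the last digits

rank_a = np.argsort(np.argsort(run_a))
rank_b = np.argsort(np.argsort(run_b))
print(f"ranks that differ between the two runs: {(rank_a != rank_b).sum()} of 200")
```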

Another problem is the periodic rebalancing on those numbers. Say your portfolio has 200 stocks or more, and when it rebalances, some 50 stocks are liquidated for whatever reason and replaced by 50 new ones. All the portfolio weights have changed. It is not only the 50 new stocks that have new weights; all 200 stocks see their weights move up or down, even if there might have been no need or reason to do so. The stock selection method itself is forcing the churning of the account, and there are monetary consequences to be paid for this.

If we ignore the randomness in price series, it does not make it go away! If we ignore outliers, they do not disappear; they will just eat your lunch, whether you like it or not. It is why, I think, there is a need to strategize your trading strategy to make it do more, even in the face of unreliable uncertainty.

If the inner workings of your trading strategies do not address these issues, do you think the issues will go away?

A "Markowitz" Attempt

This is a peculiar trading strategy. The original author probably wanted it to be based on some Markowitz portfolio management principle, but it is not. Nonetheless, over part of its trading interval, it does make as much money doing nothing as it does trading.

I have not found why, over some time intervals, it does not trade at all. For instance, prior to 2007, no trades, and just after October 2017, again no trades. I limited the simulations to the period from 2009/01/02 to 2018/09/21, even if that extends past the October 2017 no-trading point.

Whatever the stock inventory was after October 2017, we could say the strategy went into hibernation or in a Buy & Hold scenario until termination time. From the charts below, it turned out to be quite profitable.

I usually get interested in trading strategies that can withstand 10 years and more. This one with 9 years barely qualifies. However, I was interested because its optimizer was the same as the one used in the first chart in this thread.

I added many changes to the original design, giving it a totally different long-term outlook, applying pressure where I thought it would add value and thereby changing the nature of the betting system implemented. I increased the number of stocks to 28. And since the strategy was scalable, all trading profits were reinvested as the strategy progressed in time. I even used some leverage.

I have not as yet added protective measures. They should come next if I find the time. I consider the strategy as still in its development stage.

Here are the charts, first the cumulative returns:

Some of the metrics:

And summary stats:

The strategy made 2,968 trades and managed to take its \(\$\)10M initial stake up to \(\$\)1.1B, a 63.8\(\%\) CAGR over the 9 years.

I should be able to add protective measures that would increase overall performance as well as reduce drawdowns. It is a pity I do not know why it stops trading in October 2017. However, if it was some added feature, based on the results, I would be ready to keep it, on the condition that it also works with other stock selection methods.

Prior to 2018, the strategy seemed ready to break down. Yet, letting it do whatever it did, which was nothing at all, proved a lot more lucrative.

My research is forcing me to look into scalability more and more, and on a grand scale. I tried \(\$\)100M on the same strategy as in the previous post, but the tearsheet analysis would not complete. I reduced the initial capital to \(\$\)50M and got the following stats.

Doing the simulation revealed some behavioral patterns I do not like. I will have to address these issues should I want to continue trying to improve this strategy. On the other hand, I might find the behavior to be some desirable added feature, but I doubt it.

A trading strategy needs to be scalable upward, meaning it can support more capital. The strategy turned its \(\$\)50M initial capital into almost \(\$\)5B over those 9 years. Not bad, considering. It did scale up to some 90\(\%\) when it should have been closer to 100\(\%\).

Another test on the same strategy as above. I returned to the \(\$\)10M initial capital scenario. I wanted the program to finish and the tearsheet analysis to complete as well.

The strategy uses the CVXOPT optimizer. And those using optimizers to handle the trading activity know that they are not that kind to whatever trading strategy you might have. The optimizer simply does its job, whatever its input.

For the chart above, none of the optimizer sections or routines were modified. Nonetheless, changing some of the pressure points forced it to generate more, about 3 times more than the previous \(\$\)10M scenario.

I requested that the strategy consider and accept more variance, knowing that the direct impact would be an increase in the average net profit per trade.

This simulation says that even if you use an optimizer to control trading activity, pressure points outside its purview can still have quite an impact on overall performance.

The stats chart below shows the higher average net profit per trade:

And some of the portfolio metrics came out at:

Even without the protective measures in place, the underwater chart does not look that horrible. Nonetheless, protective measures should be put in to better control drawdowns. At the very least, they would reduce the size of some of those down spikes.

The cumulative return chart is quite interesting. There is a tremendous rise in the last section of the chart. This happened when the optimizer stopped trading as can be seen in the first chart. This is the same behavior as in a prior simulation, so it does not come as a surprise.

My point here is that even if you have a strategy where the trading activity is controlled by an optimizer, that is not all your trading strategy can do. Even within those constraints, it can do better.

The Age of the Mega Fund

My latest book available on Amazon: The Future Belongs To The Builders Of Mega Funds is on the construction of - you guessed it - mega funds.

We have entered this age of super corporations, those with valuations exceeding 1 trillion dollars. The future will bring many more of these online. In their wake, they will create super conglomerates, super banks, and super investment funds, the size of which has never been seen before.

Nonetheless, this will be part of a natural evolution of things. A side effect of globalization. Large businesses are no longer just regional or national, they have global reach and can have millions and millions of customers. This changes investment opportunities considerably not only in size but also in diversity.

If you want to change the world, then you can start at the top of the food chain by building these mega investment funds.