How to accept/reject alpha factors?

Is there an objective, step-by-step method to accept/reject alpha factors?

With a bit of research and head-scratching, and by scouring the Q forums, tutorials, and help documentation, one can collect a relatively large number of alpha factors (1). Then one is faced with the task of evaluating the factors and sorting them into The Good, the Bad, and the Ugly. The existing tools I'm aware of are linked in the original post.

Some possible accept/reject criteria:

  • p-values below 0.05 with a mean IC of above 0.01
  • Over-fitting tests
  • Q risk factor (sector and style) exposures
  • Alpha time scale (e.g. max IC delay)
  • Q fund exposure (presently, no way to measure)
  • Size of universe over which factor is effective
  • Sensitivity to specific time/day of week trading
  • "Uniqueness" of alpha factor
  • Unequal weighting of factor values

So, the question is how to perform the first-stage good/bad/ugly sort? 'Good' factors would be kept and would move on to be combined with other factors in the alpha combination step, in one or more algos. 'Bad' factors would be rejected and put on the compost heap. 'Ugly' factors would be scrutinized further, just to make sure one isn't throwing the baby out with the bathwater.
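As a concrete illustration of the first criterion above, a minimal first-cut screen can be computed outside of Alphalens with pandas and scipy. The thresholds are just the numbers quoted above, not official cutoffs, and `factor` / `fwd_returns` are assumed to be date-by-stock DataFrames:

```python
import pandas as pd
from scipy import stats

def first_cut(factor, fwd_returns, min_mean_ic=0.01, max_p=0.05):
    """Daily rank IC of a factor vs. forward returns, plus a t-test on its mean.

    factor, fwd_returns: DataFrames indexed by date, one column per stock.
    Returns (accept, mean_ic, p_value).
    """
    daily_ic = []
    for date in factor.index.intersection(fwd_returns.index):
        f = factor.loc[date].dropna()
        r = fwd_returns.loc[date].reindex(f.index).dropna()
        if len(r) > 2:
            ic, _ = stats.spearmanr(f.reindex(r.index), r)  # rank IC for that day
            daily_ic.append(ic)
    daily_ic = pd.Series(daily_ic)
    t_stat, p_value = stats.ttest_1samp(daily_ic, 0.0)      # is the mean IC distinguishable from zero?
    mean_ic = daily_ic.mean()
    return (p_value < max_p) and (mean_ic > min_mean_ic), mean_ic, p_value
```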

As a side note, factors optimally trade on different time scales, but this would seem to be part of engineering the alpha combination, and not a consideration in the first-cut to decide if a factor has merit.

"An alpha is an expression, applied to the cross-section of your universe of stocks, which returns a vector of real numbers where these values are predictive of the relative magnitude of future returns" per https://www.quantopian.com/posts/a-professional-quant-equity-workflow.


Good question, I’d be interested as well. Basically I’ve relied on p-values below 0.05 with a mean IC of above 0.01. And Thomas’ odd/even quarters for training/testing factors to avoid overfitting.

Thanks Joakim -

How do you handle the Q risk factor exposures (sector and style)?

Morningstar has some 900 factors one can download for any set of stock price series. For all of them, we can apply any type of smoothing or filtering over any lookback period of our choice that fits within the time series themselves. A way of saying we can also force all that data to say what we want it to say.

Which 4, 5, or 6 of those factors will be relevant going forward?

With 6 factors, you could, by extensive testing, find which set, out of the 7.25 ∙ 10^14 possible combinations, might be worthwhile over some long-term past dataset.
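For reference, that count is simply the binomial coefficient "900 choose 6", which is easy to verify (math.comb requires Python 3.8+):

```python
from math import comb

print(f"{comb(900, 6):.2e}")  # ~7.26e+14 possible 6-factor subsets of 900 factors
```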

The question would be: will it give you an advantage going forward? Will you have time to do all those tests?

We assume the data has been filtered, cleaned, rendered as accurate as possible, and delivered in a timely manner. We rarely question its accuracy and almost take it for granted, even though we know that a lot of that data has been pasteurized before appearing in any company's books. Also, the as-of date of that data does not mean you would have had access to it at that time. Still, you can use it for testing purposes.

The point being made is that even if you have some factors, whichever set you choose, it only represents your own data filtering mechanism on top of what the factors might provide. A smoothed EPS series with some lookback period is a different time series than its original as-of-date version. The same goes for any of the series. Once you manipulate, in some way, any of the Morningstar data series, you technically get a new set of factors which could hardly be called predictive due to their own delayed lookback periods.

The relevance of some of that data should be questioned as well. Quarterly data require longer lookback periods to become significant: 100 quarters is still 25 years of quarterly data, and ten quarters is not enough to count as a reliable forecasting tool, especially if that smoothed data is already out of whack by half a quarter.

Guy -

I agree that the amount of data is important. For fundamentals data, I kinda had the same intuition that one needs many decades of data over many "business cycles" to firm up conclusions. It seems like Q might not have enough data, but I'm no expert. I guess the idea is to pull in other sources of information and/or rely on one's own professional/industry experience, versus just relying on Q data sets?

Stock A might be tracking alpha A perfectly and then switch to alpha B while stock B is completely all about alpha C.

Blue -

Not sure I follow. How would one apply the concept to evaluating an alpha factor?

@Grant,

As long as the ‘specific Returns’ are close to Total Returns, and also as long as the risk factors are within the bounds of the competition, I don’t worry about them too much. I used to constrain them very tightly in the Optimize API (e.g. in my old PARTY algo), but I think you lose alpha that way and might also be prone to overfitting, so I don’t do that anymore. Sometimes I instead use Thomas’ Orthogonalize Alpha function if I want to squeeze out only specific Returns.

On second thought, that's not completely true. I do worry about them and look at those risk exposures quite a bit. Ideally I'd like them to 'naturally' be as low as possible, without constraining or orthogonalizing them. I don't really know how to do that though, other than finding unique alpha. If you know, or find out, I'd be very interested to know.
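For what it's worth, a minimal sketch of the orthogonalization idea (not Thomas' actual function) is to regress the factor cross-sectionally on the risk-factor exposures each day and keep the residuals, which are uncorrelated with those exposures in-sample. Here `factor` is a Series of one day's factor values and `exposures` a DataFrame of risk-factor loadings, both indexed by asset:

```python
import numpy as np
import pandas as pd

def orthogonalize(factor, exposures):
    """Residual of a cross-sectional OLS of factor on risk-factor exposures.

    factor: pd.Series indexed by asset (one date's factor values).
    exposures: pd.DataFrame indexed by asset, one column per risk factor.
    """
    common = factor.index.intersection(exposures.index)
    y = factor.loc[common].values
    X = np.column_stack([np.ones(len(common)), exposures.loc[common].values])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS fit
    residual = y - X @ beta                       # the part the risk factors can't explain
    return pd.Series(residual, index=common)
```

Applied date by date, this strips out whatever portion of the signal is a linear bet on the published risk factors; whether that helps or "loses alpha," as noted above, is the open question.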

@Grant, when considering dataset, we have to make choices.

When designing or modifying strategies, I start with the theoretical big picture: where it will all end, from a long-term perspective. Such a strategy will have to make its own place among others, not only by outperforming the market in general but also by outperforming its peers.

I first look at it from a mathematical point of view. If I can put an equal sign somewhere, I consider it a lot better than an opinion or some guesses. An equal sign is very cruel: it answers only to yes or no. You made enough profits or you did not. As simple as that.

You can consider the whole stock market as this big price matrix P. Each column representing a stock over the duration of the portfolio. Each EOD entry \( (p_{d, j}) \) viewed as recorded history. We cannot change any of it. It is part of this big blob of market data.

For simulation purposes, and other considerations, we only take a subset of this huge price matrix on which we intend to trade. The selection process could be about anything. You will only barely scratch the surface of possibilities anyway.

One thing is sure, you cannot test the humongous set of available scenarios. You have to make a choice on whatever criteria you might find reasonable. Even something based on common sense will do.

Whatever that selection may be, it will be unique. In the order of 1 in 10^400+ possibilities. So, the point is: make a selection and live with it. It is important, but it is not the most important thing in this game. Time is. Will your trading strategy last or blow up?

The next thing that is important is the trading strategy H, as in Σ(H∙ΔP), your strategy's payoff matrix. Since the price difference matrix ΔP is also historical data, or going forward, never seen data that eventually will become history, all you have at your disposal to control anything is your trading strategy H. It is the same size as your price matrix subset and holds the value of your ongoing inventory in every position that you have taken, or will take, over the life of the portfolio.

This makes H (your strategy), the most important part of the equation. It is trade and time agnostic. And this too has its own set of implications.
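To make the notation concrete, here is a small numerical sketch (made-up numbers) of what Σ(H∙ΔP) means in code: with a holdings matrix H and a price-difference matrix ΔP of the same shape (days × stocks), the strategy's total P&L is just the sum of their elementwise product:

```python
import numpy as np

rng = np.random.default_rng(0)
days, stocks = 252, 5

prices = 100 + np.cumsum(rng.normal(0, 1, size=(days, stocks)), axis=0)
delta_p = np.diff(prices, axis=0)                   # ΔP: day-over-day price changes

H = rng.integers(0, 100, size=(days - 1, stocks))   # shares held going into each day

payoff = np.sum(H * delta_p)                        # Σ(H∙ΔP): total P&L of the strategy
print(round(payoff, 2))
```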

You could add an information and a decision matrix to all this. It would result in something like Σ(H∙I∙D∙ΔP). But that would not change the price matrix (P). It would only qualify the reasons for the volume changes in the stock inventory. But already the H matrix has the result of all the trade decision-making H = B - S. Technically, it is the only thing of interest for the relatively short-term trader. What governs your buying and selling matrices?

I look at the trade mechanics to answer that problem. I want the mechanics of the trade to be independent of the stock selection process. This way, I could change the stock selection and still find profits. It is like designing an intricate maze (using trading procedures) where the stock price will hit some of the boundaries and trigger trades. I try to find in the math of the problem the solutions to better returns.

However, I do know that if my trading strategy, for whatever reason, could not survive or generate more than the averages over some extended period of time over some historical data, then it has no value going forward. It ends up in the “remember this one and do not do that again bin”.

As much as long-term investors develop a long-term investment philosophy, the short to medium-term traders have to also develop a trader's philosophy. An understanding of what is being done during the whole trading process. And this can be driven by equations.

@ Joakim -

I never did quite understand the Q thinking in imposing the specific (idiosyncratic) returns versus common returns jazz, and now that they've gone over to a "signal combination" model for the Q fund, the requirement to have each algo with broad sector diversification and low style risk exposures makes even less sense, in my mind. But as the expression goes, "You can't fight city hall."

Here's a definition:

Specific Return
The return on an asset or investment over and above the expected return that cannot be explained by common factors. That is, the specific return is the return coming from the asset or investment's own merits, rather than the merits common to other, similar assets or investments. It is also called the idiosyncratic return.

What this implies is that common factors, which by definition are well-known, are predictive and can generate returns (better than just chance). This would say that the market at a gross level is not so efficient at all. This makes no sense. The alpha associated with common factors would have decayed a long time ago (if it ever existed in the first place). By definition, a factor is predictive or it is just noise. I suspect that common factors are really common noise.

If you look at A Professional Quant Equity Workflow, the risk exposure management doesn't get applied until the portfolio construction step. So, over-constraining risk exposure of the individual alpha factors may be self-defeating, in the sense that with enough diverse factors, the net risk exposure may be low enough, upon combination. My intuition is that there are dynamic diversification effects at play, and so if each alpha factor is orthogonalized with respect to the risk factors over some look-back period, any benefit of dynamic effects will be lost by applying the risk management too early in the workflow. There is a reason it is applied in the alpha combination step.

So, maybe for each factor to be evaluated, there is an "accretion test"--when it is combined with other factors and then the risk exposure constraint is imposed, does it add or subtract from the performance (with a test for over-fitting applied, as well)?

The shift to signal combination raises a valid question as to when to apply risk exposure management: at the individual signal level or at the signal combination (portfolio) level? At the individual signal level, alpha factors generated on the broad-based QTU universe without constraining for anything are what I call "raw signals," and anything with a Q Sharpe of 1 and above is likely a good candidate. These raw signals inherently account for risk mitigation through asset diversification just by the mere evaluation of the QTU universe, regardless of industry or sector. In short, I consider the factor performance across the broad-based universe devoid of any other constraints as to style, risks, and neutralities. The factor's performance is measured by one metric, Q Sharpe (returns/risks). I would then gather the raw signals that pass this threshold and apply leverage, dollar-neutral, position concentration, and turnover constraints to the signal combination. You may notice that I left out beta and risk model (sector and common returns) constraints, as I deem them unnecessary because they should have been inherently negated by the mere diversification of the QTU universe. Just my two cents.

@ James -

Yeah, it is not clear if the switch to a "signal combination" approach at the Q fund level means anything for writing algos. The contest rules were not changed, and so presumably they still represent general requirements for an algo (although I have been told that more niche strategies are considered, as well).

I would think that the "signal combination" approach, with each algo effectively an alpha factor in the architecture, to be assigned a weight and compensated proportionately, could greatly expand the participation rate of the users. This way, for example, one could write an alpha factor that trades a single industry sector or niche market, where one might have some expertise to bring to the table.

I gather that the Q fund alpha combination step may intercept the raw alpha vector input to MaximizeAlpha (or TargetWeights) anyway, so the Optimize API constraints of a given algo may not matter (at least that's how I would consider approaching the "signal combination" at the fund level, versus using the output of order_optimal_portfolio). So getting worked up over the Optimize API constraints probably doesn't make any sense; it all gets blended raw into the final combined alpha vector, and then constrained. This would be consistent with the architecture of such a fund operated under the "signal combination" paradigm, versus a "fund of funds" approach.

Quantopian's alpha generation problem is this:

Σ(H∙ΔP) = ω_a ∙ Σ(H_a∙ΔP) + … + ω_k ∙ Σ(H_k∙ΔP) + … + ω_z ∙ Σ(H_z∙ΔP)

They are trying to find the weighting factors ω that would maximize Σ(H∙ΔP). The problem is somewhat simplified since each of those trading strategies will be considered as some alpha source. However, each of those strategies does not generate the same amount of profits, and therefore they should be ordered by decreasing relevance. This would imply that the highest-producing strategy Σ(H_a∙ΔP) receives the highest weight ω_a and the highest allocation. Treating each strategy as equal is technically nonsensical. Strategies should battle to stay relevant and above a predetermined alpha threshold. This should read as: many try but few are selected.

This does not change the author's compensation method: he/she receives 10% of the NET profits generated by ω_a ∙ Σ(H_a∙ΔP). Note that Quantopian intends to leverage these trading strategies. It was not said anywhere whether the author's strategy would be compensated accordingly: lev_a ∙ ω_a ∙ Σ(H_a∙ΔP) ∙ 10%. I view the word NET as net of all trading expenses.

The advantage for Quantopian in considering strategies as alpha signals is one of volatility reduction by diversification and outcome control. Simply by dynamically changing the weights ω_i(t) on the trading strategies, they can control what they want to see and treat each strategy as if it were a simple alpha signal, a simple factor in their portfolio equation.

Guy -

The payout to each algo is its fraction of the total allocation (its weight) times the total net profit of the entire fund (not sure about leverage) times 10%. So the payout could be more, less or equal to the algo’s share of the total net profit. It all depends on how the algo weight is set and the net fund profit. The actual algo return doesn’t drive its payout, as I understand.

@Grant, we are saying the same thing. The sum of weights is Σω_i = 1. However, Quantopian could treat any of the trading strategies it considers for its fund as either overweight or underweight with varying degrees of leverage.

Quantopian in their business update said they had 25 authors and 40 allocated strategies. One author has the equivalent of 5 allocations ($50M). His or her strategy is more heavily weighted and should grab more of the net profits compared to others. The share of net profits will be ω_a ∙ Σ(H_a∙ΔP) / Σ(H∙ΔP). Which is what the equation said.

Guy -

Your equation ω_a ∙ Σ(H_a∙ΔP) / Σ(H∙ΔP) says that the payout depends on the profit of the algo, right? This is not the case. The algo could make $0 actual profit (Σ(H_a∙ΔP) = 0), but still get a payout if its weight is not zero.

@Grant, whatever the weight, if Σ(H_a∙ΔP) = 0, then ω_a ∙ Σ(H_a∙ΔP) = 0.

@ Guy - have a look at https://www.quantopian.com/get-funded. The relative allocation (weight) of capital determines the share of the overall net fund profit but the forward return of an algo doesn’t determine the payout to the author (although one would expect the trailing returns to impact the algos weight in the fund).

@Grant, would you pay anything as compensation for a strategy's participation in the Quantopian portfolio if all it generates in profits is Σ(H_a∙ΔP) <= 0? I do not think Quantopian would pay anything either for non-performing strategies. Otherwise, all the poorer-performing strategies would drain the potential rewards of the best-performing ones.

@ Guy - With an extreme case, I was just illustrating the fact that the payout scheme has changed to:

Royalty = (weight of algorithm in signal combination) * (total net profit of the combination)

Your math and comments seemed to be inconsistent with this new formula, and so I wanted to make sure we were all on the same page. My interpretation of the change is that Q may be combining the raw alpha vectors from all of the algos, versus first running each through the Optimize API, and then combining the orders. If this is the case, the idea of individually traded strategies is not relevant. Based on what I can glean, the alphas from each algo are combined as a simple linear combination, since, per the Get Funded page "Each author’s share of that pool is proportional to their allocation or weight within the broader Quantopian investment strategy." On Quantopian Business Update, however, the description is a more general "The weight of your signal will be based on the quality of the alpha" and there is no reference to the weight being equivalent to the relative allocation of the algo in the fund. The linear combination of alpha vectors implied by equivalency of the weights and allocations seems over-constrained. I would think that a more general ML approach, using feature importance for the weights would be better, followed by application of the Optimize API for risk management. This would be more along the lines of weighting based on the "quality of the alpha" (i.e. the importance of the alpha in forecasting) without the constraint of a simple linear combination of alphas in the alpha combination step.
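To make the disagreement concrete, here is a toy comparison of the two readings of the payout formula, with entirely made-up numbers. Under the Get Funded wording, the royalty depends on the algo's weight and the whole fund's net profit, not on the algo's own P&L:

```python
# Toy numbers (made up) for two algos in a signal combination.
weights = {"algo_a": 0.6, "algo_b": 0.4}        # ω_i: relative allocation/weight in the fund
algo_pnl = {"algo_a": 0.0, "algo_b": 500_000}   # Σ(H_i∙ΔP): each algo's own net profit

fund_net_profit = sum(algo_pnl.values())        # total net profit of the combination
royalty_rate = 0.10

# Reading 1 (Get Funded page): weight * total fund net profit * 10%
payout_a_fund = weights["algo_a"] * fund_net_profit * royalty_rate     # 30,000 despite $0 own P&L

# Reading 2 (ω_a ∙ Σ(H_a∙ΔP) above): weight * the algo's own net profit * 10%
payout_a_own = weights["algo_a"] * algo_pnl["algo_a"] * royalty_rate   # 0

print(payout_a_fund, payout_a_own)
```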

Back to the main topic of this thread ("How to accept/reject alpha factors?"), reportedly Q is working on some way of evaluating correlation to algos already in the fund. This might be the first cut in whether to accept/reject an alpha factor. Similar to the style risk factors, there's no point in submitting something already in the Q fund (although perhaps there's an advantage to diversifying the same strategy across multiple authors, assuming that the expenses associated with acquiring and maintaining an author are negligible...it also could have an amplification effect, since engaging more authors with real money will tend to attract more users via zero-cost peer-to-peer marketing..."Hey, I made $1000 on Q. You might want to give it a try, too.").

@Grant, we are saying the same thing. Let's take Q's formula:

Royalty = (weight of algorithm in signal combination) * (total net profit of the combination)

weight of algorithm in signal combination = Σ(H_a∙ΔP) / Σ(H∙ΔP) = ω_a

total net profit of the combination = Σ(H∙ΔP)

or:

Royalty = [(ω_a) ∙ (Σ(H_a∙ΔP) / Σ(H∙ΔP))] * Σ(H∙ΔP) = ω_a ∙ Σ(H_a∙ΔP)

and your 10% royalty remains proportional to what your trading strategy ω_a ∙ Σ(H_a∙ΔP) is producing. If you produced more than some other trading strategy, you should get more in royalties. Just as in the contest leaderboard.

Should you first want to add the outcome of all the strategies and divide that by the number of allocated strategies, this would give:

Royalty share = Σ [ω_a ∙ Σ(H_a∙ΔP) + … + ω_k ∙ Σ(H_k∙ΔP) + … + ω_z ∙ Σ(H_z∙ΔP)] / i

which would penalize the higher performers to the benefit of the low-performing strategies. Low-performing strategies would find that scheme quite acceptable. In a way free money.

Nonetheless, Q should provide its “mathematical” attribution formula to clear things up.

@Grant, back to topic. Let's start with: there is no universal stand-alone factor able to explain the gigantic ball of variance that we see in stock market prices. There are gazillions of factor combinations we could test over past market data and it still would not give us which combination would best prevail going forward.

The only thing you can do is compare one factor to another within this ocean of variance where a lot of randomness also prevails. However, whichever set of factors you select, they will form a unique combination that will have to deal with the portfolio's selected stocks. Change the stock selection method and the strategy will behave differently, giving different results: Σ(H_a∙ΔP) > … > ω_k ∙ Σ(H_k∙ΔP) > … > ω_z ∙ Σ(H_z∙ΔP)

You are always trying to find a better strategy than Σ(H_a∙ΔP) in order to get better results. You can also add a whole set of constraints in order to control the general behavior of the strategy as in the contest rules for instance. Or you can tweak it to death to really show that past results are no guarantee of future performance. But it will not change the mission, you want the better performing one nonetheless.

The preoccupation might not be looking for factors per se, but for trading methods you can control. Otherwise, you are at the mercy of your code, of your selection process, and your short-term trading philosophy. Not to mention the math of the game.

Guy -

your 10% royalty remains proportional to what your trading strategy ω_a ∙ Σ(H_a∙ΔP) is producing. If you produced more than some other trading strategy, you should get more in royalties. Just as in the contest leaderboard.

Your statement is incorrect. The royalty is the algo weight times the net profit of the entire fund. What the algo produces is not relevant (although its historical returns must factor into computing the algo weight in the "signal combination" algorithm). I'm not sure what else to say...have a look at these two links again (and contact Q if it is still not clear):

https://www.quantopian.com/get-funded
https://www.quantopian.com/posts/quantopian-business-update

Grant

Is there an objective, step-by-step method to accept/reject alpha factors?

I would do the following (though not all of it is supported by Q at this time, I guess):
1. Simple regression of the alpha factor on forward returns (Alphalens metrics - p-value/IC etc.). If significant, proceed to Step 2.
2. Control for Q factors. Throw in the Q factor returns (multiple linear regression) and check if the alpha factor is still significant (this is not yet supported by Alphalens). If it is, proceed to Step 3.
3. Control for existing alpha factors. Throw in the other alpha factors (another multiple linear regression) and check if the additional alpha factor is significant. This will not only help in finding the significance of the additional alpha factor but will also help in finding the appropriate linear combination of all alpha factors.
4. Control for other participants' total alpha factor (this can be controlled by Q internally). However, at this time Q only gets to see a participant's trades/positions and not the alpha factor, so this may not be possible in the current framework.

I don't know if it's possible to implement Step 2, as historical Q factor returns are not exposed. The methodology is shared, however, so they can be recreated externally.
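Steps 1-3 above can be approximated outside of Alphalens with ordinary least squares, e.g. via statsmodels. A rough sketch, where `fwd_ret` is a Series of forward returns, `candidate` a Series of the candidate factor's values, and `q_factors`, `existing` aligned DataFrames of Q risk-factor returns and already-accepted factors (all of these names are placeholders):

```python
import statsmodels.api as sm

def factor_significance(fwd_ret, candidate, controls=None, alpha=0.05):
    """OLS of forward returns on a candidate factor, optionally controlling
    for other regressors (risk factors and/or existing alphas).
    Returns (significant, p_value, coefficient) for the candidate."""
    X = candidate.to_frame("candidate")
    if controls is not None:
        X = X.join(controls, how="inner")
    data = X.join(fwd_ret.rename("fwd_ret"), how="inner").dropna()
    model = sm.OLS(data["fwd_ret"], sm.add_constant(data.drop(columns="fwd_ret"))).fit()
    p_value = model.pvalues["candidate"]
    return p_value < alpha, p_value, model.params["candidate"]

# Step 1: candidate alone; Step 2: add Q factor returns; Step 3: add existing alphas.
# ok1, p1, b1 = factor_significance(fwd_ret, candidate)
# ok2, p2, b2 = factor_significance(fwd_ret, candidate, q_factors)
# ok3, p3, b3 = factor_significance(fwd_ret, candidate, q_factors.join(existing))
```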

I edited my original post above to include a link to Thomas W.'s orthogonalize function, since it seems like it could be handy (Joakim had already pointed it out, but I figured it would help to promote it to the top level of this discussion thread).

It raises an interesting point, since on the Get Funded page it says clearly "We are building a portfolio of uncorrelated investments." In practice, what does this mean for evaluating alpha factors and algos? In the extreme, it may mean that only the uncorrelated returns of an algo (the specific/idiosyncratic versus common) are of interest, where anything that is a published risk factor (sector and style) or already represented in the fund would be considered 'common.' Basically, one has to orthogonalize with respect to the risk factors and the fund to determine if there's any specific/idiosyncratic alpha of interest to the fund.

Any guidance on how to use the time scale of the factor in determining if it should be accepted? For example, here Thomas W. points out "Here is a new iteration of this tearsheet. Instead of cumulative IC it now just displays daily IC which makes it easier to see which horizon the signal is predictive for." So what is one to do with this information, as a first-cut in evaluating a factor? Is there an optimal range for the peak in the IC versus delay, for example?
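One simple way to look at the time-scale question is to compute the mean rank IC of the factor against returns realized 1, 2, ..., N days later and see where the profile peaks and how fast it decays. A minimal sketch (not Thomas' notebook), assuming `factor` and `daily_returns` are date-by-stock DataFrames:

```python
import pandas as pd
from scipy import stats

def ic_by_delay(factor, daily_returns, max_delay=20):
    """Mean daily rank IC of the factor vs. the 1-day return realized `delay` days later."""
    mean_ics = {}
    for delay in range(1, max_delay + 1):
        lagged = daily_returns.shift(-delay)        # return of day t+delay, aligned to day t
        ics = []
        for date in factor.index.intersection(lagged.index):
            f = factor.loc[date].dropna()
            r = lagged.loc[date].reindex(f.index).dropna()
            if len(r) > 2:
                ics.append(stats.spearmanr(f.reindex(r.index), r)[0])
        mean_ics[delay] = pd.Series(ics).mean()
    return pd.Series(mean_ics, name="mean_IC")      # the peak location is roughly the factor's horizon
```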

A way to evaluate the value of factors is with relative performance. A trading strategy has structure. It is usually quite simple, a single do while loop (do while not finished, if this then trade in this manner). And whatever stock trading strategy, it can be compared to a benchmark like SPY. Σ(H_a∙ΔP) > Σ(H_(spy)∙ΔP) ?

If all that is different between two strategies is the use of one factor, then comparing those two strategies would be as easy as Σ(H_a∙ΔP) > Σ(H_b∙ΔP) > Σ(H_(spy)∙ΔP) ? All other things being equal.

However, if you change the stock selection method, the number of stocks to trade, the time horizon, the leverage, the rebalance timing, or the available initial capital, the picture might change drastically. Saying that the factor difference in question might have been good at times and bad at others. As if the factor had some relevance in the stock selection _a and not so much in _b just because a different set of stocks were considered. And depending on the stock selection method, there could be gazillions of gazillions of possibilities.

This might render the value of an additional factor almost irrelevant as we increase the number of factors treated, especially if the factor considered is not among the primary factors (f_1 > f_2 > … > f_4 > … > f_n), since these factor values decay as you increase their number (illustrated in the original post).

It would require more extensive testing, meaning a much greater number of stock selection methods just to partially ascertain the value of a single factor. As if trying to give statistical significance to a distant factor that might be more dependent on the fumes of variance than anything else.

Is there a need to go into that kind of exhaustive search beyond a few primary factors when you have no certainty that a more distant factor might prevail going forward? And are the few factors considered that relevant if they have little predictive power? The more distant the factor, the less predictive it should be, since its weight relative to the others is diminishing. These factors will remain in order of significance, no matter the set of factors we use.

@ Guy -

I think one needs a very high Sharpe ratio to be able to have much confidence whatsoever using even 10-20 years of look-back. I haven't seen an example in a while, but the Bayesian cone thingy that is used on some of the Q plots illustrates this point. It gets broad in a hurry, and then it's anybody's guess, unless the Sharpe ratio is really high (and if it is, it is probably due to some form of bias/over-fitting).

@Grant, on the Sharpe thing, not necessarily. Normally, I would tend to agree, but.

For instance, from my modifications to your clustering scenario (see: https://www.quantopian.com/posts/alpha-combination-via-clustering#5ceda432e54ae8431d338b37), my average Sharpe was 1.81. And in my modified robo-advisor script scenario (see: https://www.quantopian.com/posts/built-robo-advisor#5cc0ab30c2bf4d0be07243fd), the average Sharpe was 1.85. The average Sharpe ratios could be considered almost equal, and not that high, all things considered. Nonetheless, the difference in performance between these two strategies was extremely high. They used quite different trading methods, even if both used an optimizer to settle trades.

In the former case, everything is done to reduce volatility and drawdown while squashing the beta to near zero. In the latter scenario, the strategy seeks volatility and trades in order to increase performance. Evidently, the price paid was a higher beta (0.61-0.65), even though the selected stocks were all high-beta stocks (>1.0) and the portfolio should therefore have averaged greater than 1.0. But that was not the case. The differences in performance are due to the trading methodologies used and their long-term objectives. In essence, the result of the game they respectively played.

The Bayesian cones displayed in the tearsheets have to expand due to recorded portfolio volatility and its increasing variance over time. These cones will widen for any trading scenario, and they will grow wider and wider the more time you give them. We have a hard time predicting tomorrow, and there we go extrapolating a year or two ahead. What would you expect except a widening cone, since the "uncertainty" of the estimates is certainly rising?

Why is it that anyone generating higher performance levels is considered as over-fitting? Can't we generate some higher performance simply because we game the game differently?

For example, I plan for my game to grow with time. I usually concentrate on increasing the number of trades and structuring the game so that the average net profit per trade also rises with time. Should I not be able to do that? And if not, why not?

My sense is that chasing performance by ranking alpha factors and picking the top 5 or so will lead to a lot more volatility than incorporating all factors that have predictive power on an individual basis, regardless of relative risk-adjusted returns. Joakim summarizes one recipe:

Basically I’ve relied on p-values below 0.05 with a mean IC of above 0.01. And Thomas’ odd/even quarters for training/testing factors to avoid overfitting.

This still leaves the task of combining the accepted alpha factors, which could include some performance-based weighting (alpha combination is not the main topic of this thread). However, it would need to incorporate the statistics of very limited trailing data sets, relative to economic time scales. There is a question of statistical significance in claiming X > Y; if X and Y have relatively large error bars, then one might as well say X = Y (all other things being equal).

Why is it that anyone generating higher performance levels is considered as over-fitting?

The question is really how much out-of-sample data is required to determine that something other than luck is the mechanism for higher performance? For example, have a look at https://6meridian.com/2017/11/could-the-current-sharpe-ratio-for-the-sp-500-be-a-signal-of-things-to-come. It's a plot of the rolling 12-month Sharpe ratio (SR) for the S&P 500 over the last 25 years. Given the underlying volatility in the SR, how much out-of-sample data would be required to prove that a strategy does better than the market on a risk-adjusted basis, and that skill was the reason? There is such a thing as skill, but my read is that proving it might take a lifetime.
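As a rough back-of-the-envelope (assuming roughly i.i.d. returns, which market returns are not), the t-statistic of an annualized Sharpe ratio grows as SR ∙ sqrt(years), so the years of out-of-sample data needed for ~95% confidence that SR > 0 is roughly (1.96 / SR)^2:

```python
def years_for_significance(annual_sharpe, z=1.96):
    """Approximate years of track record needed so that t = SR * sqrt(T) exceeds z.
    Assumes roughly i.i.d. returns -- a strong assumption."""
    return (z / annual_sharpe) ** 2

for sr in (0.5, 1.0, 1.5, 2.0):
    print(sr, round(years_for_significance(sr), 1))
# 0.5 -> ~15.4 years, 1.0 -> ~3.8, 1.5 -> ~1.7, 2.0 -> ~1.0
```

And that is only significance against zero; distinguishing skill from luck relative to a benchmark takes far longer, which is consistent with the multi-decade figure cited below.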

@ Joakim -

And Thomas’ odd/even quarters for training/testing factors to avoid overfitting.

Is there an example? And what are the recommended pass/fail criteria?

It is not intuitive that comparing odd/even quarters would somehow be able to sniff out over-fitting once the factor has been written. For example, if the author has inadvertently or intentionally applied look-ahead bias, then the bias will likely be the same for any given quarter. The only way to detect the bias, it would seem, would be to wait for out-of-sample data and then try to detect a change (which is why the contest is 6 months...which, I guess, might be enough to detect over-fitting if something grossly funky is going on--but only if the factor time scale is short relative to 6 months, and the signal-to-noise ratio is sufficiently high, which is not likely to be the case for an individual factor).

I continue to be confused by the new signal combination approach to constructing the Q fund. I'm pretty confident that Q should be incentivizing individual factors, not multi-factor algos, if they want to do the best job in combining the factors. This would say that each factor should get its own algo, which would allow Q to capture the "born on date" to determine how the factor has performed over 6 months or more out-of-sample. If this is correct, there's probably a way to construct a "factor algo" that allows for evaluating the individual alpha vector, without all of the irrelevant risk control and trading gobbledygook.

@Grant,

I'm pretty confident that Q should be incentivizing individual factors, not multi-factor algos, if they want to do the best job in combining the factors.

I think that your confusion is related to your above assumption. About 95% of fund algorithms are sourced from contest results and the rest from Q's automated screening processes. That said, most if not all of these submissions are multi-factor combinations, each of which is treated as an individual "signal" in the signal combination of the Q fund. Each individual signal has its own tradable universe within the broad-based QTU universe, position concentration, time to start trade execution, etc. It is also targeted that these individual signals be uncorrelated with each other. So there is a layer that takes all these uncorrelated individual signals, post-processes them (i.e., nets out positions of the same stocks), and somehow applies a weighting scheme (perhaps proprietary to Q) that meets the desired trading strategy after factoring in the associated risk-mitigating schemes.

@Grant, you say:

There is such a thing as skill, but my read is that proving it might
take a lifetime.

I agree.

That kind of study has been done. It turns out it would take some 38 years for a professional money manager to show skill prevailed over luck at the 95% level based on sufficient data (10 years and more). No one is waiting or forward-testing for that long. And even if they did, they would again be faced with the right edge of their portfolio chart: uncertainty, all over again.

In all, it would be a monumental waste of time, opportunities, and resources. It is partly why we do all those backtests: to demonstrate to ourselves whether, over some past data, our strategies succeeded in some way to outperform or not. Evidently, we will throw away those strategies that had little or no value whatsoever. If a trading strategy cannot demonstrate that it could have survived over extended periods of time, why should we even consider it in our arsenal of trading strategies going forward?

Often, in the trading strategies I look at, the impact of the high degree of randomness in market prices is practically ignored. It is as if people are treating the market price matrix P as a database of numbers from which they can extract some statistical significance of some kind, either at the market or the stock level, and from there trying to make sense of all those numbers using all types of analysis methods.

Only under a high degree of randomness can any of the following methods have their moments of coincidental predominance, enough to make us think they might have something kind of predictive. Whether it be a sentiment indicator trying to show the wisdom of crowds, or machine learning, deep learning, or artificial intelligence, they will all have their what-if moments. Whether you use technical indicators, parameters, factors, residuals, principal component analysis, wavelets, multiple regressions, quadratic functions, or more, it might not help much in deciphering a game approaching a heads-or-tails type of game. Upcoming odds will change all the time and even show somewhat unpredictable momentary biases.

Nonetheless, it is within this quasi-random trading environment that we have to design our trading strategies. And design them in such a way as to not only produce positive results and outperform market averages, but also, at the same time, outperform our peers, not just over the immediate momentum thingy, but ultimately over the long term.

The end game is what really matters. Can our trading strategies get there? That is the real question. What kind of game can we design within the game that will allow us to outperform market averages and our peers? What kind of trading rules should we implement in order to do so? This goes back full circle to what our trading strategy H does, to how we manage our stock inventory, and what will be the outcome of our forward-looking payoff matrix:

Σ(H_(ours)∙ΔP) > Σ(H_(peers)∙ΔP) > Σ(H_(spy)∙ΔP) > Σ(H_(others)∙ΔP)?

@ James -

My main hypothesis is that rather than incentivizing multi-factor algos via the contest it would be better for Q to handle the alpha combination step at the fund level across all factors. Otherwise Q hasn’t really changed much with their switch to a signal combination approach. My bet is that there is some performance degradation by not accessing all of the factors; the pre-combination at the algo level may be sub-optimal.

@Grant, I think @James expressed it correctly. Q cannot use a single factor from within a trading strategy. It would require that they know which factor is within a particular trading strategy and its impact. That, in turn, would require that Q sees the code. It would go against the prime directive that one's code is protected from prying eyes. If Q can see your code at their own discretion, they do not need you in a future allocation picture or in the contest winning circle.

What Q can do is take the outcome of a strategy as a whole and look at its trading order output as a single signal or trade vector it can mix with other signals from other strategies. As @James noted, this might give Q the ability to pre-add or pre-remove redundancies and cross-current trades, thereby going for the net trade impact for a group of strategies, with the effect of slightly reducing overall commissions and increasing trade efficiency.

However, even doing this requires a 3-dimensional array, Σ(H∙ΔP), with strategy, stock, and price as respective axes, so that Q's job becomes weighting each strategy's contribution to the whole, under whatever principles they like, as in ω_i ∙ Σ(H∙ΔP)_i, where ω_i is the weight (i = 1 to k) attributed to strategy i. The problem becomes even more complicated since, due to overlapping market orders, each strategy will nonetheless continue to act as if its own trades had been taken, thereby distorting the future outcome of the affected strategies as well as Q's mix.

Also, they cannot use that many trading strategies in their mix, since the number of stocks appearing in more than one strategy will increase with the number of strategies considered. This would in turn increase the redundancy problem and the distortion impact even further, the more strategies they add to their signal mix.

@ Guy -

All I'm saying is that at the fund level, Q should follow the workflow outlined by Jonathan Larkin on:

https://www.quantopian.com/posts/a-professional-quant-equity-workflow

I could be wrong, but I think they'd be better off taking in individual alpha factors, versus having authors chug through the entire workflow, combining lots of alpha factors in an attempt to smooth out returns, manage risk, etc. My impression is that it's not what a hedge fund would do typically; each narrowly focused alpha would feed into the fund global alpha combination, as Jonathan has shown. Again, my intuition could be off, but incentivizing authors to do the entire workflow, basically constructing a super-algo, is the wrong approach.

I guess if I were tasked with doing signal combination, I'd want the individual signals (i.e. the alpha factors), versus having them pre-combined. For example, would it be better to have access to all ETFs, or just a handful of them? The number of degrees of freedom for the former is much higher.

@Grant, the problem here is the individual's IP.

In a hedge fund, they can mix and combine alpha signals any which way they want, at the portfolio and signal level. They have access to it all; it is their code after all. Q, however, is not "allowed" to look inside a trading strategy, otherwise it blows the trust assured by their fiduciary role. If we ever saw that they had looked inside our trading strategies, the limited trust we might have would simply fly away.

You ask: For example, would it be better to have access to all ETFs, or just a handful of them? Due to the potential liquidity problems, it will turn out to be just the most active ETFs that should be of interest. You still need someone on the other side to take the trade.

@ Guy -

To evaluate the quality of an individual alpha factor, it is not necessary to review the code or even to know its “strategic intent.” It can be a black box (the same as an algo that combines multiple factors). Either the factor predicts the future across some universe of stocks or it doesn’t.

I think the liquidity problem goes away at the fund level once factors are combined. One isn’t trading the factors individually and independently.

@Grant,

@ Joakim -

And Thomas’ odd/even quarters for training/testing factors to avoid
overfitting.

Is there an example? And what are the recommended pass/fail criteria?

Sorry I missed this earlier. Here's the link to Thomas' post that includes the notebook for researching and cross-validating alpha factors over odd/even quarters. I find it quite useful for avoiding overfitting, though I'd prefer to use 'random' quarters instead to minimize any risk of fitting on seasonal trends. Up to you of course but personally I would include this one in your Alpha Research Toolkit.

I do also agree with your comment that future live data is the best for testing/measuring overfitting. This one is pretty good though I think when initially researching and developing alpha factors and we only have access to past data. Live paper trading, in my opinion, is more part of the final test before live trading with real money.

Using this notebook is not nearly as alluring (and addictive) as datamining in the backtester (which I'm still a victim of sometimes unfortunately), but I reckon it's a lot better for finding robust factors that are general enough to work reasonably well on future data. Assuming one starts off with some economic or market behaviour rationale first, and not just randomly trying stuff to see what works. Again, easier said than done. Datamining, because it works so well (on past data), tends to boost my ego and increase my blind-spots... :(
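A minimal sketch of the splitting idea (not Thomas' actual notebook): tag each date with its calendar quarter, fit/tune on one half of the quarters, and evaluate only on the untouched half -- either odd vs. even quarters, or, per the preference above, a random half of the quarters:

```python
import numpy as np
import pandas as pd

def quarter_split(dates, random=False, seed=42):
    """Boolean masks (train, test) over `dates`, split by calendar quarter.

    random=False: even-numbered quarters (Q2, Q4) train, odd (Q1, Q3) test.
    random=True:  a random half of all quarters is held out instead, which
                  reduces the risk of fitting to seasonal patterns.
    """
    dates = pd.DatetimeIndex(dates)
    quarters = dates.to_period("Q")
    unique_q = quarters.unique()
    if random:
        rng = np.random.default_rng(seed)
        test_q = set(rng.choice(np.asarray(unique_q), size=len(unique_q) // 2, replace=False))
    else:
        test_q = {q for q in unique_q if q.quarter % 2 == 1}
    test_mask = quarters.isin(list(test_q))
    return ~test_mask, test_mask

# usage: train_mask, test_mask = quarter_split(factor.index, random=True)
```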

Thanks Joakim -

I updated my original post above with a link to Thomas' over-fit testing tool.

Size of universe over which factor is effective

Just a suggestion but maybe also include ‘type’ of universe as well? Some factors may only be predictive on small/large caps, high/low volatility or high/low beta stocks, only certain sectors, etc.

Some factors may also be better short indicators, and others better long indicators? They may not all be symmetrical, in other words. One challenge might be how to combine all these different factors on different universes into a 'meta-factor' that the Optimizer can use? Perhaps by using multiple pipelines and combining them outside, e.g. in Rebalance?
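One hedged sketch of such a 'meta-factor': z-score each factor within its own universe, then sum the scores per stock, treating assets missing from a factor's universe as having no view. In an algo, each factor could come from its own pipeline (or its own pipeline column), and this combination could live in Rebalance, as suggested above:

```python
import pandas as pd

def combine_universe_factors(factors):
    """Combine factors defined on (possibly different) universes into one alpha vector.

    factors: list of pd.Series, each indexed by the assets of its own universe.
    Each factor is z-scored within its universe; scores are then summed per asset,
    with assets missing from a factor's universe contributing no view (NaN -> 0).
    """
    zscored = [(f - f.mean()) / f.std() for f in factors]
    combined = pd.concat(zscored, axis=1).sum(axis=1, skipna=True)
    gross = combined.abs().sum()
    return combined / gross if gross > 0 else combined  # normalize to target-weight scale
```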

Thanks Joakim -

Yes, there are many "flavors" of factors that would then need to be combined. One issue is that the potential for over-fitting goes up with the number of degrees of freedom (e.g. one could concoct a long-only factor and then apply it to the best-performing sector in recent times, and presto--we have a winner!).

Perhaps for a given factor, there's an optimal long/short tilt (without going totally long or short)?

One issue that I don't ever recall being tackled on Quantopian is financialization. At a gross level, governments and the financial sector are in cahoots and probably exert a dominant effect on individual stocks and the markets that may make this whole factor research thing moot. I wonder if there are companies/industries that tend to be more immune to the whims of finance, and actually make money the old fashioned way--by steadily providing goods and services to their customers that are more valuable than what their competition can provide.

Yes, good point. Overfitting I think is a real risk as the complexity of the factor model goes up. I find getting the balance right quite difficult and sometimes I have to remind myself to just follow the ‘KISS’ principle. The hunt for real alpha ain’t easy!

@Grant, you are looking for factors that might be almost irrelevant going forward due to the very nature of what you are trying to observe.

Some years back, someone in a study came up with the most highly (if coincidentally) correlated factor to the DJI index he could find. It turned out to be the price of turnips at a local market in Mumbai. Now, even knowing this, I would still not bet my shirt on it continuing to be highly correlated in the future.

In the '70s, studies showed the length of dresses correlated with the DJI index, and aspirin sales inversely correlated. And again, going forward, I would not bet that these kinds of trends might have or would have continued. Moreover, if we redid those studies today, we would find that they, in fact, did not. Note that I have not done those studies, do not intend to do them, and am not interested in ever doing them. But you do not need to do them to know that you would not rely on such things to build a stock portfolio.

Above, I added another consideration:

Sensitivity to specific time/day of week trading

There is some guidance by Thomas W. on https://www.quantopian.com/posts/an-updated-method-to-analyze-alpha-factors :

If your algorithm is sensitive to trading times, it's indicative of short-term alpha or some noise you're trying to overfit to. My advice is to set to always trade as close to the close as possible and never change it.

...we are now focused on your EOD holdings, and evaluating your algorithm using those (using the tearsheet I posted).

So is the standard tool for evaluating both factors and backtests now the notebook Thomas W. posted (and not Alphalens or something else)?

I'm also confused...if trades are to be entered at EOD, what is the point of having a slow, minutely backtester? Why not revert back to the daily backtester? Overall, my sense is that Q wants slowly varying daily alpha factors, that they can combine at the fund level--the minutely backtester (and even running backtests) would seem to be overkill, right? If I understand correctly, they just need alpha factors for the Q fund.

If your algorithm is sensitive to trading times, it's indicative of short-term alpha or some noise you're trying to overfit to.

This isn't exactly clear to me. Why is it unlikely that an alpha factor has latched onto some structural time-of-day inefficiency in the market? e.g. at open there is a consistent overreaction to some risk that sorts itself out by the end of each day as the market digests information. Seems very plausible to me. I'd like to see some evidence that algorithms that are sensitive to time-of-day scheduling consistently perform worse OOS.

what is the point of having a slow, minutely backtester?

Presumably it produces somewhat more accurate fill simulation than using EOD prices would. However, yes, if Quantopian wants us to find alpha factors where it doesn't matter whether we get into a position a day or two or three after the signal flashes, minutely zipline doesn't seem the ideal tool for the task.

Viridian Hawk -

Thanks. Yeah, I'm not sure I follow the guidance, and it is buried in Thomas' thread, but it would seem like a pretty important consideration. I'll send a question in directly to Q support.

To my original post, I added a link to Thomas W.'s How to Get an Allocation in 2019.

Some excerpts:

Set your trades to execute 1-2 hours before the close: since we evaluate your algorithm purely based on its EOD holding, it does not matter how you got into your portfolio at the close.

Remember that we only see your final EOD holdings, not your actual factor scores. Try to have your final portfolio be the most accurate representation of the original factor. To achieve this, you should use the optimizer as little as possible and not worry too much about exposures, especially if specific returns look good. Code-wise, you should not use MaximizeAlpha and instead TargetWeights. This is a good place to start: order_optimal_portfolio(opt.TargetWeights(weights), constraints=[]).

Sigh...

As far as I can tell, Q now wants individual daily alpha factors to plug into the workflow described on https://www.quantopian.com/posts/a-professional-quant-equity-workflow. It seems there is no point in thinking about alpha combination anymore, and the backtester is pointless, too. And the primary (sole?) tool for evaluation is the notebook posted on https://www.quantopian.com/posts/an-updated-method-to-analyze-alpha-factors. At least things are getting simpler (but in retrospect, it sure feels like we all wasted a lot of time and VC investor money on this Q project...).

Lately it occurs to me what a long, strange trip it's been.

Love the Grateful Dead reference hahaha!

There’s some good guidance in that post I reckon. You can still combine alpha factors, into a combined factor, and use the backtester with no trading costs set, to try to develop more robust (and hopefully somewhat novel) factors. Easy to overfit in the backtester though I think.

Grant, all I can say to you is keep on Truckin', man! You are one of the most brilliant idea men and coders in this forum, in my opinion. Change is good and hopefully you adapt to it. Wasted time is relative. Keep on Truckin'!

@ Joakim -

Thomas doesn't address the alpha combination question directly, but my bet is that the Q fund team would be better off with individual, narrowly-focused factors, leaving the general alpha combination step to them. Of course, if there is some economic rationale behind the alpha combination, then it's really one factor (e.g. alpha_1 and alpha_2 are combined based on an economic relationship between them, versus just summing their z-scores, for example). One problem is that a single alpha really needs to be evaluated in the context of the alphas already in the fund (or being considered for the fund). Any given alpha will do a lot of zigging and zagging, but still have value to the fund. I don't yet see a path for authors to get feedback in this regard.

@ James -

Thanks for the supportive words! We'll see where the changes lead. This move is in the right direction. The idea of having "signed authors" needs to be looked at, in my opinion. Thomas says "the contest continues to be our best resource for finding authors to fund; we look closely at the authors who enter the contest, given the skill and effort it demands." Is Q in the business of collecting authors, or collecting alpha factors? Focus on the latter, without effectively hiring a temporary virtual workforce (presumably under NDA, non-compete, etc.), and just harvest the alphas. But maybe I'm missing something...

I'm trying to follow the recent Q guidance on how to determine if one has a good alpha factor/algo. As far as I can tell, the dust is still settling on the new signal combination approach to the Q fund (announced publicly here: https://www.quantopian.com/posts/quantopian-business-update).

As Thomas W. mentions on https://www.quantopian.com/posts/how-to-get-an-allocation-in-2019:

We then select multiple factors from the community, potentially do some transformations on them (e.g., to manage turnover), and then combine them together into a single factor. We then plug this combined factor into our optimizer to form our final portfolio. This approach gives us more flexibility and significantly lowers the requirements on individual strategies, allowing us to license new strategies faster and in greater numbers than before.

My read is that authors should focus on submitting individual alpha factors versus full-up multi-factor algos (per the architecture https://www.quantopian.com/posts/a-professional-quant-equity-workflow). Q has machinery in place to manage algos not alpha factors, so to write an "alpha factor algo" I think one does this:

  • "Trade" every day (e.g. schedule_function(rebalance, date_rules.every_day(), time_rules.market_close(hours=1)))
  • Don't use the optimizer (e.g. order_optimal_portfolio(opt.TargetWeights(weights), constraints=[])); a minimal skeleton putting these together is sketched below
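A sketch of such an "alpha factor algo" skeleton, combining the two bullets above (the 5-day reversal factor and the pipeline/column names are placeholders, not a recommendation):

```python
import quantopian.algorithm as algo
import quantopian.optimize as opt
from quantopian.pipeline import Pipeline
from quantopian.pipeline.filters import QTradableStocksUS
from quantopian.pipeline.factors import Returns


def initialize(context):
    # Trade once a day, an hour before the close, per the guidance above.
    algo.schedule_function(rebalance,
                           algo.date_rules.every_day(),
                           algo.time_rules.market_close(hours=1))
    algo.attach_pipeline(make_pipeline(), 'factor_pipe')


def make_pipeline():
    universe = QTradableStocksUS()
    # Placeholder factor: 5-day reversal. Swap in the alpha being evaluated.
    my_factor = -Returns(window_length=5, mask=universe)
    return Pipeline(columns={'my_factor': my_factor.zscore()}, screen=universe)


def before_trading_start(context, data):
    context.output = algo.pipeline_output('factor_pipe')


def rebalance(context, data):
    alpha = context.output['my_factor'].dropna()
    # Pass the raw factor through as target weights, with no optimizer constraints.
    weights = alpha / alpha.abs().sum()
    algo.order_optimal_portfolio(opt.TargetWeights(weights), constraints=[])
```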

Then, it's a matter of plugging the backtest results into the most recent version of the notebook here:

https://www.quantopian.com/posts/an-updated-method-to-analyze-alpha-factors

If I'm following correctly, this new notebook is the only tool the Q fund team is using to evaluate algos, and that we are to use it, too? This is where I get kinda confused, since presumably the factor/algo still needs to pass the contest requirements, but I guess they are in the process of being loosened?

Also, the new notebook (it really needs a name...) just outputs a bunch of plots (example attached). Solely by eye-balling the plots, one is supposed to be able to accept/reject a factor (as the Q fund team presumably is doing...seems pretty qualitative, and not at all scalable...at one point, Q was aspiring to 1M users). I'm very confused by this recent guidance from Q. It should be easy peasy lemon squeezy to accept/reject a factor. Whatever the steps, the output is binary (there's no "kinda-sorta" variable type in Python)--either the factor is worth evaluating with other factors (either already in the fund, or to be added to the fund), or not. So, the idea of just staring at plots and making an unsystematic judgement doesn't make sense. There should be a binary output (accept/reject) derived from a quantitative process. If accept, then attempt combination with other factors in Q fund.
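To make the point concrete, an accept/reject gate could be as simple as a boolean function over a handful of quantitative checks. The thresholds below are illustrative placeholders only, not Q's actual criteria:

```python
def accept_factor(mean_ic, ic_p_value, oos_mean_ic,
                  max_style_exposure, avg_daily_turnover,
                  max_fund_correlation=None):
    """Illustrative binary accept/reject gate; every threshold is a placeholder."""
    checks = [
        ic_p_value < 0.05,               # IC statistically distinguishable from zero
        mean_ic > 0.01,                  # economically meaningful IC in-sample
        oos_mean_ic > 0.5 * mean_ic,     # holds up on the hold-out period (over-fitting check)
        abs(max_style_exposure) < 0.20,  # not just a repackaged style/sector bet
        0.05 <= avg_daily_turnover <= 0.65,  # within a stated turnover band
    ]
    if max_fund_correlation is not None:
        checks.append(abs(max_fund_correlation) < 0.5)  # adds something new to the fund
    return all(checks)
```

The exact numbers matter less than the fact that the output is binary and reproducible, which is the point being made above.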

Another thing that has me confused is the last half of the last sentence in Thomas' statement:

Use proper hold-out testing: overfitting is still the biggest risk you can run into and you should be paranoid about it. Something easy to do is to never evaluate your factor on the last 1 to 2 years until you are absolutely happy with it, and then only test it once on the hold-out period. If it fails, you should become very suspicious if you overfit somewhere in your process or if perhaps your factor favors certain market regimes.

All good, except for the last part: "...if perhaps your factor favors certain market regimes." Isn't the whole point to have uncorrelated factors, some zigging and some zagging as market regimes change? This is very confusing, since diversification requires that some factors perform well, while some perform poorly, at any given time. So, if the task at hand now is to find individual alpha factors that can be combined with ones from other authors (and perhaps ones contributed by the Q fund team and maybe some "common" factors, too), trying to get monotonic goodness versus time for a given factor would seem to be counter-productive. In fact, if I understand the new compensation scheme correctly, one could be paid for a factor that has negative net returns (over the short term) as a stand-alone investment, but nevertheless provides a net diversification benefit to the fund (since it is the weight of the factor times the return of the entire fund that determines the factor payout to its author). Favoring/disfavoring certain market regimes would seem to be a good thing, under the new signal combination paradigm.

I get the impression that Quantopian is interested in alpha factors/signals that operate on the time scale of about 5-15 days. This would seem to rule out any slowly varying factors based exclusively on fundamentals, which would yield signals on a quarterly or longer time scale. Effectively, one would put a signal through a band pass filter, with a low frequency cut-off at 1/15 (1/day) and a high frequency cut-off at 1/5 (1/day), prior to evaluating it.

Is this correct? I just don't see how company fundamentals which change slowly could be relevant to the Q fund, if it is to be built on signals that vary on a 5-15 day time scale (unless fundamentals are used to pick stocks that are more sensitive to a factor that varies on a shorter time scale).
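For what it's worth, here's a rough sketch of that band-pass idea, using the difference of two moving averages as a crude filter (signal is a hypothetical pandas Series of one stock's factor value indexed by trading day; the 5- and 15-day windows are just the cut-offs discussed above):

import pandas as pd

def band_pass(signal, fast=5, slow=15):
    # Smoothing with the fast window suppresses day-to-day noise (periods shorter than ~5 days);
    # subtracting the slow moving average removes the slow component (periods longer than ~15 days).
    smoothed = signal.rolling(fast).mean()
    trend = signal.rolling(slow).mean()
    return smoothed - trend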

@Grant, your intuition about the forecast horizon (a time scale of 5-15 days) is probably correct. However, dropping fundamental factors that follow a 63-day (slow-moving) cycle from the equation is probably a mistake. Combining them with OHLCV, sentiment, or other higher-frequency alternative datasets can yield the desired balance. The focus on a decent IR that decays slowly probably reflects the needs of the post-processing stage, where the signal combination of individual factors involves, among other things, netting the EOD holdings of individual signals, factoring in transaction costs, and timing entry/exit trades while keeping risk exposures suited to the fund's trading style and return/risk profile.

@ James -

I suppose one should make a distinction between forecast frequency and forecast horizon. For a given stock, fundamental data will change every 63 days, but when there is a change, it could forecast returns for the next 5-15 days. I get the impression, though, that for the fund, Q wants individual factors that have both a forecast frequency and a forecast horizon of 5-15 days (ideally effective across a big chunk of the QTU stock universe). I guess since one is doing a relative ranking/shuffling of stocks, and if the fundamentals are updated asynchronously (not exactly, since they are all on a quarterly cycle), then one can kinda achieve the desired effect with fundamental signals only. But I think one still has a problem with sufficient data (e.g. a five-year backtest only yields 20 forecasts per stock (not much), whereas forecasting every 5 days results in 252 forecasts per stock--over 10X more data to decide if the factor will be well-behaved in the future).

Why couldn't a factor that combines fundamentals with some other faster changing data (technical or alternative) not offer both the desired horizon and frequency? "Value" is such an example -- while fundamentals change quarterly, valuation changes every tick.

@ Viridian Hawk -

Yes, if fundamentals (reported quarterly) are combined in a factor with higher frequency data, then the turnover requirement can be met. However, I figure that for fundamentals only, the minimum turnover requirement would be hard to meet. Say the turnover is 100% over 63 days. This translates to 1.6% turnover per day, which is well below the required 5% turnover per day per https://www.quantopian.com/get-funded.
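The arithmetic, for what it's worth:

# Full (100%) rebalance once per quarter (~63 trading days):
quarterly_turnover = 1.00
daily_turnover = quarterly_turnover / 63
print(round(daily_turnover * 100, 2))   # ~1.59% per day, well short of the 5% requirement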

I often struggle to meet the min turnover requirement for this reason, but I also understand why they include it. Just my opinion but I do think it could be lowered to something like 3-4% average daily turnover. A pure Fundamentals based strategy scoring and holding most stocks in the QTU should be able to meet this minimum in my experience.

All good, except for the last part: "...if perhaps your factor favors certain market regimes." Isn't the whole point to have uncorrelated factors, some zigging and some zagging as market regimes change?

I don’t think they are saying that it’s a bad thing. That’s not how I read it anyway. I think it’s just something to keep in mind when trying to assess if a factor is overfit or not. As you said all factors will act differently and prefer certain market regimes over others.

Say the turnover is 100% over 63 days. This translates to 1.6% turnover per day

If you include turnover caused by stocks slipping in and out of the QTU, you can achieve 3-4% daily turnover as Joakim was alluding to.

I guess my insight here is that any alpha factor that is only updated quarterly is probably a dud in the context of the Q fund. Yes, one might be able to meet the 5% per day turnover requirement (due to the QTU turnover, I guess), but one would only need to rebalance 4 times per year. I don't think the Q fund is looking for factors that only update quarterly. Granted, some "fundamentals" include share price, but then we are talking about a different beast, since they are not factors that accountants in green eye shades put out every three months.

My reasoning here is that there just aren't enough historical data available to not over-fit, since one only has 4 updates per stock per year. On this basis, fundamental factors that only update quarterly should be excluded.

If you include turnover caused by stocks slipping in and out of the QTU

Also, within the context of a hedged portfolio, the active hedging will create some turnover as well.

there just aren't enough historical data available to not over-fit, since one only has 4 updates per stock per year.

Agreed. At least the bar should be set much higher and more cautiously. Four updates per year * >2000 stocks in the QTU = 8000 datapoints per year. 10-year backtest brings that up to 80k data points.

@Grant,

"Purely" fundamental factor/s that is updated only quarterly will probably fail the 5% per day turnover contest requirement. There is a current disconnect between the contest requirements and the shift to signal combination. I'm sure Q team is aware of this and will probably address this in the near future. In the new scheme of things and under the new guidance provided, fundamental factor alphas with below 5% daily turnover may still have its place in the signal combination process. We'll have to wait for confirmation from Q team on this.

@Viridian Hawk,

Also, within the context of a hedged portfolio, the active hedging will create some turnover as well.

Can you please explain or elaborate how active hedging creates additional turnover?

There is a current disconnect between the contest requirements and the shift to signal combination.

I'm kinda confused about the "shift to signal combination" in the guidance I've gotten from the forum and directly from Q support. It was first announced to the masses on May 17, 2019 (see https://www.quantopian.com/posts/quantopian-business-update). The signal combination approach has been used for over a year ("Since starting to convert to the signal combination approach in August of 2018..."). At first, my impression was that the focus would shift to funding individual alpha factors (per https://www.quantopian.com/posts/a-professional-quant-equity-workflow), and thus many more algos would be funded ("This “signal combination” approach allows us to incorporate many more algorithms...). The concept of funding alpha factors individually was echoed recently by Thomas W., "This approach gives us more flexibility and significantly lowers the requirements on individual strategies, allowing us to license new strategies faster and in greater numbers than before" (see https://www.quantopian.com/posts/how-to-get-an-allocation-in-2019). My read is that each alpha factor should get perhaps 10X less capital than the algos in the fund (which are at $5M-$50M). Q hasn't mentioned this, but it goes without saying, if the focus is now on funding individual alpha factors, versus multi-factor algos. Backtesting at $10M doesn't really make sense for individual factors (and commissions and slippage don't matter, either, since we are no longer simulating trading, but rather generating an EOD alpha vector for combination into the Q fund).

I would expect Q to be world experts by now in assessing alpha factors, and the signal combination approach has been in effect for a year. Why don't we have revised requirements and an alpha factor funding frenzy? The latter could be explained by a lack of available capital, but not knowing how to write requirements for evaluating alpha factors is really unacceptable--just write them down.

There's also confusion about whether Q is in the business of hiring quants or licensing alpha factors, which might explain the apparent lack of an alpha factor funding frenzy. The additional constraint of the quant having the right "resume" could be bogging things down.

Or maybe there are relatively few active, capable Q participants?

Hmm?

Can you please explain or elaborate how active hedging creates additional turnover?

Take dollar-neutral hedging, for instance. Say you're long $100m and short $100m. The next day your longs fall 5% while the stocks you're short rise 5%, leaving you long $95m and short $105m. So you close $5m worth of shorts to add $5m back to the long side in order to arrive again at a dollar-neutral portfolio. Just this one risk constraint has caused 5% portfolio turnover in one day in this example. And it propagates throughout all the risk factors. Exposure to some, such as size or value, won't change significantly during the quarter, whereas exposure to faster-moving factors such as short-term reversion or volatility may change often. Also, even on slower-moving (or totally stable) risk factors, as you enter or exit new positions, weights on existing positions may be shifted in order to maintain sector-neutrality, beta-neutrality, size-neutrality, etc.
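A minimal sketch of that rebalancing arithmetic (the 5% divergence is hypothetical, and turnover here means traded notional over gross book value):

def dollar_neutral_rebalance_turnover(long_value, short_value):
    gross = long_value + abs(short_value)
    # Move half of the long/short gap from the heavy side to the light side.
    transfer = abs(long_value - abs(short_value)) / 2.0
    # Closing shorts and buying longs (or vice versa) both count as trading,
    # so the traded notional is twice the transfer.
    return 2.0 * transfer / gross

print(dollar_neutral_rebalance_turnover(95e6, -105e6))   # 0.05, i.e. 5% turnover in one day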

It's always interesting that Q has guided us to maintain consistent low risk exposures and consistent turnover rates -- but you cannot have both as volatility fluctuates in markets.

@VH, Thanks for your explanation. At the portfolio construction level, this is likely to happen. However, at the individual algo level, where the current guidance is to run the alpha factor through TargetWeights without any constraints over the entire QTU universe, this is less likely to happen. Is this correct?

@Grant this has been a great post and gotten a lot of the community engaged. Kudos!

One clarification that should be made however is that the current fund approach is NOT looking for individual raw factors. "We instead treat the algorithm’s positions as a signal" and the fund then combines these "signals". (https://www.quantopian.com/posts/quantopian-business-update). Part of the confusion perhaps stems from the loose, and varying, definition of a factor. Thomas stated it correctly "we do not view your algorithm as something that emits trades we execute, but rather as a 'factor' ". (https://www.quantopian.com/posts/how-to-get-an-allocation-in-2019). Notice that 'factor' is in quotes. The intention isn't that the individual raw factors are somehow extracted. The algorithm IS the factor.

Much of the discussion in this post has somehow gotten focused on the individual "raw factors". Those are great discussions for aspiring authors in crafting their algos but statements such as "the focus is now on funding individual alpha factors, versus multi-factor algos" or "my read is that authors should focus on submitting individual alpha factors versus full-up multi-factor algos" are not correct. The focus is still on an algo.

Just wanted to make that clarification. Again, great discussion to everyone @here.

@Grant,

I empathize with your confusion; I was confused too for quite a while, but I'm slowly starting to understand the process. Let's try to sort things out, starting with terminology/semantics:

if the focus is now on funding individual alpha factors, versus multi-factor algos

To me individual alpha factor could be = quality + value + growth + mean reversion + moon phases
To others this could be interpreted as a multi-factor algo.
Are you interpreting an individual alpha factor as a single factor, say, just quality or something else?

P.S. Didn't see Dan's post while typing this but it basically echoes what I'm trying to say.

Thanks Dan -

I got the distinct impression that you no longer want authors combining uncorrelated alpha factors of many "flavors" into a full-up algorithm, per https://www.quantopian.com/posts/a-professional-quant-equity-workflow, but prefer to do the combination of the alpha factors yourselves. Per the guidance on https://www.quantopian.com/posts/how-to-get-an-allocation-in-2019, the concept of simulating trades via an algorithm backtest has been replaced by the hack of using the backtesting engine as an individual alpha factor generation engine (yes, there could be multiple alpha factors combined within the user algo, but per offline guidance from Thomas W., the combined factors should all be of the same "flavor", e.g. all of the same underlying economic rationale but on different time scales).

I think what you are doing, if I understand correctly, is a step in the right direction, but your roll-out is still very confusing. I would have expected new requirements by now, and a significant reduction in the capital per "algo" (individual alpha factor).

The sooner you lock down a set of new "algo" requirements the better off we all will be, since it takes at least 6 months out-of-sample to get feedback from you (although as far as I know, there's still no way to just submit an algo and get assured feedback...it seems to be a "don't call us, we'll call you" approach).

@ James -

My interpretation is Q wants each "algo" to be an individual alpha factor with a specific economic rationale. If users come up with multiple factors, each with its own economic rationale, then they should be submitted separately. The problem is that the requirements still favor writing multi-factor algorithms, where various economic rationales are brought into play via combination of the multiple factors.

My understanding is Q does not want what you describe:

quality + value + growth + mean reversion + moon phases

The preference would be for you to submit 5 separate algos, each with its unique economic rationale.

If I were the one tasked with doing the Q fund alpha combination (presumably Thomas W. has a hand in it), I'd sure want the alpha factors broken up, and each to do a weighting/ranking all of the stocks in the QTU (unless there was a compelling reason to limit the universe). Then the Q fund alpha factor combination engine can do its job with as much information and computing horsepower as is needed.

@Grant,

I'm not sure if Q ..."wants each "algo" to be an individual alpha factor with a specific economic rationale". But since you have direct offline access to Thomas Wiecki and this was what was conveyed to you, I yield to your interpretation, for now. My interpretation of what Q is looking for is an alpha factor (personally, I'd rather call it a signal) that is unique and uncorrelated to what they already have and/or what others already commonly have. Economic rationale, or any other rationale, is only consequential for explainability. Uniqueness can be achieved in many different ways, as in my example above. I purposely added moon phases to other commonly used factors to illustrate that it can perhaps produce a unique factor (probably not in the real world). Other ways to produce uniqueness include looking at the specific content datasets Q recently released, like Short Volume, Estimates, and Robinhood Tracking, which can be used on their own or combined with others. Alternative datasets like sentiment data, Google Trends, weather, etc. are also good sources of uniqueness. At least this is how I read it between the lines.

@Grant,

I'm not sure Q necessarily always knows the best way to combine individual factors. I'm sure their way is really good, but maybe you or others in the community can come up with even better ways, e.g. using machine learning (e.g. what Numerai does)? If so, I would think that might have value as well, which they might possibly want to license?

@ James -

The guidance (I think) is for authors not to cobble together lots of unique alpha factors, but to submit them separately (where presumably, each unique alpha factor would have its own unique "economic rationale"). I don't see how Q is going to accomplish this, though, since the incentives seem to be for authors to do combination to improve Sharpe Ratio. What would be the incentive to submit individual unique factors?

The other thing is that the signal combination compensation model is opaque to me. It all depends on how the signals are being combined. Does the uncorrelated alpha from a given signal get projected out, and then weighted (this would seem to be the way to go)? Or are the alpha factors simply combined via a linear combination (e.g. weighted sum of z-scores), or something else? And then everything gets put through an optimizer and is risk managed, so the weight information is lost? All kinda murky. Q probably won't say a word, since it gets into their proprietary business of fund management.
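Just to pin down what I mean by the "weighted sum of z-scores" alternative, a minimal sketch (the factor Series and weights are hypothetical; this is not necessarily what Q actually does):

import pandas as pd

def combine_alphas(alphas, weights):
    # alphas: dict of {name: pandas Series of factor values indexed by asset}
    # weights: dict of {name: float}
    # Z-score each factor cross-sectionally, take the weighted sum, then re-z-score the result.
    combined = sum(weights[name] * (a - a.mean()) / a.std()
                   for name, a in alphas.items())
    return (combined - combined.mean()) / combined.std()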

@Grant,
What I'm taking from this useful discussion is that Q no longer cares about all the backtest data returned by the line:

bt = get_backtest('5d6867017b54c36109518ded')  
# just the EndOfDay positions one gets from:  
bt.positions  
# or  
bt.pyfolio_positions  
# a single day's output would be:  
bt.pyfolio_positions.iloc[0,:].dropna()

This is the 'factor' they are talking about...just the meat...forget the potatoes AND the recipe!
They can create their own EOD-factor from that exhaust and analyze and recombine it as they see fit.

alan

As I have mentioned previously (see https://www.quantopian.com/posts/how-to-accept-slash-reject-alpha-factors#5d066cb8390c0d00490f5d7e), Quantopian is trying to maximize its payoff matrix: \(\sum (H \cdot \Delta P) \).

Presently, Q has some 25 funded trading strategies: \(\sum (H \cdot \Delta P) = \sum _i^{25} (H_i \cdot \Delta P)_i = \sum _i^{25} (w_i \cdot (H_i \cdot \Delta P)_i) \). Each should be weighted by their respective productivity (performance). If your trading strategy is not producing any money, it should not be in the list, and Q should not be paying for a service it is not receiving or that is detrimental to its well being. Therefore, it is no surprise that there is a contest cutoff point for any trading strategy that is losing money: \(\sum (H_i \cdot \Delta P)_i < 0\). It is thrown away even before it could ever be considered for an allocation. And I suspect that in the NDA there is a clause to that same effect.

This also implies that the weights of the contest survivors are in decreasing order going from top to bottom performer:
\(w_1 > w_2 > \cdots > w_{25}\), just as with the strategies' performance levels: \(\sum (H_1 \cdot \Delta P)_1 > \sum (H_2 \cdot \Delta P)_2 > \cdots > \sum (H_{25} \cdot \Delta P)_{25}\).

The sum of the weights remains equal to 1.0: \(\; \sum _i^{25} w_i = 1\; \) since no leveraging is being used. The weights or "factors" are easy to determine: \(\displaystyle{ \; f_i = w_i = \frac{\sum (H_i \cdot \Delta P)_i}{\sum (H \cdot \Delta P)}} \).
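A toy numpy rendering of that weight definition, with made-up P&L numbers just to show the bookkeeping:

import numpy as np

# Hypothetical total payoff sum(H_i . delta P)_i for each funded strategy:
strategy_pnl = np.array([4.2e6, 2.5e6, 1.1e6, 0.7e6])
weights = strategy_pnl / strategy_pnl.sum()   # f_i = w_i from the formula above
print(weights, weights.sum())                 # ordered by productivity, summing to 1.0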

These weights become strategy "factors", allocating funds according to merit, scores, or performance level--whatever set of criteria Q wants to use. Still, for them, it is the management of their overall payoff matrix: \(\sum (H \cdot \Delta P) \), which is now a 3-dimensional array (strategies, stocks, prices), while the strategy developer is still battling a 2-dimensional array (stocks, prices) based on whatever trading strategy he/she wanted to implement.

The inside or code of a particular trading strategy for Q becomes secondary. However, its outcome, its productivity becomes the focal point. And there, it is sufficient to look at the EOD holding matrix \(H_i\) to determine what was bought or sold to follow in the footsteps of those trading strategies and mirror any taken trades.

This boils down to: allocate the most money to the most productive strategies, and less to the others, in order of outcomes and any other desired criteria of interest. It is probably why one of the strategies has a $50 million allocation compared to others that have only $10 million, and some probably less. On the "less" part, I have not seen any comments from Q confirming that it is so. However, we do have a comment on the $50 million allocation to a single strategy. I bet it is also the most productive.

Managing a 3-dimensional array offers its own set of problems. Especially when some of the contest restrictions are considered at face value. One of which is the use of QTU. This says that all strategies have to use the same set of stocks for their respective stock selection process. And furthermore, full exposure to QTU is another requirement. This says a lot in terms of EOD holding redundancies.

For example, say that the stock selection process uses the highest capitalization as a ranking criterion. Then every strategy using it would have the same set of stocks to deal with, and in the same order. There is not much diversity in that. You would still get some, since not all strategies use the same number of stocks to trade, and therefore their respective stock lists would be of different lengths. The longer lists would include less and less prominently ranked stocks. That is not necessarily a bad thing, since it tends to reduce overall portfolio volatility. But, on the other hand, it also tends to reduce average performance levels by the very nature of the ranking mechanism itself.

Nonetheless, the top 100 or so would be the same for about every strategy. Making these strategies variations on the same theme. And all they would be able to extract from those top 100 stocks would depend on the internal strategy factors used and their respective trade mechanics. But then again, the redundancies would be considerable. You could have all 25 strategies dealing with the same top stocks, or a large portion of strategies trading a large portion of the same stocks all on maybe similar or slightly different factor criteria. Again, not that conducive to strategy diversity.

Q's move to treat strategies as "factors" as per the first equation is the reasonable way to go. But, even there, you should not expect high real alpha scores, meaning that \(\sum (H \cdot \Delta P) > \sum (H_{spy} \cdot \Delta P) \) might just be a forward wish and not some coming reality.

@Guy,

Pardon me for saying this, but I think your perception and interpretation of the Q Fund's portfolio construction through signal combination is way off on a tangent.

These weights become strategy "factors", allocating funds according to merit, scores or performance level

You've oversimplified the complex process of portfolio construction by saying that "...Quantopian is trying to maximize its payoff matrix: ∑(H⋅ΔP)." In a signal-combination scheme of portfolio construction, there are a lot more considerations to take into account. First among the many is how to combine the individual signals (which are presumably uncorrelated) in a way that not only maximizes returns but also fits the return/risk profile of Q's intended strategy, which does not necessarily mean beating the SPY benchmark. There are also trade execution considerations, like netting the EOD holdings of the individual signals for trading efficiency and timing. There are also regulatory considerations. So it is not as simple as you put it in one equation.

@ Alan -

I recommend having a close look at:

https://www.quantopian.com/posts/how-to-get-an-allocation-in-2019

My interpretation is that the trading backtester is to be hacked and used as a daily alpha factor generation engine, where a set of weights across the QTU is delivered EOD via:

order_optimal_portfolio(opt.TargetWeights(weights), constraints=[])  

As I understand, Q wants the weights EOD; all of the portfolio optimization, risk management and trading simulation jazz is superfluous (as is the slow minutely backtester...what a waste...I wonder who got stuck with the bill for that one? Too bad the money couldn't have been put toward funding more algos...).

@Guy, your strategies must have received the highest allocation then, correct? ;)

@Joakim, none of my trading strategies adhere to the contest rules except for the positive profit generation thing. Unless Q changes its contest rules and selection criteria, my kind of trading strategy will have to wait or find better venues. ¯_(ツ)_/¯

@Grant,
I'm interpreting what @Thomas said differently. I parse:

Our new investment approach is called "factor combination", where we do not view your algorithm as something that emits trades we execute, but rather as a "factor" (i.e. a daily scalar score for every stock in your universe that should be predictive of future returns). While we can't directly observe the underlying factors in your algorithm, we use the end of day (EOD) holdings of your algorithm as an approximation. In this analysis, we completely ignore individual trades and also recompute your returns based on your EOD holdings.

to mean that the only thing that matters is the EOD holdings, which is an approximation of "a daily scalar score for every stock in your universe that should be predictive of future returns". This does not preclude using the minutely backtester to get to your final EOD position. It does preclude trying to use the backtester to trade in-and-out intradaily and just end up with a boatload of returns, yet no stock predictions for returns for the next day.
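If that reading is right, the "factor" Q extracts is essentially just the EOD holdings rescaled to weights. A minimal sketch of that approximation, reusing the bt object from my snippet above (research environment only; the 'cash' handling is an assumption about the pyfolio positions format):

# One day's EOD positions (dollar values per asset):
positions = bt.pyfolio_positions.iloc[0, :].dropna()
if 'cash' in positions.index:
    positions = positions.drop('cash')   # keep only the equity positions
# Rescale so gross exposure sums to 1; this vector of signed weights is the
# "daily scalar score for every stock" approximation of the factor.
weights = positions / positions.abs().sum()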

Finally, I see the

order_optimal_portfolio(opt.TargetWeights(weights), constraints=[])  

guidance less onerously than you do. I believe @Thomas said that it was mostly to prevent the good old black-box (that is not open sourced...to Q's detriment...) "universal-optimizer" from setting all weights to equal weights in a lot of cases....hehehe...unintended consequences.

To me, they are realizing that all the "portfolio optimization, risk management and trading simulation" features in their old contest/fund are too confusing and probably impossible to fuse together. On the other hand...just predict the f***'n future one day at a time, one portfolio at a time and all is well! Of course, you may need the "portfolio optimization, risk management and trading simulation" features to help you with that...Bobby Axelrod didn't say it was going to be easy!

alan

@James, what you are saying was already included. Maybe I did not express it clearly enough. No matter what they do or how they intend to do it, the end result will be "only" trying to maximize their 3-dimensional payoff matrix \(\sum (H \cdot \Delta P) \), this, whatever set of criteria they want to use. And, it is as simple as that.

You could add a whole set of criteria to the equation, such as: \(\sum (H \cdot C_Q \cdot \Delta P) \), giving, for instance, an ordered set of strategies based on some lower volatility measure, higher Sharpe ratios, or whatever other combined set of criteria they might prefer. It would only change the priority settings or their desirability ordering scheme including their allocated weights. Not necessarily improving performance.

It would be like setting some desired criteria as an added controlling scaling "factor" which would tend to change the ordering and prioritizing of strategies within the 3-dimensional portfolio. \(\sum (H \cdot \Delta P) = \sum _i^{25} (c_i \cdot w_i \cdot (H_i \cdot \Delta P)_i) \)

The more restrictions you put on a trading strategy, the more you reduce its trade or profit potential. For instance, at times I see strategies with multiple exit targets: say 1%, 5%, and 10%. While there is nothing wrong with coding such a thing if that is what somebody wants, it has little value beyond the 1% target, since before any higher exit can be touched, the 1% target will already have triggered the exit, making the others totally redundant.

Even if Q's objective is to maximize its payoff matrix, it should be said that most of it is wishful thinking. What they will end up with is some performance level that will be below the benchmark, albeit, with a lower volatility measure. There is evidently an opportunity cost for this. I consider it very high when viewed from a long-term perspective within a decaying factor environment and decaying CAGR.

If your trading strategy's average profit target is some 0.07% on some $8,000 average bet size, as I have seen recently, it will take quite a number of trades to make it worthwhile, even to sustain a long-term single-digit CAGR.

@ Alan -

To me, they are realizing that all the "portfolio optimization, risk management and trading simulation" features in their old contest/fund are too confusing and probably impossible to fuse together. On the other hand...just predict the f***'n future one day at a time, one portfolio at a time and all is well! Of course, you may need the "portfolio optimization, risk management and trading simulation" features to help you with that...Bobby Axelrod didn't say it was going to be easy!

My understanding is that previously, each algo traded as a standalone strategy. The Q Fund, as such, was a kind of fund-of-funds, with each conforming to the full architecture of a standalone hedge fund, with its tight requirements. So, the backtester/real-money trading platform was just a plug in--run the algos, send the trades, and do the bookkeeping (just like with Interactive Brokers and Robinhood...sigh). Now the need is for daily EOD alpha factors to be combined en masse per the architecture; the combination and portfolio construction steps are handled by Q across all daily EOD alpha factors (output by the hacked minutely trading platform).

In theory, this should connect Q back with its original "crowd-sourced hedge fund" concept that users could contribute all sorts of diverse mini-/micro-/nano-alpha to the fund (long-only/short-only/niche sector/whatever), but we shall see how this plays out. They are still advertising "over 230,000 members" yet have funded maybe 25 members since they opened shop in 2011. Get It Together Q team or we are gonna lose interest.

I updated my original post above to include "uniqueness" of the alpha factor as a consideration (thanks to James Villa introducing the concept to this thread, https://www.quantopian.com/posts/how-to-accept-slash-reject-alpha-factors#5d6be3a1c0838d004787cb00).

One thought is that an alpha factor needs to be unique at a macro level. This, I suppose, is the intent of the Q risk model where common risk factors are controlled, via the sector and style risk factors.

This article seems germane:

http://www.panagora.com/assets/PanAgora-Quant-Meltdown-10-Years-Later.pdf

Grant,

Thanks for sharing this article. It's a good depiction of how market participants interact with the financial markets as they evolve. I have been a market observer since the late 70s, and over the years I have come to view the financial markets as a living ecosystem with an auctioning framework, where market participants buy/sell assets, compete amongst themselves, and adapt to changing environments, all with the general objective of making a profit relative to the risks they're willing to take in a zero-sum game. The ecosystem itself does not follow any laws of physics or nature, yet empirically it exhibits some general patterns, much of which are attributable to the participants' behavior, such as fear/greed and herding effects.

Much of this behavior is manifested in how participants measure factors they think have predictive power for future returns (financial modelling), whether fundamentally based, price/volume based, or alternative-data based. The herding effect comes in when participants tend to follow whatever works at a point in time (natural human behavior). The caveat is that much of the industry is stuck in the linear world, mainly because it is easier to explain within that context and thus accepted as "common". Take a fundamental factor such as earnings: a company that exhibits progressive earnings over time will have a tendency toward price appreciation. Take a technical factor such as momentum: a stock price that exhibits continued upward appreciation will have a tendency to continue. Industry analysts keep reverting to these common measures because they are easily explainable, or so it seems.

Unfortunately, much of the empirical work on the distribution of returns and price evolution points to nonlinearity and nonstationarity: not a normal distribution, but something more like a Pareto-Levy distribution with fat tails. So it's like fitting a square peg into a round hole. Not to say the linear tools are useless, but they are not the most appropriate instruments for measurement. Enter the advent of big data, AI/ML, and increased computational power, which the industry is slowly and cautiously adopting in search of unique methodologies and techniques of measurement and prediction. Big data paves the way to mine data that is less frequently looked at than the mostly hacked-over fundamental/technical indicators. AI/ML techniques offer a differentiated way to look at nonlinear relationships and nonstationary properties. Increased computational power makes this all possible. And this is perhaps a possible explanation for the anomaly of outliers in the industry such as RenTec, which has consistently beaten the markets over the long term: its ability to be unique, adapt, and evolve within the ecosystem!

@ James -

One thing I realized is that there is a proliferation of tilt/factor/smart beta investments out there (e.g. momentum, which is a Q style risk factor). I'm thinking that there must be some ranking of money explicitly applied to these various popular factors via public investments (e.g. ETFs, mutual funds, etc.). This would give some indication of the crowd/herd behavior for these "common" factors (upon which a contrarian strategy could be developed).

Grant,

A very keen observation. Herding behavior caused by FOMO (Fear Of Missing Out) is perhaps the most common behavioral response out there. A carefully crafted and well-timed contrarian strategy should be very effective.

One thought/observation is that the alpha factors/signals sought by Quantopian are analogous to the various "factor ETFs" on the market (and all the other flavors of niche/tilt/smart beta/targeted/etc. investments out there). In the ETF sphere, for example, there are quite a few ETFs that have 10X less capital than the typical lowest allocation to an algo/signal in the Q fund (which is ~$5M, as I understand):

https://etfdb.com/screener/#sort_by=assets&sort_direction=asc

Seems to be something for everyone out there. For example, worried about trade wars? Invest in Innovation Alpha Trade War ETF (TWAR). I'm thinking there should be a Border Wall ETF (WALL), too.

One source of ideas for Q factors would be to review all of the "factor ETFs" to see if any of them might make the cut for the Q fund, and then try to replicate them on Q. Of course, the factors are in the public domain (although perhaps not specifically defined via an algo); maybe that would make them "common" and not of interest to the Q fund, since presumably they could be obtained by investors more cheaply than through the Q fund, or they could suffer from "alpha decay", or they are already represented in the Q fund or in the portfolios of prospective customers, or whatever the rationale for excluding "common" factors may be.

@ Anthony -

Thanks for the input. Presumably the Q style risk factors are examples of alpha factors based on economic rationale, and they are consistent, in the sense that at any point in time we could show the phenomenon of "one group of market participants consistently paying another group" (Otherwise, they wouldn't be risk factors, since they wouldn't be factors, they'd just be noise over the long term (tens of market/economic cycles?)). So, in the case of the "common" style risk factors defined by Q (momentum, market cap, value, mean reversion, and volatility), what are the corresponding economic hypotheses? And how would one go about showing that they are indeed factors and not just noise?

Also, not being a finance guy, I'm kinda unclear how, for example, if I bought shares of iShares Edge MSCI USA Momentum Factor ETF (MTUM) I would be paid by another group in the market? Obviously, I would not be paid by them directly, but somehow indirectly? What are the mechanics of the transaction? And would I always be paid by them, regardless of the relative performance of MTUM, or only when it out-performs the market (e.g. SPY or VTI)?

Reference: https://www.quantopian.com/risk-model

@ Anthony -

I had a go at implementing Amihud, but found that it was highly correlated with Size. That could be an example where introducing Size as an additional factor makes sense in terms of the original economic hypothesis --- perhaps splitting the QTU into 2 quantiles in the first stage.
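For concreteness, here is a minimal sketch of one common Amihud-style definition as a pipeline CustomFactor (not necessarily the exact version I ran; the 21-day window and the handling of zero-volume days are arbitrary choices):

import numpy as np
from quantopian.pipeline import CustomFactor
from quantopian.pipeline.data import USEquityPricing

class AmihudIlliquidity(CustomFactor):
    """Mean of |daily return| / dollar volume over the lookback window."""
    inputs = [USEquityPricing.close, USEquityPricing.volume]
    window_length = 21

    def compute(self, today, assets, out, close, volume):
        daily_returns = np.abs(np.diff(close, axis=0)) / close[:-1]
        dollar_volume = close[1:] * volume[1:]
        # Avoid dividing by zero on no-volume days.
        dollar_volume = np.where(dollar_volume > 0, dollar_volume, np.nan)
        out[:] = np.nanmean(daily_returns / dollar_volume, axis=0)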

Looking at Vanguard U.S. Liquidity Factor ETF (VFLQ) (https://investor.vanguard.com/etf/profile/VFLQ), I see:

The portfolio includes a diverse mix of stocks representing many different market capitalizations (large, mid, and small), market sectors, and industry groups.

I suppose they are basically isolating Liquidity from Size and Sector by setting constraints (or, as you suggest, by running the factor across slices of the market separately and then combining). Interestingly, they make a distinction between "market sectors" and "industry groups"--what does that distinction mean?

I'm rather confused by Thomas W.'s guidance here: https://www.quantopian.com/posts/how-to-get-an-allocation-in-2019. It would seem to suggest that Q no longer wants authors to constrain alpha factors; they'd like to do this at the fund level (presumably after alpha combination, in the Portfolio Construction step, as defined on https://www.quantopian.com/posts/a-professional-quant-equity-workflow). This would say just feed in the unconstrained Liquidity factor, combine it with other factors, do the Portfolio construction, and see if there is an improvement, per some overall figure of merit that would presumably include Sharpe ratio (I guess this would be an iterative process, to determine the N optimal weights in the combination, alpha = w_0*alpha_0 + w_1*alpha_1 + ... + w_N*alpha_N). Then, the author gets paid according to his weight in the combination (even though I don't think there's any way to know how much a given alpha actually contributes to the overall return of the fund after propagating through the optimangler).

So, perhaps the best procedure is first to determine if alpha can be isolated in a Liquidity factor, but then to "submit" an alpha factor for evaluation that has not isolated the Liquidity factor; it is just a raw ranking across the QTU. Everything will come out in the wash once it is put through the Q fund architecture.

Reading between the lines, Q is struggling with how to provide feedback to authors on what makes a good alpha factor, since, if I'm understanding correctly, they'd like it raw and unconstrained (across the entire QTU), and not necessarily compliant with the requirements on https://www.quantopian.com/get-funded.

@ Antony - Thanks.

By the way, I figure that more often than not, recently created whiz-bang ETFs based on thin air factors are gonna tend to mean-revert downward in price, due to over-fitting (in some cases, grossly, as we saw for the signals offered by Alpha Vertex on Quantopian).

It'd be really handy if Q had EOD ETF holdings as a data set. I wonder why they don't?

https://www.quantopian.com/docs/data-reference/morningstar_fundamentals#morningstar-industry-code

@ Grant, in regard to sectors & industries, I believe what you'd need is something like RBICS Focus.

Fields
The RBICSFocus dataset has the following fields (accessible as BoundColumn attributes):

Sectors:
l2_id (dtype str) - Sector classification code based on business focus.
l2_name (dtype str) - Sector classification name based on business focus.

Industries:
l3_id (dtype str) - Subsector classification code based on business focus.
l3_name (dtype str) - Subsector classification name based on business focus.

As I understand it, Quantopian's objectives can all be expressed in broad lines of thought. Their prime objective is to maximize their multi-strategy portfolio(s). I view it as a short-term operation: trying to predict some alpha over the short term (a few weeks), where long-term visibility is greatly reduced, adopting a "we will see what turns out" attitude with a high degree of long-term uncertainty.

I would prefer that this portfolio optimization problem be viewed as a long-term endeavor where their portfolio(s) will have to contend with the Law of diminishing returns (alpha decay), that the portfolios compensate for it or not. It is a matter of finding whatever trading techniques needed or could be found to sustain the exponential growth of their growing portfolio of strategies.

A trading portfolio can be expressed by its outcome: \(\text{Profits} = \displaystyle{\int_{t=0}^{t=T} H(t) \cdot dP}\).

The integral of this payoff matrix gives the total profit generated over the trading interval (up to terminal time \(T\)), whatever the trading methods and whatever the portfolio's size or depth. That is to say, it gives the proper answer no matter the number of stocks considered and over whatever trading interval, no matter how long (read: years and years, even if the trading itself is done daily, weekly, minutely, or whatever).

The strategy \(H_{mine}\) becomes the major concern, since \(\Delta P\) is not something you can control; it is just part of the historical record. However, \(H_{mine}\) will fix the prices at which the trades are recorded, and all those trading prices become part of the recorded price matrix \(P\).

You can identify any strategy as \(H_{k}\) for \(k \in \{1, \dots, K\} \). And if you want to treat multiple strategies at the same time, you can use the first equation as a 3-dimensional array where \(H_{k}\) is the first axis. Knowing the state of this 3-dimensional payoff matrix is easy: any entry is time-stamped and identified by \(h_{k,d,j}\), giving the quantity held in each traded stock \(j\) within each strategy \(k\) on day \(d\).

How much did a strategy \(H_{k}\) contribute to the overall portfolio is also easy to answer:

\(\quad \quad \displaystyle{w_k = \frac{\int_{0}^{T} H_{k} \cdot dP}{ \int_{t=0}^{t=T}H(t) \cdot dP}}\).

And evidently, since \(H(t)\) is a time function that can be evaluated at any point over its past history, the weight \(w_{k}\) of strategy \(k\) will also vary with time.

Nothing in there says that \(w_{k}\) will be positive. Note that within Quantopian's contest procedures, a non-performing strategy (\(w_{k} < 0 \)) is simply thrown out.

Understandably, each strategy \(H_{k}\) can be unique or some variation on whatever theme. You can force your trading strategy to be whatever you want within the limits of the possible, evidently. But, nonetheless, whatever you want your trading strategy to do, you can make it do it. And that is where your strategy design skills need to shine.

Quantopian can re-order the strategy weights \(w_{k}\) by re-weighing them on whatever criteria they like, just as in the contest with their scoring mechanism, and declare these new weights as some alpha-generation "factor": \(\sum_{k=1}^{K} a_k \cdot w_{k}\). And this will hold within their positive-strategies contest rule: \( \forall \, w_k > 0\).

Again, under the restriction \(\, w_k > 0\), they could add leveraging scalers based on other criteria and still have an operational multi-strategy portfolio: \(\sum_{k=1}^{K} l_k \cdot a_k \cdot w_{k}\). The leveraging might have more impact if ordered by their expected weighing and leveraging mechanism: \(\; \mathsf{E} \left [ l_k \cdot a_k \cdot w_{k} \right ] \succ l_{k-1} \cdot a_{k-1} \cdot w_{k-1} \). But this might require that their own weighing factors \(\, a_k \) offer some predictability. However, I am not the one making that choice, having no data on their weighing mechanism.

Naturally, any strategy \(H_{k}\) can use as many internal factors as it wants or needs. That does not change the overall objective, which is to have \(\, w_k > 0\) in order to be considered in the contest at all, and high enough in the rankings to be considered for an allocation.

Evidently, Quantopian can add any criteria it wants to its list, including operational restrictions like market-neutrality or whatever. These become added conditions that strategy \(H_{k}\) needs to comply with; otherwise, again, it might not be considered for an allocation.

The allocation is the real prize; the contest reward tokens should be viewed as such, a small "while waiting" reward for the best 10 strategies in the rankings: \( H_{k=1, \dots, 10}\) out of the roughly 300 \(H_k\) participating.

@ Antony -

I don't think the fund would be averse to something well-known like the liquidity factor you mentioned elsewhere.

I guess I'd think just the opposite, since presumably it is in the "common" risk factor category, given that it is available cheaply via retail factor ETFs (and presumably in institutional products, as well). I'd think that Point 72 can get their liquidity factor fix more cheaply elsewhere.

Also, I think there are something like 40 factors in the Q fund already. I'd think that at least one of them would represent liquidity, but maybe not? I guess all that one can do is cook something up, submit it to the contest, wait six months, and then pester Q to have a look at it.

@ Guy -

Determining how much a given alpha factor contributes to the overall return may not be possible directly. Generally, in the Alpha Combination step, the alpha vectors can be combined in any way one chooses, and post-combination there are no labels that propagate through the architecture to attribute the relative contribution of a given factor to the total return; after Portfolio Construction and Execution, how would one sort out the actual contribution of a given factor to the whole?

It was possible to determine the relative weights under the prior fund-of-funds (or fund-of-trading-algos, more precisely) approach to the Q fund. Each algo traded separately, and the total return was simply the sum of the individual algos. As I understand, the new signal combination approach follows the architecture that was previously applied to each individual algo.

So, if one noodles on this a bit, then the question arises, how are the weights determined for compensating authors? If you have insights, feel free to share them on:

https://www.quantopian.com/posts/how-are-weights-determined-in-new-signal-combination-compensation-scheme

A good alpha factor is one that has a relatively high compensation weight (which determines the payout to the author) but how does the compensation weight relate to the fund construction--is it a parameter, or is the compensation weight determined separately from the fund construction?

Reference: https://www.quantopian.com/posts/a-professional-quant-equity-workflow

@Grant, you say: "Determining how much a given alpha factor contributes to the overall return may not be possible directly." True.

If Quantopian looked at individual factors from within a trading strategy, I would consider that the same as looking at the code, and that would certainly not be nice. However, they could look at that stuff only if the author gave permission; remember, it is your IP. I have never given that permission, no matter what technical problems I have encountered. I either solved them myself or ignored them and moved on.

But, what is discussed is that Quantopian is making a "factor" out of the output of a whole trading strategy. And it looks only at the inventory held at EOD. This is quite different. Especially since they also want to reduce trade redundancies and conflicting orders within their chosen set of strategies.

A strategy as a "factor" could be described as a time function: \(f_k(t) = w_k(t) = \int_{0}^{T} H_{k} \cdot dP\). Summing strategies would give \(\sum_{k=1}^{K} \int_{0}^{T} H_{k} \cdot dP\). And this is where you could extract the new portfolio weights:

\( \quad \quad \displaystyle{ f_k(t) = w_k(t) = \frac{\sum (H_k \cdot \Delta P)}{\sum_{k=1}^{K} \int_0^T H_k \cdot dP} } \)

These strategy weights have nothing to do with the internal factors used within a trading strategy. However, if you do generate alpha using strategy \(H_{k}\), it should be reflected in the weight Quantopian attributes to that strategy within its portfolio of strategies, \(q_k(t) \cdot w_k(t)\), thereby scaling up or down the impact of a particular trading strategy relative to the whole portfolio. It is their choice, and it is easily understandable. If their weighing "factor" \(q_k(t)\) is any good, it should try to exploit the time periods where a strategy is positive and minimize its influence in times of low productivity.

The game is about money. And the question is: no matter what you do or how you do it, can you deliver alpha above your peers and most of all above market averages?

There seems to be a desire/requirement to have a certain dispersion in the alpha factor values. For example, see:

https://www.quantopian.com/posts/an-updated-method-to-analyze-alpha-factors#5d5a8bb111887c003ea67844

This is useful to make sure you are not equal-weighting.

What's the problem with equal-weighting? And if it is to be avoided, what spread in values is sufficient?

What's the problem with equal-weighting? And if it is to be avoided, what spread in values is sufficient?

I think when combining many algorithms, the closer the constituent position sizes reflect confidence in the strength of each position's alpha, the better the combined portfolio will turn out. At least in my head that makes sense. An equally-weighted portfolio may be giving too high a vote to its worst positions and too low a vote to its strongest positions. It could lead to situations where the weakest candidates are accidentally promoted: if they happen to have been given inflated (equal) weight across many portfolios, the overweighting will propagate.

Some additional guidance on the evil of an equal-weighted alpha factor (see https://www.quantopian.com/posts/how-to-get-an-allocation-in-2019 ):

Use the optimizer with care: while it's tempting to squash all risk by relying on the optimizer, doing so can have very detrimental effects on your alpha. For example, one thing we often observe is that due to certain constraint settings the resulting portfolio ends up being equal weighted. Thus, your original factor that scores stocks where it has a lot of confidence highly and vice-versa is losing all that valuable sensitivity due to the optimizer. Remember that we only see your final EOD holdings, not your actual factor scores. Try to have your final portfolio be the most accurate representation of the original factor. To achieve this, you should use the optimizer as little as possible and not worry too much about exposures, especially if specific returns look good. Code-wise, you should not use MaximizeAlpha and instead TargetWeights. This is a good place to start: order_optimal_portfolio(opt.TargetWeights(weights), constraints=[])

So I think what this is saying is that the optimizer has the effect of converting to a single signed bit (-1 or +1 or 0):

import numpy as np

# Collapse the factor to its sign: every long gets the same weight, every short gets the same weight.
alpha_opt = np.sign(alpha)
# Rescale so the absolute weights sum to 1 (unit gross exposure).
alpha_opt = alpha_opt / np.sum(np.absolute(alpha_opt))

Thus the factor sorts the QTU universe into three buckets: long, short, neutral.

One way to do the optimization would be to have an optimization algorithm that is just allowed to re-shuffle the stock rankings across the QTU. For example, if a portfolio consists of three stocks, A, B, and C, then a brute-force optimizer would be allowed to try these orderings to see if all of the constraints can be met:

ABC
ACB
BAC
CAB
BCA
CBA

Brute-force optimization could work for 3 stocks, but for a portfolio of even 100 stocks there are 100! orderings, approximately 10^158 trials. One would need an optimization routine a lot more efficient than brute-force search.
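A toy illustration of the brute-force idea for the three-stock case (the constraint check is just a stand-in):

from itertools import permutations

stocks = ['A', 'B', 'C']
alpha = {'A': 0.8, 'B': 0.1, 'C': -0.5}   # hypothetical scores

def meets_constraints(ordering):
    # Stand-in for the real exposure/risk checks on a candidate ranking.
    return True

original = sorted(stocks, key=alpha.get, reverse=True)
candidates = [p for p in permutations(stocks) if meets_constraints(p)]
# Keep the admissible ordering that displaces the fewest names from the original ranking.
best = min(candidates, key=lambda p: sum(p.index(s) != original.index(s) for s in stocks))
print(best)   # ('A', 'B', 'C') here, since no constraint binds

Clearly this only works as a cartoon; the point is that the search space explodes factorially.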

A possible way to do the optimization for style risk factors would be like this:

alpha_optimal = alpha + w_m*momentum + w_mc*market_cap + w_v*value + w_mr*mean_reversion + w_vol*volatility  

The optimization problem then becomes trying to find the risk factor weights that reduce the style risk exposures to acceptable levels.

There's still sector exposure to deal with, but perhaps just running things over each sector separately and then combining would do the trick. Basically find alpha_optimal_s (as I define above, but only run for a single sector, s), and then combine by summing the alphas across all sectors (perhaps with a market cap weighting).
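One concrete way to read that recipe--and this is just my sketch, not Q's method--is to solve for the style weights by least squares, which amounts to residualizing the alpha against the style exposures (arrays below are hypothetical, one row per stock):

import numpy as np

def neutralize_alpha(alpha, style_exposures):
    # alpha: (n_stocks,) array of raw factor scores.
    # style_exposures: (n_stocks, n_styles) array, e.g. columns for momentum,
    # market cap, value, mean reversion, and volatility.
    X = np.column_stack([np.ones(len(alpha)), style_exposures])   # include an intercept
    coefs = np.linalg.pinv(X).dot(alpha)   # least-squares fit of alpha on the styles
    # The fitted coefficients play the role of -w_m, -w_mc, ... above; the residual
    # is the alpha with those style exposures (linearly) removed.
    return alpha - X.dot(coefs)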

It is a mystery why Q didn't choose to fix the optimizer, versus just recommending not using it. Maybe there's no way to add an appropriate constraint consistent with using CVXOPT/CVXPY as the optimization engine? They invested a lot of precious capital in developing and supporting the optimizer, and now it would seem to be a very expensive boat anchor (unless it is useful to the Q fund construction).

@Grant, you say: “Maybe there's no way to add an appropriate constraint consistent with using CVXOPT/CVXPY as the optimization engine?”

The following thread would contradict that: https://www.quantopian.com/posts/reengineering-for-more. At least, it provides quite an exception.

I have not tried CVXPY, but with CVXOPT you can let your program dictate the weights using factor combinations as you do, or use whatever else that has alpha generation, or you can take control and feed it what you can find even if it might have nothing to do with factors.

Anybody have a theory why Quantopian is interested in factors based on estimates? It is mentioned on these sites:

https://www.quantopian.com/posts/how-to-get-an-allocation-in-2019
https://www.quantopian.com/posts/new-dataset-guidance

And there is a new mini contest devoted to it:

https://www.quantopian.com/posts/new-tearsheet-challenge-on-the-estimates-dataset

Estimates have been around a long time, right? And I'd expect they are kind of a commodity data set, available industry wide. Wouldn't it be reasonable to assume that any broadly diversified estimates factor would have suffered alpha decay a long time ago? What am I missing here? Or maybe Q is hoping for a more idiosyncratic alpha (e.g. estimates provide alpha for a particular slice of the market; a bone still with a bit of meat that may have been overlooked by the starving masses of analysts)?

Or maybe there is some "me too" motivation here? I have to think that the hedge fund industry isn't so secretive that nobody knows what the other guy is doing. And Q has a direct line to Point 72, too.

Just seems like at least broadly (e.g. across the entire QTU), an estimates-based factor would be effectively a common one (i.e. cheaply obtained from sources other than Q).

Estimates have been around a long time, right? And I'd expect they are kind of a commodity data set, available industry wide.

I've been wondering the same. As you've pointed out, the FactSet suggestions have been quite conspicuous of late. It always strikes me as odd when they get very specific in how/where they want us to find alpha. (Presumably the best alpha is where everybody least expects it.) Maybe I'm too cynical, but I was assuming it might be a corporate decision that has more to do with the business development relationship with FactSet than any data science/market insight. Or maybe a bit of sunk cost fallacy (e.g. "We've invested so much engineering and business dev into the FactSet integration, it better pay off."). Or maybe they know through word of mouth that other firms are getting juicy alpha from this dataset and they want in on it. All speculation though, of course.

@Viridian All the alpha I ever found was from other data sources or did not fit the contest criteria anyway; we'll see how successful Q is in the long term using their approach.

@ Viridian Hawk

it might be a corporate decision that has more to do with the business development relationship with FactSet than any data science/market insight

Yeah. Kinda odd. Thomas W. is no dummy, yet he just throws out estimates as a challenge without any real quantitative justification other than it being "alternative data" and new to Quantopian. Either he's just following orders, or he did some homework on his own to justify the effort. I guess if you can get a bunch of eggheads to chew on it for the chance at making $100, what is there to lose? You've got the platform--might as well use it. Of course, there's the risk of wrongly concluding that the innovative "crowd-sourced" approach has found "new alpha" when it would turn out to be just over-fit noise if analyzed with enough data over a long enough time scale (the black-box nature of the approach can't help in this regard).

To me, it would make a lot more sense for Q to establish a running list of ideas that they've vetted, along with some justification for why each might be worth the crowd's effort. The list could be in random order, so as not to inject bias. There ought to be 50-100 ideas on it, if they expect to really leverage the potential of the new signal-combination approach, rather than doling out one idea at a time.

There's also an opportunity cost here, in that if the crowd is focused on Q-directed alpha sources, then they aren't looking for undiscovered alpha in the data.

This is kinda interesting:

Step-by-step guide to Vanguard’s factor construction
https://advisors.vanguard.com/iwe/pdf/FASFMTH.pdf

The document describes "How we build the single-factor portfolios, step-by-step." Pretty cool. At first glance, I'd say that their single-factor portfolio construction recipe could be replicated on Quantopian.

Does anyone know why we have companies issuing guidance and analysts providing estimates in the first place? The whole enterprise would not seem to add much fundamental value to providing goods and services that people are willing to pay money for--the basic point of business. There must be some historical context and trends here, too. Also, I suspect there's a kind of standard algorithm for doing forecasting; it has been "reduced to practice" and everyone pretty much does it the same way (although at least on the analyst side of the equation, there may be claims of having some differentiating "secret sauce"). I'm also wondering to what extent guidance/estimates efforts are automated, versus having rooms full of experts in green eyeshades and plastic pocket protectors. My guess is that the actual work is mostly standardized and automated, but that humans effectively intervene to adjust the numbers (inject bias), based on some economic drivers (e.g. influencing the company stock price to the advantage of the company or of individuals whose pay is tied to it, or getting investors to buy/sell stocks, thereby generating commissions).

The whole thing would seem kinda silly and wasteful. We already have accountants reporting every three months and annually. Then we have people forecasting what the accountants might report. Maybe we need another layer of people forecasting what the forecasters might report?

@Grant,

Interesting observations, there's a little truth in all of them.

Maybe we need another layer of people forecasting what the forecasters might report?

Indirectly, our layer is there to decompose what the accountants report, and what the analysts are interpreting/forecasting, into a raw signal that verifies an existing alpha factor for a specific portfolio strategy. While most of it is probably standardized and automated, the search continues for alpha signals that are perhaps unique and differentiated. Easier said than done.

@ James -

the search continues for some alpha signals that are perhaps unique and differentiated

One needs a quantitative test for "unique and differentiated" or we just have words. Just because one is using "alternative data" doesn't mean that a rigorous test can be skipped. There's a framework at https://advisors.vanguard.com/web/c1/factor/our-difference which is pretty good. I don't think there's any way of getting around simply having enough time out-of-sample, and for low-frequency, low-Sharpe-ratio factors that is a long time to wait before one can determine whether a given factor is "unique and differentiated" or just transient noise or a passing fad of words not backed by numbers.

For the estimates data Q is pushing, say I wanted 95% confidence that a factor based on it is "unique and differentiated"--what would that test look like?

Don't be naive; of course there are existing metrics that measure "unique and differentiated," and these are not mere words; they are measurable. Here's a good example: http://fastml.com/revisiting-numerai/

I'm pretty sure Q is working on such a metric, if they don't already have one internally.
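In the spirit of the Numerai "originality" check in that link, here's a rough sketch of what such a screen might look like; the data layout, names, and thresholds are my own assumptions, not anything Q provides. It combines a mean daily rank correlation against each factor already in the library with a t-test on the candidate's own daily IC, for the 95% confidence you asked about:

```python
import pandas as pd
from scipy import stats

# Sketch of a "uniqueness" screen. Assumes `new_factor` and each entry of
# `existing_factors` are Series of factor values indexed by (date, asset),
# and `daily_ic` is a Series of the candidate's daily information
# coefficients. All names are hypothetical, not a Quantopian API.

def mean_rank_correlation(new_factor, old_factor):
    """Mean daily Spearman rank correlation between two factors."""
    both = pd.concat([new_factor, old_factor], axis=1, keys=["new", "old"]).dropna()
    daily = both.groupby(level="date").apply(
        lambda df: stats.spearmanr(df["new"], df["old"])[0]
    )
    return daily.mean()

def is_unique_and_significant(new_factor, existing_factors, daily_ic,
                              max_corr=0.3, alpha_level=0.05):
    """Accept only if weakly correlated with every existing factor and
    the mean daily IC is significantly greater than zero."""
    unique = all(
        abs(mean_rank_correlation(new_factor, old)) < max_corr
        for old in existing_factors
    )
    t_stat, p_value = stats.ttest_1samp(daily_ic.dropna(), 0.0)
    significant = (p_value < alpha_level) and (t_stat > 0)
    return unique and significant
```

The 0.3 correlation cutoff is arbitrary; the point is simply that "unique" and "significant" can each be reduced to a number and a threshold.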

Thanks James. It is interesting that Vanguard has a liquidity ETF. At first glance, it looks like it might be an awful lot like their existing small-cap ETF, except 2.6X more expensive! It kinda gives the impression they've completely isolated the long component of liquidity and are offering it as an ETF. Not so much, I suspect (the same may be true of their other factor ETFs).

@Grant, read the short paper you referenced. It is all so very reasonable that we might have a hard time finding fault with the approach taken. It does maintain an economic rationale behind its active portfolio-management methods.

However, performance-wise, the 4 funds mentioned that used those methods failed to show that they were superior.

None of the mentioned funds outperforms a market benchmark such as SPY. Furthermore, overall returns are quite low (single-digit, where positive at all), not to mention the very low daily trade volumes (fewer than 5,000 shares per day), making them almost unsuitable trading candidates.

Even though these ETFs are quite new to the scene and were designed for stable (read: positive) returns, they still managed greater-than-20% drawdowns. I fail to see the merit of it all.

Just my 2 cents.

@ Guy -

...performance-wise, the 4 funds mentioned that used those methods failed to show that they were superior
None of the mentioned funds outperform a market benchmark, something like SPY for instance.

Taking the Vanguard U.S. Liquidity Factor ETF (VFLQ) as an example, my concern would be: what exactly is one paying for? It has a median holding market cap of $4.9 billion, whereas its Russell 3000 benchmark has a median market cap of $73.9 billion. Guess what? The Vanguard Small-Cap ETF (VB) has a median holding market cap of $4.5 billion. I guess VFLQ is a "select" version of VB? How much of the performance of VFLQ could be captured with VB, at a much lower expense ratio? Is VFLQ just VB in a bottle with a different label? By the way, if you've never seen the movie Bottle Shock, I recommend it.

For any so-called factor ETF, I think this is what one has to wonder: Is the higher expense ratio justified, if it is more-or-less a re-spin of some other cheaper factor(s)?

Just curious...has anyone had any luck getting direct feedback on an algo from Q? If so, how did you manage it? Just send a backtest ID into Q support? Or a specific individual? Or something else? In the spirit of this thread, it would seem to be the most direct approach. Either a "thumbs up" or a "thumbs down" would be a direct binary metric.

Or maybe wait for the next mini contest? I've basically written off the main contest, since it doesn't conform to the most recent guidance (https://www.quantopian.com/posts/how-to-get-an-allocation-in-2019); I'd be concerned that it is just a waste of time now. If the focus is going to be on (monthly?) mini contests targeting vetted possible sources of alpha, why mess with the legacy contest?

Just stumbled across it:

https://open.factset.com/products/quantopian-enterprise/en-us

At the bottom, under "Related Products," there is an advertisement for "FactSet Estimates-Consensus," so one motivation for the mini contest may have been to promote a FactSet data set. Nothing wrong with that. "You scratch my back, I'll scratch yours," as the saying goes.

I added a link to my original post above:

Live Webinar: Winner Announcement and Tearsheet Review for the Estimates Dataset Tearsheet Challenge

Worth watching, as it gives concise explanations, by example, of what is needed in an alpha factor/signal. At the end, Thomas W. states that the winners will receive license agreements for their algos, a significant departure from the prior practice of requiring at least 6 months of out-of-sample data. It also differs in that the individuals receiving licenses, and some information about their strategies, were revealed--both probably good moves, in my opinion.

It is clear from the webinar that limiting the universe and various forms of risk management are generally undesirable. The idea is to run the factor across most/all of the QTU and not worry too much about common risk exposures (as they will hopefully average out upon factor combination at the fund level).

departure from prior practice of requiring at least 6 months of out-of-sample data

All estimates strategies have a one-year OOS due to the hold-out.

@ Viridian Hawk -

Well, a kinda-sorta, pseudo one-year OOS. The algos are black-box, so who knows if the estimates data are even being used. If it were my capital, I'd want to see some true OOS. I have to figure the licensing includes a gradual increase in the weight of the signals, perhaps rising to a steady-state value after a couple of years, when one would have better confidence.