MaximizeAlpha under the hood, and TargetWeights

This is to clarify what MaximizeAlpha does: it does not allocate proportionally to the alpha signals we supply. It is reasonable to assume many of us have been operating under that belief due to wording like this in Lesson 7, Portfolio Management. Emphasis mine.

We will use MaximizeAlpha, which will attempt to allocate capital to assets proportional to their sentiment scores.

As shown in the backtest below, the sentiment scores are not used proportionally. Someone might want to edit that page.

Help, on the other hand, is clear on this point in a few places:

Since MaximizeAlpha tries to put as much capital as possible to the assets with the largest alpha values, additional constraints are necessary to prevent the optimizer from trying to allocate “infinite” capital.

Without a constraint on gross exposure, this objective will raise an error attempting to allocate an unbounded amount of capital to every asset with a nonzero alpha.

Without a constraint on individual position size, this objective will allocate all of its capital in the single asset with the largest [supplied alpha value] expected return.

Other areas could leave the reader with a different impression, namely that their alpha values carry proportional significance:

Ideally, alphas should contain coefficients such that alphas[asset] is proportional to the expected return of asset for the time horizon over which the target portfolio will be held.

MaximizeAlpha takes a Series mapping assets to “alpha” values for each asset, and it finds an array of new portfolio weights that maximizes the sum of each asset’s weight times its alpha value.

To be clear, rather than any operation involving "times its alpha value", MaximizeAlpha allocates up to the PositionConcentration limit when one is present (most often the maximum position concentration) and stops allocating once it hits a limit imposed by the other constraints.
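A rough sketch of that behavior (illustrative only; the real optimizer solves a linear program, and the names `max_gross` and `max_pos` are just stand-ins for MaxGrossExposure and PositionConcentration settings): hand the maximum position weight to assets in order of signal strength until the gross-exposure budget is spent.

```python
import pandas as pd

def sketch_maximize_alpha(alphas, max_gross=1.0, max_pos=0.015):
    """Toy approximation of MaximizeAlpha under MaxGrossExposure(max_gross)
    and PositionConcentration(+/-max_pos): hand each asset, strongest
    |alpha| first, a full-sized position with the sign of its alpha,
    until the gross-exposure budget is spent. Weaker signals get nothing."""
    order = alphas.abs().sort_values(ascending=False).index
    weights, budget = {}, max_gross
    for asset in order:
        alpha = alphas[asset]
        if alpha == 0 or budget <= 0:
            break
        size = min(max_pos, budget)          # cap at the position limit
        weights[asset] = size if alpha > 0 else -size
        budget -= size                       # spend down the gross budget
    return pd.Series(weights)

# The two strongest signals both get the 1.5% cap regardless of how
# different their magnitudes are; the leftover budget goes to the next one.
print(sketch_maximize_alpha(
    pd.Series({'A': 2.9, 'B': -3.1, 'C': 0.4, 'D': 0.0}),
    max_gross=0.04, max_pos=0.015))
```

Note how this mirrors the log excerpts below: assets with very different alphas receive the same max-sized weight, and weaker or zero signals receive nothing.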

This backtest is cloned from the Lesson 7 link above, with optimization-weight logging code added (show_opt_weights()).
The source contains output for four trading days.

Observations

Excerpts

There are times when the alpha input signals are zero, yet positions are modified rather than closed:

2016-01-11 06:31 show_opt_weights:224 INFO  
     alpha        old           new         pct  
  0.000000   -0.013653 =>  -0.010581        77.5%    Equity(19509 [PB])  
  0.000000   -0.014005 =>  -0.010853        77.5%    Equity(17448 [SLG])  
  0.000000   -0.014122 =>  -0.010880        77.0%    Equity(3128 [TGNA])  
  0.000000   -0.014578 =>  -0.011208        76.9%    Equity(25837 [MGLN])  
  0.000000   -0.014589 =>  -0.011170        76.6%    Equity(38760 [CLNY])  
  0.000000   -0.014614 =>  -0.011199        76.6%    Equity(1995 [CUZ])  
  0.000000   -0.014851 =>  -0.011311        76.2%    Equity(42811 [TUMI])  
  0.000000   -0.014468 =>  -0.010993        76.0%    Equity(46694 [IMS])  
  0.000000   -0.015006 =>  -0.011396        75.9%    Equity(21975 [ALE])  

Some existing positions that still have alpha signals are closed (presumably because they are no longer among the strongest signals):

Close  
     alpha         old          new  
  0.047475    0.007497 =>          0  Equity(301 [ALKS])  
  0.593333    0.015064 =>          0  Equity(5303 [NHI])  
  0.486667   -0.016469 =>          0  Equity(5634 [OKE])  
  0.140000    0.013884 =>          0  Equity(5769 [PBCT])  
  0.000000    0.015034 =>          0  Equity(6190 [PSB])  
 -0.136667   -0.013643 =>          0  Equity(8233 [WNC])  

Positions whose signals indicate a swap from short to long (or vice versa) are instead simply closed (same outcome as above):

     alpha         old          new  
 -1.066667    0.011697 =>          0  Equity(1942 [CTB])  
  1.900000   -0.013903 =>          0  Equity(3219 [GK])  
  0.750000   -0.013526 =>          0  Equity(6297 [QDEL])  
  2.230000   -0.014115 =>          0  Equity(8278 [WIBC])  
  0.273333   -0.013332 =>          0  Equity(13508 [CLB])  
  1.890000   -0.013395 =>          0  Equity(16059 [NUS])  
  1.686667   -0.003494 =>          0  Equity(17632 [CHRW])  
  1.266667   -0.013662 =>          0  Equity(17646 [DRQ])  
 -0.090000    0.014392 =>          0  Equity(32714 [LDOS])  
  1.900000   -0.014776 =>          0  Equity(33879 [TRS])  

Meanwhile it is also important to understand that the weights are not always max_pos_size. Here, two are lower, presumably due to other constraints that apply to them.

Open  
     alpha         old          new  
 -3.100000           0 =>  -0.015000  Equity(474 [APOG])  
  2.890000           0 =>   0.015000  Equity(557 [ASGN])  
  2.500000           0 =>   0.013172  Equity(1942 [CTB])  
 -3.100000           0 =>  -0.015000  Equity(1995 [CUZ])  
  2.890000           0 =>   0.015000  Equity(3037 [FSS])  
 -3.100000           0 =>  -0.015000  Equity(3128 [TGNA])  
 -3.100000           0 =>  -0.015000  Equity(3219 [GK])  
  2.500000           0 =>   0.013423  Equity(3424 [AJRD])  

So MaximizeAlpha can roughly be thought of as Maximize_Position_Concentration_for_Just_the_Strongest_Alpha_Signals.

The Lesson 7 algo invests in 60 to 80 of the 2000+ stocks coming from pipeline.

4 responses

I've been struggling with this, since TargetWeights was causing me grief due to the QTU issue. The upshot is that algorithms using MaximizeAlpha are more volatile than they need to be because of the increased position concentrations.

Scoring high on Quantopian depends heavily on low volatility, and the only effective way I know of so far to achieve low volatility is a diversified portfolio. However, bounding by the position-concentration constraint leads to a more-or-less equal-weighted portfolio that no longer favors your top-ranked names, which isn't ideal either. You probably want to give the higher-rated positions more weight in the portfolio.

It seems like it would be advantageous to have some middle ground between equal-weighted and going all-in on the top rated. Is there no way to specify position concentration constraints on a per-symbol basis?

I'm just thinking aloud here, but it could also be useful to specify a confidence value alongside each alpha estimate. Let's say your alpha factor comes from a linear regression; you could feed the optimizer the R^2 as well, and it would avoid putting all the eggs in just a handful of baskets when confidence is low, and increase position concentrations in the leading stocks when confidence is high. Implementing this is probably nontrivial, though.
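One simple way to get that effect outside the optimizer, a sketch (purely illustrative; this is not a Quantopian feature, and the blending rule is an assumption): interpolate between alpha-proportional weights and equal weights using the confidence score.

```python
import pandas as pd

def confidence_weights(alphas, confidence):
    """Blend alpha-proportional weights with equal weights.
    confidence=1.0 -> fully proportional to alpha (concentrated),
    confidence=0.0 -> equal-weighted (diversified)."""
    prop = alphas / alphas.abs().sum()                 # proportional to alpha
    sign = alphas.apply(lambda a: (a > 0) - (a < 0))   # +1 long, -1 short
    equal = sign / len(alphas)                         # equal size, keep sign
    return confidence * prop + (1.0 - confidence) * equal

alphas = pd.Series({'A': 3.0, 'B': 1.0, 'C': -1.0, 'D': -3.0})
print(confidence_weights(alphas, confidence=0.0))   # equal-sized positions
print(confidence_weights(alphas, confidence=1.0))   # proportional to alpha
```

An R^2 from the regression could be plugged straight in as the confidence value.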

I've been struggling with this, since TargetWeights was causing me grief due to the QTU issue.

My understanding is that there is a bug that results in positions being held that should be zero. Reportedly, it is non-trivial to fix, and is relatively low on the engineering priority list. Q has been herky-jerky in their responsiveness lately; it would be nice to have some clarity on the precise nature of the problem, and when it might be fixed.

MaximizeAlpha is sort of all-in (up to position concentration and other constraints) on the highest alpha weights we supply, discarding the rest.
In my experience, switching over to TargetWeights (normalized) can result in many more stocks and lower volatility, yet sometimes quite a bit lower returns.

In norm() below, try trim at different values; it re-normalizes after trimming. If you are already using percentile_between() in pipeline to screen out weaker signals, that may be the better route; this is merely another option for testing. TargetWeights doesn't require any constraints at all, so try removing PositionConcentration when using it.

def norm(c, d):    # c: context (unused); d: a Series of alphas, normalized pos and neg separately
    # don't write back into the same DataFrame or it would create NaNs
    d = d.dropna()                        # ensure no NaNs
    if d.min() >= 0 or d.max() <= 0:      # rare: all positive or all negative, center on zero
        d -= d.mean()
    pos  = d[ d > 0 ]
    neg  = d[ d < 0 ]
    pos /=   pos.sum()                    # first normalization: longs sum to 1
    neg  = -(neg / neg.sum())             # neg.sum() is negative, so negate to keep shorts negative
    do_trim = 1                           # null zone for taking out the weaker signals
    if do_trim:
        trim = .40                        # ratio of each side to remove
        pos  = pos.sort_values(ascending=False).head(int(len(pos) - trim * len(pos)))
        neg  = neg.sort_values(ascending=False).tail(int(len(neg) - trim * len(neg)))
        pos /=   pos.sum()                # re-normalize after trimming
        neg  = -(neg / neg.sum())
    pos *= .5                             # half long, half short, for leverage target 1.0
    neg *= .5
    return pos.append(neg)

def trade(context, data):
    order_optimal_portfolio(
        objective   = opt.TargetWeights( norm(context, context.output.alpha) ),
        constraints = [],    # TargetWeights needs no constraints
    )
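To sanity-check the invariants of norm() above, here is a condensed, self-contained restatement of the same pos/neg logic (trim always on), run on toy data: after trimming and re-normalizing, the long and short legs should each sum to +/-0.5, for a gross leverage of 1.0.

```python
import pandas as pd

def norm_check(d, trim=0.40):
    """Condensed restatement of norm() above, purely to verify
    its invariants on toy data (uses pd.concat in place of append)."""
    d = d.dropna()
    if d.min() >= 0 or d.max() <= 0:          # all one sign: center on zero
        d = d - d.mean()
    pos = d[d > 0] / d[d > 0].sum()           # longs sum to 1
    neg = -(d[d < 0] / d[d < 0].sum())        # shorts sum to -1
    pos = pos.sort_values(ascending=False).head(int(len(pos) - trim * len(pos)))
    neg = neg.sort_values(ascending=False).tail(int(len(neg) - trim * len(neg)))
    pos = pos / pos.sum() * 0.5               # re-normalize each leg to 0.5 gross
    neg = -(neg / neg.sum()) * 0.5
    return pd.concat([pos, neg])

w = norm_check(pd.Series({'A': 3.0, 'B': 2.0, 'C': 1.0,
                          'D': -1.0, 'E': -2.0, 'F': -3.0}))
print(w[w > 0].sum(), w[w < 0].sum())   # each leg sums to +/-0.5
```

With trim=0.40 on three names per side, only the strongest long and the strongest short survive, which shows how aggressive that ratio is on small universes.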

Also thinking out loud: imagine 10 or 20 alphas from pipeline, and each day a report on how each stock correlates with each of the (perhaps predictive) alpha values from last week; a pattern might emerge showing which stocks are best weighted on which alphas.

With TargetWeights normalized over 500 stocks, some weights would surely translate to less than one share, and those will not be ordered by opt.
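A sketch of that point with illustrative numbers (the threshold check is an assumption about how a sub-one-share target behaves, not Quantopian internals): a target weight implies fewer than one share when weight * portfolio_value / price is below 1.

```python
import pandas as pd

def sub_share_weights(weights, prices, portfolio_value):
    """Return the assets whose target weight implies less than one
    share, and which therefore cannot be ordered."""
    shares = weights * portfolio_value / prices   # implied share count
    return shares[shares.abs() < 1]

weights = pd.Series({'CHEAP': 0.0004, 'PRICEY': 0.0004})
prices  = pd.Series({'CHEAP': 3.0,    'PRICEY': 500.0})
# Same tiny weight: fine for a $3 stock, under one share for a $500 stock.
print(sub_share_weights(weights, prices, portfolio_value=1_000_000))
```

With 500 names, the weakest normalized weights can easily land in this zone for higher-priced stocks.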

Also testing: normalizing each sector individually, followed by an overall re-normalization, so the portfolio holds stocks from every sector.
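That experiment can be sketched with a pandas groupby (sector labels here are hypothetical; one normalization scheme among several possible):

```python
import pandas as pd

def sector_normalize(alphas, sectors):
    """Scale each sector's alphas by that sector's gross signal, so
    every sector contributes equal gross weight, then re-normalize
    the whole series to gross 1.0."""
    by_sector = alphas.groupby(sectors).transform(lambda s: s / s.abs().sum())
    return by_sector / by_sector.abs().sum()

alphas  = pd.Series({'A': 4.0, 'B': 4.0, 'C': 1.0, 'D': -1.0})
sectors = pd.Series({'A': 'tech', 'B': 'tech', 'C': 'energy', 'D': 'energy'})
# Tech's raw signals are 4x energy's, yet each sector ends with 0.5 gross.
print(sector_normalize(alphas, sectors))
```

Without the per-sector step, tech would soak up 80% of the gross here; with it, both sectors are represented equally.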

Toward a middle ground between MaximizeAlpha and TargetWeights:
MaximizeAlpha maxes out PositionConcentration for the most extreme weights until it runs out of room, so the weaker signals are not ordered, and the ordered ones are weighted largely the same aside from the effect of other constraints;
TargetWeights assigns differing weights, while still leaving the weakest (sub-one-share) signals unordered.

Returns are not the point here. Replace these pipeline factors for very different results; they are placeholders. And the trim ratio makes a big difference.