Beta Zero-Targeting - Automatic - Never worry about Beta again

A Beta value closer to zero helps in the contest. Beta measures a portfolio's volatility relative to the market's: how much the portfolio tends to move when the market moves.

This code shows a way to move Beta toward zero automatically (using SPY and TLT as examples) and also how to calculate Beta.

An example of dynamically adjusting long and short exposure instead.

'''
    Zero-Beta targeting example.  
    Automatically adjust proportions of spy and tlt to hold Beta to around 0.0 or beta_target.  
    c.beta_limit is one strictness adjustment, there are others.  
    In terms of *effect* on Beta generally:
      - Longs (many of them) tend to act like SPY (increase Beta)
      - Shorts often act like TLT here (decrease Beta)
'''
import pandas as pd

def initialize(context):  
    c = context  
    c.spy          = sid(8554)  
    c.tlt          = sid(23921)  
    c.beta         = 1.0    # Assumed starting beta  
    c.beta_target  = 0.0    # Target any Beta you wish  
    c.beta_limit   =  .01   # Pos/neg threshold; rebalance only when beta is outside this band around the target
    c.spy_limit_hi =  .95   # Max ratio of spy to portfolio  
    c.spy_limit_lo = 1.0 - c.spy_limit_hi  
    c.beta_df      = pd.DataFrame([], columns=['pf', 'spy'])  
    schedule_function(balance, date_rules.week_start(), time_rules.market_open())

def balance(context, data):  
    c = context  
    if not c.portfolio.positions:   # Initial positions to start, reusing spy_limit  
        order_target_percent(c.spy, c.spy_limit_hi)  
        order_target_percent(c.tlt, c.spy_limit_lo)  
        return

    beta = calc_beta(c)  
    bzat = beta - c.beta_target     # bzat is beta-zero adjusted for target  
    if (c.beta_target - c.beta_limit) < beta < (c.beta_target + c.beta_limit):  # Skip if inside boundaries  
        return

    # -------- Adjust positions to move toward target Beta --------  
    pos_val   = c.portfolio.positions_value  
    spy_val   = c.portfolio.positions[c.spy].amount * data.current(c.spy, 'price')  
    spy_ratio = spy_val / pos_val

    # Reduce spy & increase tlt or vice versa
    # The further away from target Beta, the stronger the adjustment.  
    # https://www.quantopian.com/posts/scaling for explanation of next line ...  
    temperance = scale(abs(bzat), 0, .30, .35, .80) # Not straight Beta, a portion of it.  
    adjust     = max(c.spy_limit_lo, spy_ratio - (bzat * temperance))  
    adjust     = min(c.spy_limit_hi, adjust)  # spy ratio no higher than spy_limit_hi  
    log.info('b{} spy {} to {}'.format('%.2f' % beta, '%.2f' % spy_ratio, '%.2f' % adjust))  
    order_target_percent(c.spy, adjust)  
    order_target_percent(c.tlt, 1.0 - adjust) # Remainder for tlt

def before_trading_start(context, data):  
    c = context  
    c.beta_df = c.beta_df.append({    # Beta calc prep  
            'pf' : c.portfolio.portfolio_value,  
            'spy': data.current(c.spy, 'price')}, ignore_index=True)  
    c.beta_df['spy_chg'] = c.beta_df.spy.pct_change()  
    c.beta_df[ 'pf_chg'] = c.beta_df.pf .pct_change()  
    c.beta_df            = c.beta_df.iloc[-252:]  # keep only the most recent ~252 rows (about one trading year)

def calc_beta(c):   # Calculate current Beta value  
    if len(c.beta_df.spy.values) < 3: return c.beta  
    beta = c.beta_df.pf_chg.cov(c.beta_df.spy_chg) / c.beta_df.spy_chg.var()  
    record(beta_calculated = beta)  
    return beta

def scale(wild, a_lo, a_hi, b_lo, b_hi):  
    ''' Based on wild value relative to a_lo_hi range,  
          return its analog within b_lo_hi, with min b_lo and max b_hi  
    '''  
    return min(b_hi, max(b_lo, (b_hi * (wild - a_lo)) / (a_hi - a_lo)))


The backtest. Note Beta values hugging zero in the custom chart.

Now this is awesome; I've always wondered how to actually get beta into a usable format that you can adjust.

Yeah, there's a lot to like here: the clean, clear-cut code, its tailorability, and the dynamic way it automatically relaxes near zero.
Plus the example (using just SPY and TLT) even happens to beat the benchmark and does great in '08.

With some allocation apportioning, it can theoretically be added to existing algos easily at 20% of the portfolio or whatever you choose (change c.spy_limit_hi and c.spy_limit_lo) as a start for testing; a sketch of that idea follows below. There's a separate discussion about preferred Beta practices.
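For illustration, here is a minimal sketch of what that apportioning might look like. The c.overlay_weight fraction is a hypothetical addition, not part of the original code; everything outside that fraction is left for the main strategy's own positions.

# Hypothetical sketch: run the SPY/TLT beta overlay on only a fraction of the portfolio.
# c.overlay_weight is an assumption, not in the original post.
def initialize(context):
    c = context
    c.spy            = sid(8554)
    c.tlt            = sid(23921)
    c.overlay_weight = 0.20              # fraction of the portfolio devoted to the overlay
    c.spy_limit_hi   = 0.95              # max share of the overlay given to SPY
    c.spy_limit_lo   = 1.0 - c.spy_limit_hi

def set_overlay(context, adjust):
    ''' adjust is the SPY ratio within the overlay, as computed in balance() above. '''
    c = context
    order_target_percent(c.spy, c.overlay_weight * adjust)
    order_target_percent(c.tlt, c.overlay_weight * (1.0 - adjust))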

You'll find this to be instructional and a useful tool for development. Like everything else, it's not the finish line; you tailor it to your liking.

Once you're no longer in a cage-match with Beta, you can focus on the strategies that might be attractive to Quantopian for an allocation: technical, fundamental, equally long and short the top and bottom X percentiles, sectors, leverage, cash management, returns, alpha, Sharpe and other metrics, various date ranges including downturns, grappling with partial fills, and so on.

You'll notice, for example, that if this leaves you heavily SPY-weighted, you can afford to bring in more high-Beta stocks instead, and the basics in this code can make that happen automatically. The beta calc code can also be adapted as one way to know the betas of individual stocks, as in the sketch below.
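For instance, here is a minimal sketch (not from the original post) of the same cov/var calculation applied to a single stock versus SPY, using price history pulled inside a scheduled function or before_trading_start:

def calc_stock_beta(context, data, stock, window=252):
    ''' Beta of an individual stock versus SPY over the last `window` trading days.
        Same cov/var approach as calc_beta above; stock is any equity, e.g. sid(24) for AAPL. '''
    prices    = data.history([stock, context.spy], 'price', window + 1, '1d')
    returns   = prices.pct_change().dropna()
    spy_chg   = returns[context.spy]
    stock_chg = returns[stock]
    if spy_chg.var() == 0:
        return None
    return stock_chg.cov(spy_chg) / spy_chg.var()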

The scaling example applies a stronger force the further the value is from the target, up to a threshold.

Huge applications in the institutional arena

How do you tune the way it scales beta based on TLT specifically? I feel like this is a weakness, because when I attempt to change TLT to any other inverse-SPY ETF, or change SPY to any other high-beta ETF, it fails to bring the Beta down to 0. It's almost like it's tuned to the SPY/TLT pair only.

Are these the values that need to be adjusted? I read the thread at https://www.quantopian.com/posts/scaling and still couldn't understand why those numbers specifically were chosen:

temperance = scale(abs(bzat), 0, .30, .35, .80)  

PSQ, DOG, SH, MYY, SBB, RWM, QID, DXD, SDS, MZZ, SDD, TWM, SQQQ, SDOW, SPXU, SMDD, SRTY
Those are all inverse equity-index ETFs, and they all work fine in place of TLT to bring Beta to zero.

The variable bzat (beta zero adjusted for target) is in case someone wants to target a beta of 1.2 or something other than zero (say you're an investment institution with a client who is fine with the volatility of their investments going higher than the market's, for example).
The code can be simplified if you want to remove that; it will then only target 0, but it will be easier to read. So that line above would become:

temperance = scale(abs(beta), 0, .30, .35, .80)  

That is saying ...
For the current beta (turned into a positive value), wherever it falls in the range 0 to .30 (I chose .30 because it is the contest limit for the beta badge), find its matching point proportionally between .35 and .80; those endpoints are empirical.
The adjustment to the SPY and TLT percentages is then made using only that resulting fraction of the current beta value.
If beta is 0, temperance will still be some value, but it is multiplied by that 0 beta, so neither SPY nor TLT is adjusted.
If beta is -.30 or below, or +.30 or above, then SPY is adjusted by .80 (80%) of the beta value.
Everything in between is proportional.
Works great.
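To make that concrete, here is a small sketch (not from the original post) of what scale() returns at a few beta values; the comments show rounded outputs:

def scale(wild, a_lo, a_hi, b_lo, b_hi):
    ''' Same helper as above: map wild from the a range onto the b range, clamped to [b_lo, b_hi]. '''
    return min(b_hi, max(b_lo, (b_hi * (wild - a_lo)) / (a_hi - a_lo)))

print(scale(0.00, 0, .30, .35, .80))   # 0.35  gentle pull when beta is near zero
print(scale(0.15, 0, .30, .35, .80))   # 0.40  mid-range
print(scale(0.30, 0, .30, .35, .80))   # 0.80  full-strength pull at the contest limit
print(scale(0.50, 0, .30, .35, .80))   # 0.80  clamped, never above b_hi

The overlay then shifts the SPY ratio by only that fraction of the current beta, as in the adjust line of balance().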

Anyone who has questions can also feel free to click my name and send me a message.

The backtest code also includes this because I use it all the time so I can see clearly what is happening. To turn it on, just comment out 'return' in handle_data().

Ah yes, that explains it better. The scaling is pretty important; without it, it doesn't do as well. So if I changed the asset, I would need to try to find the right scaling.

By the way, I'm a big fan of PvR too. I took the liberty of modifying your code slightly to make a 'stand-alone' version of your tool like PvR so that people can use it the same way. I made a few modifications as follows:

  • Improved ordering since order_target_percent() is unreliable
  • Error messages
  • Logging/graphs
  • Removed the SPY high/low limit; I think this worsens the returns, but it makes for a more "pure" beta adjuster

It's not fully tested, especially how it interacts with another algorithm that already uses SPY/TLT, but here it is anyway.

@LukeIzlar, why is the order_target_percent() unreliable?

I'm not entirely sure of the reason, but as far as I can tell it's unreliable in live trading because it trades without taking current open orders into account and it doesn't respect leverage limits. I can tell an algo exactly how many shares to order() and it will execute perfectly; if you record leverage, it will be a nearly flat line at .98 - 1.0. If you use order_target_percent() in a complicated algorithm, it can go all over the place, depending on how often and how large your rebalancing is. In some of my backtests I've seen it try to go as high as 1.5 leverage when I was doing a large rebalance. I had to tell my algorithm specifically to sell X shares of A, wait for that to execute, then buy Y shares of B in order for it to execute properly.
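As an illustration of that approach (a sketch under stated assumptions, not the poster's actual tool), one way to order explicit share counts while skipping assets that still have open orders:

def rebalance(context, data):
    # Sketch: order explicit share counts and skip assets with open orders,
    # so new orders don't stack on top of pending fills and push leverage up.
    targets = {context.spy: 0.60, context.tlt: 0.40}   # illustrative target weights
    open_orders = get_open_orders()
    for asset, weight in targets.items():
        if asset in open_orders:       # wait until earlier orders have filled
            continue
        price = data.current(asset, 'price')
        if not (price > 0):            # guards against NaN or zero price
            continue
        target_shares  = int((context.portfolio.portfolio_value * weight) / price)
        current_shares = context.portfolio.positions[asset].amount
        delta = target_shares - current_shares
        if delta:
            order(asset, delta)        # explicit share count instead of a percent target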

A small backtest for working with just the beta calculation only: it buys SPY once. Set the input capital to 1040 if you would like to see the result with no margin and 100% cash usage.
This backtest can also be educational. There are also examples of obtaining a pipeline beta for each security.
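For reference, a minimal sketch of what a per-security pipeline beta might look like, assuming Quantopian's built-in SimpleBeta factor with these arguments (an assumption on my part, not the backtest attached to the post):

from quantopian.pipeline import Pipeline
from quantopian.pipeline.factors import SimpleBeta
from quantopian.algorithm import attach_pipeline, pipeline_output

def initialize(context):
    beta = SimpleBeta(target=sid(8554), regression_length=252)   # beta to SPY over ~1 year
    attach_pipeline(Pipeline(columns={'beta': beta}), 'beta_pipe')

def before_trading_start(context, data):
    context.betas = pipeline_output('beta_pipe')['beta']   # a Series of betas, one per security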

I've tried shortening the beta calculation to 1, 3, and 6 months and there is no change in the simulation. Is that expected?

context.beta_df.ix[-252:]
to:
context.beta_df.ix[-21:]
context.beta_df.ix[-63:]
context.beta_df.ix[-126:]

This is awesome

At the top of this thread, Blue Seahawk writes: "A Beta value closer to zero helps favorably in the contest". In the rules of the competition it states explicitly: "Your algorithm's performance must have low correlation to the general market's performance. This correlation is calculated as the beta-to-SPY, and it must be between 0.3 and -0.3".

This leads me to some questions:

1) Clearly the rules as stated indicate a hard condition boundary of +/- 0.3 for beta, otherwise we are out. However, within that allowable range, how exactly does the scoring depend on beta? Is it linear or non-linear, and how quickly does the penalty rise as we deviate from the ideal beta=0?

2) I have been assuming, as Blue Seahawk says, that ideally I should be targeting beta=0. However is that necessarily the best strategy? Maybe not. If the penalty for deviating from beta=0 is not TOO severe (could someone on Quantopian competition judging staff please quantify?), then in fact there may perhaps be some distinct advantage in DELIBERATELY allowing beta to wander from zero depending on overall market conditions (while of course remaining within the allowed +/- 0.3 band).

Here is an example of what I'm thinking about with regard to Equity Long-Short strategies: Until now I have been trying to keep my Long & Short positions balanced so as to maintain beta close to zero. However if the market goes up then perhaps I should weight my positions as far as possible to the Long side (while still remaining within the allowable beta band) to squeeze as much as possible out of the bullish market move. Conversely, if the market goes down then perhaps I should weight my positions as far as possible to the Short side (while also remaining within the allowable beta band) to squeeze as much as possible out of the bearish market move. This would necessitate a mechanism to switch the net portfolio weighting bias based on market direction, and the viability of that idea would depend on how Quantopian competition scoring weights return & risk (Sharpe, Sortino, etc) on one hand vs. deviation of beta on the other hand.

So, some more questions:

3) Does anyone know exactly how Quantopian competition scoring weights return & risk (Sharpe, Sortino, etc) on one hand vs. deviation of beta from the ideal zero on the other hand? Quantopian staff, can we have this info?

4) Is this a legitimate idea, or would it be considered "gaming the system" for the competition? I would assume not because, if Quantopian judges specify exactly what they really want and how they weight the various components, then this is simply providing a more accurate statement of Quantopian's ideal objective function, which is presumably exactly what we are trying to achieve in the competition.

Comments / help / ideas, anyone, please!
Best wishes, Tony.

Those are some things we need to think about, and they're questions for Q. They'll be announcing some changes to scoring soon.

I just want to point out that at the time of my original post, the Optimize API (most recent), which had been available for a couple of months, was not on my radar yet; it is what I'm using now, with no beta worries. Meanwhile I'm currently using scaling in three other ways and updated that a few days ago, although the one above has an extra layer: a stronger pull the further it is from the target. I wonder if np.log() or std() or something could simplify it.

Hi Dan, thanks for re-directing me to the thread entitled "Betafishing".

Likewise, thanks Blue.

Cheers, best regards, Tony

I am actually getting a beta of 1.09 when I run the 1st algorithm for the interval 08/30/2010 to 12/31/2014.

thanks
-kamal

Wait, that's just my June algo without scaling, not the 1st one with scaling.

By the way, I think Q calculates beta at the end of the day. Scheduling calc_beta then should give a better match to their value.
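For example, a minimal sketch of scheduling the beta calculation just before the close (record_beta is a hypothetical wrapper, not part of the original code):

def initialize(context):
    # Record beta shortly before the close each day so it lines up with an end-of-day figure.
    schedule_function(record_beta, date_rules.every_day(),
                      time_rules.market_close(minutes=1))

def record_beta(context, data):
    record(beta_eod=calc_beta(context))   # plot end-of-day beta on the custom chart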