A Simple Downside Protection Model

Quantopians:

We had a timely piece on "Avoiding the Big Drawdown," launched August 13--a few business days before the recent market chaos.

Our approach to avoiding massive drawdowns is to focus on simple timing rules: absolute momentum and trend-following metrics at the asset-class level. Our analysis of the downside protection model (DPM), applied to various market indices, indicates it can lower maximum drawdown risk while still offering a chance to participate in the upside of a given asset class. We make no claims that this is the "best" or most "complex" system out there. In fact, we don't want the best or most complex timing model--we want a simple, non-optimized, robust timing model that doesn't work all the time. If a model worked all the time, it wouldn't keep working in the future. God I hate market efficiency.
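For readers who want the two rules concretely, here is a minimal sketch in pandas of the published DPM rules (12-month absolute momentum versus T-bills, and a 12-month moving-average filter, averaged 50/50); the function and variable names are ours, not from the paper.

import pandas as pd

def dpm_exposure(prices, tbill_index, lookback=12):
    # prices, tbill_index: monthly total-return series (pd.Series).
    # TMOM rule: stay invested if trailing excess return over T-bills is positive.
    excess = prices.pct_change(lookback) - tbill_index.pct_change(lookback)
    tmom = (excess > 0).astype(float)
    # MA rule: stay invested if price is above its trailing moving average.
    ma = (prices > prices.rolling(lookback).mean()).astype(float)
    # DPM: average the two signals -> exposure of 0.0, 0.5, or 1.0.
    return 0.5 * tmom + 0.5 * ma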

Of course, claims of potential high returns with lower risk should always be scrutinized. To explore our concept further, we teamed up with James Christopher of Quantopian, outlined the nuts and bolts of our downside protection model, and asked him to conduct his own experiment. James' analysis and results will surprise you. Interestingly enough, we are in the middle of a live "out of sample" test to determine if the DPM can help us avoid the big drawdown. Only time will tell... (YTD results attached, thanks to James)

If you'd like to explore more of our asset allocation research concepts/ideas, here is a list of posts we've done on asset allocation. We'd love to see Quantopians reverse engineer and tear up these ideas...

Always in search of the truth!

Wesley R. Gray, PhD
CEO/CIO Alpha Architect

http://www.alphaarchitect.com/

11 responses

Alpha Architect Downside Protection Model 2003-2015 (same code as above)


Took a look at the second of the two ...
Cash hit a low of -453,491 on 2003-08-01; however, the run errored out with "Something went wrong" in early 2006, and I'm not planning to try again.

Even after 2.x years it had not yet made a profit when the calculation is based on the amount actually spent (it was into margin).
Cash was as low as -200k as early as 2003-05-01 while context.account.leverage was still displayed as 1. I don't know why; maybe the value is updated behind the scenes and needs to refresh after every order or every fill. Edit: due to intraday leverage.

One way to guard against unintended margin is to track the lowest cash level reached.
Example:

def cash(context, data):       # Function just to keep this out of the way.
    c = context
    if 'cash_low' not in c:    # Better: init this in initialize() for efficiency.
        c.cash_low = c.portfolio.starting_cash

    if c.portfolio.cash < c.cash_low:   # Record only when a new low is hit.
        c.cash_low = c.portfolio.cash
        record(cash_low = int(c.cash_low))

def handle_data(context, data):
    cash(context, data)  # or call via schedule_function
                         #  (data is required there, which is why it's included as an argument)

Better yet, use PvR (profit vs. risk).
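PvR scales profit by the maximum dollars the algorithm ever actually put to work, rather than by starting capital. A stripped-down sketch of the idea (the full community version also accounts for shorts and intraday cash, which this omits):

def pvr(context, data):
    c = context
    if 'max_risk' not in c:     # better done once in initialize()
        c.max_risk = 0
    # Dollars currently deployed; exceeds starting cash when on margin.
    spent = c.portfolio.starting_cash - c.portfolio.cash
    c.max_risk = max(c.max_risk, spent)
    if c.max_risk > 0:
        profit = c.portfolio.portfolio_value - c.portfolio.starting_cash
        record(PvR=100.0 * profit / c.max_risk)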

Tip: If scale is a problem with multiple custom chart items, you can toggle them off and on.
Try it right now: click 'bond_exposure' above to turn it off, then back on.
If the remaining values are on a very different scale, the custom chart will re-scale for you.

+1 for the Alpha Architect website, and I see the Dr. takes part in discussions.

Hello,
The idea of downside protection is amazing;
I wonder if we could apply this idea to any trading algorithm to reduce its drawdown.
For example, we could treat the strategy's cumulative P&L as an asset A_t, apply the rules _MA_Rule and _TMOM_Rule to this asset, and rebalance into cash when needed.
But the issue is that we would need access to the time series of the original strategy's P&L in order to determine the new strategy = original strategy + DPM. So the original P&L is a fake P&L, used only to decide, according to the MA and TMOM rules, whether or not to trade the original strategy.
Does anyone have an idea of how to do this in the Quantopian framework?
Please feel free to ask if my idea is not clear.

Thanks
idriss

I've been working on this over the weekend. I accomplished it by recording all my desired positions, prior to any equity-curve trading, in a pandas.Series in the context. Then, every day, I calculate what the daily returns of that ideal portfolio would have been, compound those returns into a raw equity curve, and use that curve to decide what scaling factor to apply to the actual trades.

Although the assets in this code are different, the framework is that of Robust Asset Allocation from Alpha Architect, right? http://blog.alphaarchitect.com/2014/12/02/the-robust-asset-allocation-raa-solution/

Thanks, Simon, for your reply.
I will try coding this and share some code.
But does it work for you? Does it reduce your drawdown as expected?

thanks
idriss

It does reduce the drawdown, yes. In fact, it completely changes the character of the trading strategy, but not necessarily for the better. The whole thing presupposes that your trading strategy itself has some predictable momentum/autocorrelation.

def drawdown(log_rets):
    # Approximates the equity curve from cumulative log returns
    # and returns the *current* drawdown (the last value).
    equity = 1 + log_rets.cumsum()
    drawdowns = -(equity - equity.cummax()).fillna(0)
    return drawdowns.iloc[-1]

def update_phantom_returns(context, data):
    if len(context.phantom_allocations):
        closes = history(400, '1m', 'price')
        phantom_allocation = context.phantom_allocations.iloc[-1]
        closes = closes[phantom_allocation.index]
        time = get_datetime().time()
        # Keep only the bars at this same time of day, so diff() gives daily returns.
        # (Note x.time() must be called; comparing the bare method is always False.)
        same_time = closes.loc[closes.index.map(lambda x: x.time() == time)]
        rets = np.log(same_time).diff().iloc[-1]
        port_ret = (rets * phantom_allocation).sum()
        context.phantom_returns[get_datetime()] = port_ret
    else:
        context.phantom_returns[get_datetime()] = 0.0
    context.leverage[get_datetime()] = context.account.leverage
    context.assets[get_datetime()] = context.portfolio.positions_value
    context.equity[get_datetime()] = context.portfolio.portfolio_value
    record(leverage=context.account.leverage)
    context.actual_returns = np.log(context.equity).diff().fillna(0)
    actual_dd = drawdown(context.actual_returns)
    phantom_dd = drawdown(context.phantom_returns)
    record(actual_dd=actual_dd)
    record(phantom_dd=phantom_dd)

def equity_curve_leverage_factor(context, data):
    # Smooth the phantom equity curve with an EWMA.
    equity = pd.ewma(1 + context.phantom_returns.cumsum(), span=10, min_periods=10)
    # Scale down if our equity curve starts dropping below bottom quantiles.
    ratchets = np.arange(0.0, 0.5, 0.1)
    recent = pd.Series([pd.rolling_quantile(equity, quantile=x, window=60, min_periods=60).iloc[-1] for x in ratchets])
    # +1 for each quantile the current equity sits above, -1 for each it sits below.
    above_below = np.sign(-(recent - equity.iloc[-1])).fillna(0)
    half_max = MaxLeverage / 2.0                 # MaxLeverage: a constant defined elsewhere in the algorithm
    ratchet_value = MaxLeverage / len(ratchets)
    factor = min(MaxLeverage, max(0.0, half_max + above_below.sum() * ratchet_value))
    return factor
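(Side note for anyone porting this off Quantopian: pd.ewma and pd.rolling_quantile were removed in later pandas releases; rough modern equivalents are:)

equity = (1 + context.phantom_returns.cumsum()).ewm(span=10, min_periods=10).mean()
recent = pd.Series([equity.rolling(60, min_periods=60).quantile(x).iloc[-1] for x in ratchets])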

Maybe that will give you some ideas.

Cheers,

Simon.

I see the idea.
But how do you update context.phantom_allocations from context.portfolio.positions?

Thanks
idriss
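One possible answer, sketched under the assumption that context.phantom_allocations is initialized as an empty pd.DataFrame in initialize() (the code above indexes it as a DataFrame of target weights): rather than deriving it from positions, append a row of the intended weights at each rebalance, before any equity-curve scaling is applied.

def record_phantom_allocation(context, weights):
    # weights: pd.Series of target weights keyed by asset, captured at
    # rebalance time *before* the equity-curve leverage factor is applied.
    row = pd.DataFrame([weights], index=[get_datetime()])
    context.phantom_allocations = pd.concat([context.phantom_allocations, row])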

Here are the out-of-sample results of the algorithm since it was published. It's important to note that the tangible results of this model are most apparent over the long term.

@James, another way to plot this is to call get_backtest() to get the full backtest, and then call
bt.create_full_tear_sheet(live_start_date='YYYY-MM-DD') to plot the in-sample period in green and the out-of-sample period in red. I love pyfolio, though the API could use a page of (generated) documentation.
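For example, in a research notebook (the backtest ID and date below are placeholders):

bt = get_backtest('<your backtest id>')
bt.create_full_tear_sheet(live_start_date='2015-08-17')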

@James, how do you avoid small rebalancing trades every month? If I run the code, I see in the transaction log a few small buys or sells of 1-3 shares.
I want to avoid this, as transaction fees can then have a big impact.
Is there a way to set a minimum transaction size of, say, $500?
Thanks
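One way to do that (a sketch, not from the posted algorithm; the helper name and threshold are illustrative, and it uses the newer data.current API): compute the dollar value of each rebalancing trade and skip orders below the threshold.

def order_if_material(context, data, asset, target_pct, min_dollars=500):
    # Skip rebalancing trades whose dollar value is below min_dollars.
    price = data.current(asset, 'price')
    target_shares = int(context.portfolio.portfolio_value * target_pct / price)
    delta = target_shares - context.portfolio.positions[asset].amount
    if abs(delta) * price >= min_dollars:
        order(asset, delta)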