Isolating Specific Returns

Quantopian Community,

I have got something really special in this algorithm; unfortunately, it is dragged down by its common returns. Is there a way to isolate only the specific returns? If possible, could you share a code snippet that achieves this using the risk model? A constraint, etc.?

I have an annualised specific return of 33% (which gives a Sharpe of 7.2), with an annualised common return of -21%.


I know this has been asked before, but an answer wasn't really given.

Hi @Quant Trader:

A good starting point would be to check out the RiskModelExposure constraint, which you can pass in to order_optimal_portfolio. Here's an example of it in action:

from quantopian.algorithm import attach_pipeline, pipeline_output, order_optimal_portfolio
from quantopian.pipeline.experimental import risk_loading_pipeline
import quantopian.optimize as opt

def initialize(context):
    attach_pipeline(risk_loading_pipeline(), 'risk_loading_pipeline')

def before_trading_start(context, data):
    context.risk_loading_pipeline = pipeline_output('risk_loading_pipeline')

def place_orders(context, data):
    # Constrain our risk exposures. We're using the latest version of the default bounds.
    constrain_sector_style_risk = opt.experimental.RiskModelExposure(
        risk_model_loadings=context.risk_loading_pipeline,
        version=opt.Newest,
    )

    order_optimal_portfolio(
        objective=some_objective,  # replace with your own objective, e.g. opt.MaximizeAlpha(...)
        constraints=[constrain_sector_style_risk],
    )

By default, RiskModelExposure will place an 18% constraint on sector exposures, and a 36% constraint on style exposures. You can tweak the exposure limits to provide a different exposure cap for certain factors - for example, if you pass max_industrials=0.1 into the RiskModelExposure constructor, it'll cap your long industrial exposure at 10%. This post has a good example of the RiskModelExposure constraint in action.
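For instance, a minimal sketch of that override (max_industrials comes from the docs as described above; min_industrials is my assumption, mirroring the min_*/max_* keyword pairs used for the style factors later in this thread):

constrain_risk = opt.experimental.RiskModelExposure(
    risk_model_loadings=context.risk_loading_pipeline,
    version=opt.Newest,
    max_industrials=0.1,   # cap long industrials exposure at 10%
    min_industrials=-0.1,  # assumed symmetric cap on the short side
)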


Hi @Abhijeet Kalyan,

I have already applied these filters. What I don't understand is how my average factor exposures can be zero (or as close as makes no difference) while I am consistently losing money to these common style returns.

Alternatively, is there a way I can view common returns (or specific returns) on a chart by themselves, so I can work out the nature of the losses and see whether I can synthesize a hedge against my common return losses?

constrain_sector_style_risk = opt.experimental.RiskModelExposure(
    risk_model_loadings=context.risk_loading_pipeline,
    version=opt.Newest,
    min_momentum=-0.01,
    max_momentum=0.01,
    min_short_term_reversal=-0.01,
    max_short_term_reversal=0.01,
    min_value=-0.01,
    max_value=0.01,
    min_size=-0.01,
    max_size=0.01,
    min_volatility=-0.01,
    max_volatility=0.01,
)

order_optimal_portfolio(
    objective=objective,
    constraints=[
        constrain_gross_leverage,
        constrain_pos_size,
        market_neutral,
        sector_neutral,
        constrain_sector_style_risk,
    ],
)

In these two sections I should be limiting my factor risk. However, when I look at the notebook I generate off the back of this, I notice that I am still losing a lot of money in common returns. How can I stop this from happening?

Or, even better, is there a way to flat out exclude financial services from the mix? This would cut out the majority of my losses.

Hi @Quant Trader,

To view your specific returns (or common returns) in isolation, you could use the attributed_factor_returns property on the backtest object. This would get you a dataframe of your daily (not cumulative) attributed returns, which you can use to isolate a specific return stream - for example, bt.attributed_factor_returns['specific_returns']. We're also working on changes that will allow you to view your specific and common returns directly in the backtest UI, so stay tuned on that front!
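For example, in a research notebook (a minimal sketch; the backtest ID placeholder is yours to fill in):

bt = get_backtest('your_backtest_id')
specific = bt.attributed_factor_returns['specific_returns']  # daily returns
# Compound the daily stream to view cumulative specific returns.
specific.add(1).cumprod().sub(1).plot(title='Cumulative specific returns')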

For limiting your financial services exposure, you could try passing tighter-than-default min_financial_services and max_financial_services values to the RiskModelExposure constraint, like you did for the style factors.

How are common returns calculated? Can financial common returns be hedged with a long/short weighted XLF position, or is that not how common returns work?

And if common returns can be hedged with a long/short weighted XLF position, is there a way to see yesterday's financial common returns inside the algorithm and then use them to weight said position?

i.e.

order_target_percent(XLF, -financial_services_factor_exposure(1 day ago))

@Quant Trader I took a look at the performance attribution of this algo. It has an alert, "This algorithm has a relatively high turnover of its positions. As a result, performance attribution might not be fully accurate." I suggest that you use return-based performance attribution instead of using this position-based performance attribution.

For a return-based performance attribution, you could regress the algo daily returns on the daily common risk factor returns with combined L1 and L2 priors as regularizer. (http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.ElasticNet.html#sklearn.linear_model.ElasticNet)
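Something like this (a sketch; algo_returns is an assumed pd.Series of daily algo returns and factor_returns an assumed pd.DataFrame of daily common risk factor returns, already aligned by date; the hyperparameters are placeholders):

import pandas as pd
from sklearn.linear_model import ElasticNet

# Elastic net combines the L1 and L2 penalties in one regression.
model = ElasticNet(alpha=1e-4, l1_ratio=0.5, fit_intercept=True)
model.fit(factor_returns.values, algo_returns.values)

betas = pd.Series(model.coef_, index=factor_returns.columns)
common = factor_returns.mul(betas, axis=1).sum(axis=1)  # fitted common returns
specific = algo_returns - common                        # residual: specific returns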


Hello @Rene Zhang,

Thank you for your help, I managed to resolve the problem by making the algorithm run more than once per day.

# Run the factor computation and allocation repeatedly throughout the day.
for i in range(1, 300, 5):
    schedule_function(get_factor, date_rules.every_day(), time_rules.market_open(minutes=i))
    schedule_function(allocate, date_rules.every_day(), time_rules.market_open(minutes=i + 1))

This had a double benefit: it reduced the common return losses and simultaneously increased specific returns. Unfortunately, it isn't eligible for the Quantopian Open because it has a daily turnover of 1885.0%.

@Quant Trader, those are really impressive numbers.

Would you be so kind as to re-run your backtest's full tear sheet with the round_trips option turned on, as in:

bt.create_full_tear_sheet(round_trips=True)

Interested in viewing a few of those numbers. Thanks.

Hi Guy,

Is there a way to do that without regenerating the entire rest of the backtest? The notebook's been running for the last hour and hasn't finished yet; maybe it's due to the high turnover?

Alternatively, would you be happy for me to run the same backtest over a shorter time period (07-17 to 03-18) so the notebook can actually get somewhere?

@Quant Trader, you do not need to regenerate the whole backtest, only run the line supplied after loading the backtest in your notebook.

Your strategy is generating alpha, and it appears sustainable, since your cumulative returns log-chart has an increasing spread above the SPY benchmark. That is the alpha one should be looking for.

It's the processing that's taking the time. I've just reused the same backtest I loaded earlier, but the cell where I pasted in

bt.create_full_tear_sheet(round_trips=True)  

has had the little star in the top left corner for the last hour and a bit.

Then, not enough memory!

Memory peaked at 31% usage. I'm just going to run a shorter backtest with a lower rebalance frequency to lighten the load. The returns may be slightly different, but the gist should still be the same.

That should hopefully allow the round-trip analysis to load faster.

Hi Guy,

At long last (the thing has literally been running for the last 3 hours), here is the round trip analysis of a simplified version of the algorithm (I had to slightly reduce the rebalance rate to get it to process this side of 2018 :P).

Hope it shows you what you want to see.

@Quant Trader, those are fantastic numbers. Great job. Impressive.

Who would ever want some 5-10% return after seeing numbers like those?

Add a little leverage (5-10%), not much. Allow a slight positive market exposure (5-10%), and you should see your strategy fly even more. The added performance will cover the small leveraging fees. Doing this, your alpha will turn exponential.

The chart of importance is the cumulative returns log-chart, which shows the alpha spread smoothly increasing with time, indicating that your trading strategy does generate meaningful, increasing positive alpha.

The only question I have is: the Performance Relative to Common Risk Factors section gives positive results while all the cumulative return factors are negative. How could they add up to something positive?

@Guy Fleury,

The only question I have is: the Performance Relative to Common Risk Factors section gives positive results while all the cumulative return factors are negative. How could they add up to something positive?

I'm not entirely sure, to be honest; you'll have to ask Quantopian how that works. I would assume it's something to do with the specific returns I'm generating, which are independent of common factors?

Thanks for your suggestion about leverage, by the way; it seems to have disproportionately benefited the algorithm (I'm not entirely sure why).

@Quant Trader, the answer is simple.

The real alpha illustrated in your cumulative returns log-chart is compounded over the entire period. And since you are using a percent of equity betting system, any extra money made available will be put to use by increasing the bet size.

Which brings me to: I would have liked to see round_trips=True, to compare the numbers.

Somewhere in the Performance Relative to Common Risk Factors section there is something wrong. Someone from Q should provide answers.

Hi Guy,

I think this post from Quantopian answers your question.

https://www.quantopian.com/posts/new-tool-for-quants-the-quantopian-risk-model

Components of the Quantopian Risk Model

The deliberate, careful design of a risk model codifies a particular view of the market. The Quantopian Risk Model is designed to identify the particular risk exposures that are desired by our investor clients.

The risk model consists of a series of cascading linear regressions on each asset. In each step in the cascade, we calculate a regression, and pass the residual returns for each asset to the next step.

Sector returns - Our model has 11 sectors. A sector ETF is specified to represent each sector factor. Each stock is assigned to a sector. We perform a regression to calculate each stock's beta to its respective sector. A portion of each stock's return is attributable to its sector. The residual return is calculated and passed to the next step.

Style risk - We start with the residual from the sector return, above. We then regress the stock against the 5 style factors together. The five styles in the Quantopian risk model:

Momentum - The momentum factor captures return differences between stocks on an upswing (winner stocks) and the stocks on a downswing (loser stocks) over 11 months.
Company Size - The size factor captures return differences between big-cap stocks and small-cap stocks.
Value - The value factor captures return differences between expensive stocks and inexpensive stocks (measured by the ratio of a company's book value to the price of its stock).
Short-term Reversal - The short-term reversal factor captures return differences between stocks with strong recent losses that reverse (recent loser stocks) and stocks with strong recent gains that reverse (recent winner stocks) over a short time period.
Volatility - The volatility factor captures return differences between high volatility stocks and low volatility stocks in the market. Volatility can be measured over the historical long term or near term.
Once the sector and style components have all been removed, the residual is the specific return.
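In code, the cascade described above reads roughly like this for a single stock (a sketch; stock_rets, sector_rets, and style_rets are assumed daily-return inputs, not Quantopian API names):

import pandas as pd
from sklearn.linear_model import LinearRegression

# Step 1: regress the stock on its sector ETF; keep the residual.
beta_sector = stock_rets.cov(sector_rets) / sector_rets.var()
residual = stock_rets - beta_sector * sector_rets

# Step 2: regress that residual on the five style factors together.
style_fit = LinearRegression().fit(style_rets.values, residual.values)
residual = residual - style_fit.predict(style_rets.values)

# Whatever remains after both steps is the specific return.
specific_returns = residual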

From this, I'm pretty sure the explanation for my returns is that most of the returns I generate are independent of the common factors (i.e. it is all residual).

@Quant Trader, I would think not. It would mean your strategy is inversely correlated to all 16 factors.

If that were the case, it would give your strategy an even higher comparative value.

Have you tried the: round_trips=True thing? Still curious to see the numbers.

Here you go: the same simplified algorithm I used last time I showed you the round trips, but with 1.1x leverage this time.

The increase in returns I would still describe as disproportionate. The percent profitable figure has increased by one percentage point, from 55% to 56%, which I don't understand considering it's the exact same strategy.

I also don't understand why it can't be the case that my algorithm is inversely correlated to the 16 factors; doesn't the evidence point to this being the case?

@Quant Trader, there is an explanation for this, which you can find in the notebook:

Performance attribution is calculated based on end-of-day holdings and
does not account for intraday activity. Algorithms that derive a high
percentage of returns from buying and selling within the same day may
receive inaccurate performance attribution.

That completely slipped my mind, thanks for pointing that out!

I still don't understand how the leverage is influencing the percent profitable figure though?

@Quant Trader, like I said before, the alpha in your strategy is being compounded due to your method of play, albeit at a low rate, but still compounding. I have a formula for that somewhere. You also have an expanding bet size, which helps.

My questions are now centered around feasibility and sustainability.

Take your first notebook: its achieved CAGR is remarkable, outstanding.

It had a 567% CAGR. At that rate, a $10M portfolio would grow to over $130 billion in 5 years. By year 10, it would reach over $1 million billion. So, sustainability and feasibility are now the proper questions to ask. And because of those questions, what could you do to make it doable anyway, even if it is to a lesser extent? Planning what you want your strategy to do over the next few years will help you compensate for those issues, and improve your trading strategy even further.
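The compounding arithmetic behind those figures (pure arithmetic, nothing more):

initial = 10000000             # $10M starting portfolio
growth = 1 + 5.67              # 567% CAGR
print(initial * growth ** 5)   # ~1.3e11: over $130 billion by year 5
print(initial * growth ** 10)  # ~1.7e15: over $1 million billion by year 10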

The first thing to do is to run it with high market cap stocks only; at the moment it is running on QTradableStocksUS() and simply picking the best stocks available. If the alpha remains, it's still a viable strategy.

No one expects it to be able to achieve the 567% number; that would be remarkable (and impossible). What I'm going to do is live trade it with some demo money for a period of time to test it out of sample. I might also vary the base universe over and over again (small cap, mid cap, large cap, high P/E, low P/E, etc.) in order to test that there is actually something beyond over-fitting at play here. The logic behind the algorithm makes mathematical sense (in my mind at least), but it may just be a case of fortunate environments.

If I really wanted to put it to the test, a Monte Carlo system with a completely randomised universe, full of randomly generated time series the strategy can trade, could be a very effective tool, since it wouldn't actually be trading real stocks.

The one I ran with large cap stocks has been placed here. Obviously something still exists, but the returns are far more realistic; they're still not something to be sniffed at though:

I would say the factor still exists. There are a few problems, however: random leverage spikes, a high negative factor exposure to size, and largely negative daily sector factor exposures. It's probably a more realistic backtest.

I also ran it over the 2008 GFC (just to see how it performed out of sample); although the drawdown was slightly larger, once again it performs well.

Fantastic performance and returns! Just out of curiosity, is this a machine learning algorithm, and did you make sure that there is no forward-looking bias?

@James Villa,

This algorithm isn't based on machine learning; it makes use of a (as far as I am aware) self-discovered factor. There is no forward-looking bias to the best of my knowledge: it makes use of the QTradableStocksUS universe and doesn't make any attempt to access future data, so unless there's a problem with the Quantopian backtester, I don't see how it could come in.

A lot of the returns can, however, be attributed to small cap stocks. Once those are removed, the returns die off a bit, but even then it's still an acceptable algorithm (Sharpe of 3 or higher, depending on start date). Its returns really pick up when it's allowed to pick the best stocks in the universe it can.

@Quant Trader,

Thanks for your reply and info. With small caps, you can run into problems with shorting and liquidity. The one thing that strikes me the most is how the returns are negatively correlated with all 16 factors.

@James Villa

I think what has caused that is the fact that this strategy is intraday. As @Guy Fleury pointed out:

Performance attribution is calculated based on end-of-day holdings and
does not account for intraday activity. Algorithms that derive a high
percentage of returns from buying and selling within the same day may
receive inaccurate performance attribution.

It may not be the case that it's actually negatively correlated with all factors. Of course, it would be great if it was :D

did you make sure that there is no forward looking bias?

I forgot to mention, I've been live trading it for about a week now (demo cash) to see if it still works, and it's performing to the same standard, so I think we can rule out look-ahead bias. Of course, one week isn't a large enough sample size.

The major problem I've been having is Quantopian's speed. When I increase the rebalance frequency (4893.2% vs ~1800% daily turnover), I get significantly higher returns.

@Quant Trader, and you should. This would be confirmed with:

bt.create_full_tear_sheet(round_trips=True)

;)

@Guy Fleury

I would like to do that every time, but it takes (what feels like) a million years to run that notebook due to the sheer number of positions it takes :D

@Quant Trader, there were two words of importance in my post. The first: should, as if it was evident that it should do so. The second: confirm, as in yes, it does. I would go for the answers. You should want to know what the limits are; how far you can push it will, in a way, set your limits. Then you can pull back to a level where you feel more comfortable (risk-wise, that is).

IMHO.

I've spent some time tinkering with the base algorithm, and this is the final product. I'm very happy with it. I've sacrificed the outlandish returns for greater security, which comes in the form of significantly reduced common return losses, periods in which the sector and style factor exposures are basically 0, and an (almost) consistent beta of 0.0.

What I found particularly interesting, though, is that the annual returns also seem to be increasing exponentially, implying this is not only a source of alpha, but a source of alpha which is becoming far easier to exploit.

Quant Trader,

I am intrigued - would you be willing to share what stock universe you are using (QTradableStocksUS?) and what your assumptions are for trading costs / slippage? It has been highlighted on this forum previously that you have to be very careful with HFT algos on Quantopian owing to the difficulty of accurately modelling the bid/ask spread (and, indeed, that it is possible to write an algo which is seemingly astronomical but in reality is simply catching the bid/ask).

If the algo is robust to universe changes (i.e. focusing on tradable and highly liquid stocks) and is scalable (produces a similar curve at $10m+ starting capital) whilst also incorporating conservative assumptions for slippage, you may just be on your way to becoming a billionaire.

Also, just to note, I would imagine that the "almost exponential" profile of the curve is more likely a compounding effect than the alpha source becoming easier to exploit over time.

Will

@Will van Es

Universe: QTradableStocksUS
Slippage: set_slippage(slippage.FixedBasisPointsSlippage())
Commission: (nothing set here, so the Quantopian default)

Return Curve: Similar curve at $10m+ but begins to tail off at higher values ($100m+)

When I was referring to the exponential growth in annual returns, I wasn't talking about the cumulative return curve, but the annual returns section lower down (between monthly returns and distribution of monthly returns).

I have re-run the algorithm with only the top 100 most liquid stocks allowed (filtered daily); the returns remain, though not as smooth (4.9 Sharpe vs 11.4 Sharpe).

@Guy Fleury,

As you've always asked for round_trips, here's a notebook with that included.

@Quant Trader, outstanding equity curve. I especially liked the 0.07 Gross Leverage. You have something valuable there. Hope Point72 sees the inherent benefits of adding your strategy to their mix.

Again, well done.

@Quant Trader,
Since you are doing intraday trading and the Q/Pyfolio backtester/metrics tools are not really set up for that, I believe you should run the pvr and pvr_chart tools that are in the last algo of @Blue's thread:

https://www.quantopian.com/posts/pvr

There he records max metric values on a minutely basis, and then reports the maxes daily.
Perhaps check out his other posts on the metrics subject also, as I've always found them useful.
A while ago, we tried some intraday algos, but always found measuring the results a problem on the Q platform.

Your results are great!...so good luck!
alan

2016-12-29 19:10 _pvr:108 INFO PvR 1.1042 %/day cagr 0.156 Portfolio value 53750412 PnL 3750412
2016-12-29 19:10 _pvr:109 INFO Profited 3750412 on 2695544 activated/transacted for PvR of 139.1%
2016-12-29 19:10 _pvr:110 INFO QRet 7.50 PvR 139.13 CshLw 47508344 MxLv 0.05 MxRisk 2695544 MxShrt -2695544
2017-06-30 19:10 _pvr:108 INFO PvR 1.1024 %/day cagr 0.162 Portfolio value 58075449 PnL 8075449
2017-06-30 19:10 _pvr:109 INFO Profited 8075449 on 2907004 activated/transacted for PvR of 277.8%
2017-06-30 19:10 _pvr:110 INFO QRet 16.15 PvR 277.79 CshLw 47508344 MxLv 0.05 MxRisk 2907004 MxShrt -2907005
2017-12-29 19:10 _pvr:108 INFO PvR 0.9560 %/day cagr 0.149 Portfolio value 61567479 PnL 11567479
2017-12-29 19:10 _pvr:109 INFO Profited 11567479 on 3200957 activated/transacted for PvR of 361.4%
2017-12-29 19:10 _pvr:110 INFO QRet 23.13 PvR 361.38 CshLw 47508344 MxLv 0.06 MxRisk 3200957 MxShrt -3200957
2018-03-27 21:00 _pvr:108 INFO PvR 0.9970 %/day cagr 0.154 Portfolio value 64116231 PnL 14116231
2018-03-27 21:00 _pvr:109 INFO Profited 14116231 on 3240047 activated/transacted for PvR of 435.7%
2018-03-27 21:00 _pvr:110 INFO QRet 28.23 PvR 435.68 CshLw 47508344 MxLv 0.06 MxRisk 3240047 MxShrt -3240047
2018-03-27 21:00 pvr:185 INFO 2016-07-01 to 2018-03-27 $50000000 2018-03-29 00:43 US/Pacific

@Alan Coppola As far as I'm aware there is nothing here I should be worried about: PvR is significantly better than realised returns, and there are no leverage spikes (which I was somewhat worried about).

@Quant Trader, congratulations! And thank you for sharing your results.

When you say 'live demo' trading, do you mean Q's paper-trading environment, or an independent one via a broker? If it's Q's environment, there may not be any look-ahead bias, but do you think it's possible that you've found a 'bug' in their trading simulation, giving you unrealistic fills that you wouldn't necessarily get in the real live market (e.g. being able to buy the bid and sell the offer w/o any 'takers' crossing the spread)?

Also, you don't by any chance have a negative value for the 'commission' and/or 'slippage' costs? Sorry, but I had to ask. :)

I do hope you've found a true alpha factor that no-one else knows about, but it does seem a bit too good to be true in my view, and strange that market makers and other ultra low-latency HFT firms haven't already taken advantage of it. I hope I'm wrong though!

Congrats again and all the best!

Joakim

@Joakim Arvidsson (Cream Mongoose)

I'm 100% sure that I haven't got a negative value for commission or slippage; as I said, I'm using the default for commission, and FixedBasisPointsSlippage() for the slippage.

There is a possibility that there's a bug in the backtesting environment, but it's also working in the Quantopian Live Trader (another one of their environments) so I think it's unlikely that the same error transfers across.

With regards to it being an HFT strategy, I wouldn't describe it as that; the majority of the high turnover is caused by me actively having to hedge common factor exposures on a minutely basis. If you go back up to the top, my turnover was minimal, but I was leaking money everywhere from common factors. The turnover increase was my answer to that. I would describe it as a normal quant strategy whose turnover I had to greatly increase to hedge its risks.

Of course, it's hard to model market impact with a backtester, so it could be getting unrealistic fills. But even if it is, it's managing to turn a profit trading only the most liquid stocks, which should in theory allow the algorithm to trade in real life with very little impact.

@Quant Trader,

I'm curious to find out whether you can change your algo's frequency to a daily timeframe instead of intraday and see if the trading logic still holds. If it does, I'll be truly impressed. In my mind, a truly robust trading system will do well across various frequencies/timeframes. Thanks.

@James Villa

If you go right up to the top, that strategy was on the daily timeframe. It still performs, but not as well; the reason I made it intraday was to hedge out common factor exposure.

I've started getting an error though.

No JSON object could be decoded

What's going on with that? I'm getting these errors on notebooks I've already run with no problem.

@Quant Trader,

Is your algo reading a JSON file from an external source? The above error seems to imply that.

It isn't. I think the error is on Quantopian's end; it's not something that happened before, and it even happens when re-running a notebook which worked earlier.

Even this algorithm throws a 'No JSON object could be decoded' error when I run a notebook on its backtest.

Hi Quant Trader,

Thanks for the heads up. We've reproduced this error and are investigating further.

Thanks,
Josh


I get this error too when running the 'Contest Criteria Check' notebook:

ValueError                                Traceback (most recent call last)
<ipython-input> in <module>()
      1 # Replace the string below with your backtest ID.
----> 2 bt = get_backtest('[removed]')

/build/src/qexec_repo/qexec/research/api.py in get_backtest(backtest_id)
    116         client.get_sqlbacktest(backtest_id),
    117         progress_bar,
--> 118         backtest_id,
    119     )
    120

/build/src/qexec_repo/qexec/research/results.py in from_stream(cls, result_iterator, progress_bar, algo_id)
    591     risk_packet = None
    592
--> 593     for msg in result_iterator:
    594         prefix, payload = msg['prefix'], msg['payload']
    595

/build/src/qexec_repo/qexec/research/web/client.py in get_sqlbacktest(self, backtest_id)
    132         with closing(resp):
    133             for msg in resp.iter_lines():
--> 134                 yield loads(msg)
    135
    136     def _make_get_live_algo_request(self, live_algo_id):

/usr/lib/python2.7/json/__init__.pyc in loads(s, encoding, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
    337         parse_int is None and parse_float is None and
    338         parse_constant is None and object_pairs_hook is None and not kw):
--> 339         return _default_decoder.decode(s)
    340     if cls is None:
    341         cls = JSONDecoder

/usr/lib/python2.7/json/decoder.pyc in decode(self, s, _w)
    362
    363         """
--> 364         obj, end = self.raw_decode(s, idx=_w(s, 0).end())
    365         end = _w(s, end).end()
    366         if end != len(s):

/usr/lib/python2.7/json/decoder.pyc in raw_decode(self, s, idx)
    380             obj, end = self.scan_once(s, idx)
    381         except StopIteration:
--> 382             raise ValueError("No JSON object could be decoded")
    383         return obj, end

ValueError: No JSON object could be decoded

Hi, we've shipped a fix for the issue that was causing the get_backtest() call to fail. These notebooks should run smoothly for you now.

We're sorry for the inconvenience.

Thanks
Josh

https://www.quantopian.com/posts/any-suggestions-for-isolating-a-specific-common-return

Any suggestions? At this point I seem to be getting unbelievably lucky when it comes to my algorithms having some really impressive hidden features (it's not intentional).

Two weeks into live trading, here are the results:

2018-04-12 21:00 pvr: INFO 2018-03-29 to 2018-04-12 $10000000 2018-04-12 04:12 US/Pacific
Runtime 9 hr 2.2 min
2018-04-12 21:00 _pvr: INFO QRet 0.97 PvR 23.44 CshLw 9708565 MxLv 0.05 MxRisk 412091 MxShrt -412092
2018-04-12 21:00 _pvr: INFO Profited 96614 on 412091 activated/transacted for PvR of 23.4%
2018-04-12 21:00 _pvr: INFO PvR 2.3445 %/day cagr 0.274 Portfolio value 10096614 PnL 96614
2018-04-11 21:00 pvr: INFO 2018-03-29 to 2018-04-11 $10000000 2018-04-11 04:23 US/Pacific
Runtime 8 hr 51.7 min
2018-04-11 21:00 _pvr: INFO QRet 0.85 PvR 20.65 CshLw 9708565 MxLv 0.05 MxRisk 412091 MxShrt -412092
2018-04-11 21:00 _pvr: INFO Profited 85111 on 412091 activated/transacted for PvR of 20.7%
2018-04-11 21:00 _pvr: INFO PvR 2.2948 %/day cagr 0.268 Portfolio value 10085111 PnL 85111
2018-04-10 21:00 pvr: INFO 2018-03-29 to 2018-04-10 $10000000 2018-04-10 04:09 US/Pacific
Runtime 9 hr 5.2 min
2018-04-10 21:00 _pvr: INFO QRet 0.76 PvR 18.35 CshLw 9708565 MxLv 0.05 MxRisk 412091 MxShrt -412092
2018-04-10 21:00 _pvr: INFO Profited 75602 on 412091 activated/transacted for PvR of 18.3%
2018-04-10 21:00 _pvr: INFO PvR 2.2932 %/day cagr 0.268 Portfolio value 10075602 PnL 75602
2018-04-09 21:00 pvr: INFO 2018-03-29 to 2018-04-09 $10000000 2018-04-09 04:27 US/Pacific
Runtime 8 hr 47.8 min
2018-04-09 21:00 _pvr: INFO QRet 0.64 PvR 15.61 CshLw 9708565 MxLv 0.05 MxRisk 412091 MxShrt -412092
2018-04-09 21:00 _pvr: INFO Profited 64339 on 412091 activated/transacted for PvR of 15.6%
2018-04-09 21:00 _pvr: INFO PvR 2.2304 %/day cagr 0.260 Portfolio value 10064339 PnL 64339
2018-04-06 21:00 pvr: INFO 2018-03-29 to 2018-04-06 $10000000 2018-04-06 04:19 US/Pacific
Runtime 8 hr 55.2 min
2018-04-06 21:00 _pvr: INFO QRet 0.55 PvR 13.40 CshLw 9708472 MxLv 0.05 MxRisk 412091 MxShrt -412092
2018-04-06 21:00 _pvr: INFO Profited 55209 on 412091 activated/transacted for PvR of 13.4%
2018-04-06 21:00 _pvr: INFO PvR 2.2329 %/day cagr 0.260 Portfolio value 10055209 PnL 55209
2018-04-05 21:00 pvr: INFO 2018-03-29 to 2018-04-05 $10000000 2018-04-05 04:26 US/Pacific
Runtime 8 hr 48.0 min
2018-04-05 21:00 _pvr: INFO QRet 0.45 PvR 10.82 CshLw 9712598 MxLv 0.05 MxRisk 412023 MxShrt -412023
2018-04-05 21:00 _pvr: INFO Profited 44562 on 412023 activated/transacted for PvR of 10.8%
2018-04-05 21:00 _pvr: INFO PvR 2.1631 %/day cagr 0.251 Portfolio value 10044562 PnL 44562
2018-04-04 21:00 pvr: INFO 2018-03-29 to 2018-04-04 $10000000 2018-04-04 04:23 US/Pacific
Runtime 8 hr 51.3 min
2018-04-04 21:00 _pvr:INFO QRet 0.35 PvR 8.66 CshLw 9712598 MxLv 0.05 MxRisk 402375 MxShrt -402375
2018-04-04 21:00 _pvr:INFO Profited 34839 on 402375 activated/transacted for PvR of 8.7%
2018-04-04 21:00 _pvr:INFO PvR 2.1646 %/day cagr 0.245 Portfolio value 10034839 PnL 34839
2018-04-03 21:00 pvr:INFO 2018-03-29 to 2018-04-03 $10000000 2018-04-03 04:22 US/Pacific
Runtime 8 hr 52.9 min
2018-04-03 21:00 _pvr:INFO QRet 0.28 PvR 6.89 CshLw 9712598 MxLv 0.05 MxRisk 402375 MxShrt -402375
2018-04-03 21:00 _pvr:INFO Profited 27710 on 402375 activated/transacted for PvR of 6.9%
2018-04-03 21:00 _pvr:INFO PvR 2.2955 %/day cagr 0.262 Portfolio value 10027710 PnL 27710
2018-04-02 21:00 pvr:INFO 2018-03-29 to 2018-04-02 $10000000 2018-04-02 04:30 US/Pacific
Runtime 8 hr 44.2 min
2018-04-02 21:00 _pvr:INFO QRet 0.18 PvR 4.58 CshLw 9695138 MxLv 0.05 MxRisk 401540 MxShrt -401540
2018-04-02 21:00 _pvr:INFO Profited 18382 on 401540 activated/transacted for PvR of 4.6%
2018-04-02 21:00 _pvr:INFO PvR 2.2889 %/day cagr 0.260 Portfolio value 10018382 PnL 18382
2018-03-29 21:00 pvr:INFO 2018-03-29 to 2018-03-29 $10000000 2018-03-29 07:13 US/Pacific
Runtime 6 hr 2.0 min
2018-03-29 21:00 _pvr:INFO QRet 0.07 PvR 1.97 CshLw 9744756 MxLv 0.05 MxRisk 380081 MxShrt -380082
2018-03-29 21:00 _pvr:INFO Profited 7488 on 380081 activated/transacted for PvR of 2.0%
2018-03-29 21:00 _pvr:INFO PvR 1.9702 %/day cagr 0.208 Portfolio value 10007488 PnL 7488
2018-03-29 14:58 pvr:INFO 2018-03-29 to 2018-03-29 $10000000 2018-03-29 07:13 US/Pacific

@Quant Trader,

Had to dig into my old notebooks and found an algo that had a similar phenomenon to yours: huge negative common returns with huge specific returns on a daily timeframe. Can you tell me how you hedge out those negative common returns? I know you had to change your timeframe from daily to intraday, but how?

I dealt with the problem by greatly increasing the daily turnover, though I think it depends on the nature of your alpha source. My strategy benefited from reduced exposure to market movements, so reducing the time I held positions reduced my common return losses. I don't know the nature of your strategy, though; your returns could be tied to common returns.

The method I used was to call the rebalance function multiple times per day using:

# Alternate placing and closing orders every other minute through the day.
for i in range(1, 300, 2):
    schedule_function(place_orders, date_rules.every_day(), time_rules.market_open(minutes=i))
    schedule_function(close_orders, date_rules.every_day(), time_rules.market_open(minutes=i + 1))

The other thing (which I was beginning to experiment with but haven't got working yet) was to create a pipeline which returns the factor exposures for the stocks in the universe, and then to attempt to negate the exposures by ensuring that the sum of the exposures of the stocks in your portfolio is equal to 0 (or at least those that you want to be 0),

e.g.

from quantopian.pipeline import Pipeline
from quantopian.pipeline.filters import QTradableStocksUS

def make_pipeline():
    # ConsumerDefensive() is my factor for consumer-defensive exposure;
    # flip its sign so the combined factor leans against that exposure.
    defensive_exposure = ConsumerDefensive()
    combined_factor = -defensive_exposure.zscore()

    return Pipeline(
        columns={
            'combined_factor': combined_factor,
        },
        screen=QTradableStocksUS(),
    )

and then trading on the output. (I haven't got very far with this yet because I've been quite busy recently.) I would assume this is roughly how the Quantopian Risk Model works, but I'm hoping you can make it more effective by running it more often.
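One way to act on that pipeline output at order time is to pin the portfolio's net loading on the custom factor near zero. A sketch under assumptions: 'exposure_pipe' is a hypothetical attached-pipeline name, and opt.FactorExposure accepts a loadings DataFrame plus per-factor bounds as in the Optimize API docs:

import quantopian.optimize as opt
from quantopian.algorithm import order_optimal_portfolio, pipeline_output

def place_orders(context, data):
    pipe = pipeline_output('exposure_pipe')  # hypothetical pipeline name

    # Constrain the portfolio's net loading on the custom factor to ~0.
    neutralize_custom = opt.FactorExposure(
        loadings=pipe[['combined_factor']],
        min_exposures={'combined_factor': -0.01},
        max_exposures={'combined_factor': 0.01},
    )

    order_optimal_portfolio(
        objective=opt.MaximizeAlpha(pipe['combined_factor']),
        constraints=[neutralize_custom],
    )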

I guess we'll understand better once the risk model white paper is published, but my impression is that the risk model constraint (and beta constraint) relies on trailing indicators and assumes that out-of-sample there will be stability (but of course there isn't). It works well for sectors, since stocks don't jump from sector to sector very much; but for style risks it may work at a gross level, with no guarantee for a given slice of the universe and a specific algo that it will project forward correctly based on trailing data. It is better than nothing, though, I suppose.

@Quant Trader,

Thanks for your reply. I think I want to keep my daily timeframe, as I am not confident that Q's intraday backtesting framework would give a realistic outcome, as @Joakim Arvidsson pointed out. Maybe I'll try your experiment with negating factor exposures. Also, do you use the MaximizeAlpha or TargetWeights construct for optimization?

I've looked into this a bit more and I think I've worked it out.
Quantopian defines the momentum factor as:

The difference in return between assets on an upswing and a downswing over 11 months.

They do the same for all the other factors. What I am assuming is that they have created an algorithm for each factor and then calculate the correlation of your algorithm to these 'factor algorithms' to generate your factor exposures.

I am then assuming that your sector exposure is the stocks from each sector that you hold as a percentage of your total holdings.

They define common returns as:

Returns that are attributable to common risk factors. There are 11 sector and 5 style risk factors that make up these returns.

So what I am guessing is, it's:

Momentum Factor Algorithm = MF
Value Factor Algorithm = VF
Size Factor Algorithm = SF
Volatility Factor Algorithm = VOF
Short-Term Reversal Factor Algorithm = STF

Daily Common Return = ß(Momentum)*MF + ß(Size)*SF + etc...

and Common Returns is just the summation of this calculation over the time period you ran the algorithm for.
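That bookkeeping in code (a sketch; betas is an assumed pd.Series of per-factor exposures and factor_rets an assumed pd.DataFrame of daily 'factor algorithm' returns sharing the same labels):

import pandas as pd

# Beta-weighted sum of the daily factor-algorithm returns...
daily_common = factor_rets.mul(betas, axis=1).sum(axis=1)
# ...summed over the period the algorithm ran for.
common_returns = daily_common.sum()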

What I'm pretty sure specific returns are is the returns from stocks which outperform the average basket. As (I believe) they've created these factor algorithms based on the difference between two baskets of stocks: those which (using momentum as an example) are on an upswing, and those that are on a downswing. Because a basket is used, what you get is an average difference. However, if you were to take the difference between the stock which is the most 'upswingy' and the stock which is the most 'downswingy', I think the difference in performance between this pairing (or any other non-basket pairing) and the 'momentum factor algorithm' would be counted as specific returns.

This is just me trying to work it out based on what Quantopian has given me; of course, I'm still not sure.

So here's what I reckon you could do. If you could find out the basket of stocks the 'factor algorithms' are holding, you could tell your algorithm to buy those stocks in the weighting assigned by the overall logic of your original algorithm (i.e. if your factor exposure would have been 10% when running your algorithm, you instead buy a 10% exposure to the momentum factor algorithm), and you would completely remove specific returns from the equation. Or, if you wanted to completely remove common returns, you could find out what your factor exposure is, then take an opposite position in the basket of stocks assigned to that factor, leaving only the difference in performance between the two (i.e. the specific returns).

I assume this is what the Quantopian Risk Model does. I would appreciate someone from Quantopian weighing in though as I'm still fairly uncertain as to how it works.

@QT,

I have been experimenting with negating factor exposures, and with combinations thereof, to get an optimal risk/return from my alpha logic. I think we are on the same page with regard to how Q is modeling the various risk components to form their common returns. So, in theory, anything that is not accounted for by way of correlation to the various common return streams is specific returns. Still trying different weighting schemes, but slowly getting there.

@James Villa,

I've spent a bit of time writing an algorithm which attempts to eradicate the specific returns. I've done this using my best interpretation of how the Quantopian risk model works.

It's not perfect, but it works in my backtests. The remaining problems are caused by:
a) not knowing the size of the basket Quantopian uses for calculating factor returns, and
b) the stocks in the basket having their own sector exposures, which I haven't accounted for because doing so would lead to an infinite feedback loop.

@QT,

I've spent a bit of time writing an algorithm which attempts to eradicate the specific returns.

Wouldn't you want to do the opposite and eradicate the common returns? Maybe it's just a typo on your part. What I think Q is looking for are specific returns attributable to alpha factors that are not attributable to common risk factors.

If you can isolate only the common returns (the returns which Quantopian defines), you can then go long your underlying strategy and short this common-returns strategy; the resulting performance is the isolated specific returns.

Fine, if it works for you. I just see it the other way.

Quantopian defines specific returns as the difference between the strategy returns and the common returns. The only way you can isolate specific returns is by eliminating the common returns (making the common returns curve flat).

For example,

Common Returns Unconstrained

Common Returns Constrained

It would be better if the algorithm itself were good, but the constraint (when applied) does significantly reduce the common returns. It would be more effective if I knew more about what Quantopian uses to model the factors.

@QT,

The only way you can isolate specific returns is by eliminating the common returns (making the common returns curve flat).

Isn't this exactly what I said, "Wouldn't you want to do the opposite and eradicate the common returns?"

Yes, but to do that you first need to find a way to isolate the common returns, which is what I tried to do in the algorithm above.

You might as well wait until they release the whitepaper on the Optimize API with risk loading constraints, because you are assuming their computation of common returns from their narrative, which might not reveal everything. See the discussion in this thread: ...new-tool-for-quants-the-quantopian-risk-model

@Quant Trader. Very nice. I have to wonder, though: why would you settle for less than 10% annual returns when you have seen a 12164.1% annual return with a 22.83 Sharpe and only 1.5% drawdown? Was there something fundamentally wrong with the high-performance version? I'd go for it and try to fix the problems without sacrificing all the returns. That's just me though. Good job anyway; that's the highest Sharpe I can remember seeing.

I didn't read the entire thread, but it seems built on a misunderstanding (or at least the opposite understanding from the one I have).

Returns = Common returns + specific returns

Common returns are things like beta, big minus small, momentum, etc., all the "common" risk factors. Therefore, specific returns are whatever is not accounted for by Quantopian's common risk factor models.

Therefore, negative common returns don't mean that common factors are subtracting from your performance. They mean that your returns go beyond merely having no correlation to the common factors and are instead inversely correlated to the common risk factor models, which is what you would expect if your returns come from a source independent of beta, big minus small, momentum, etc.

Correct me if I'm wrong.