New Feature: Comprehensive Backtest Analysis (pyfolio)

Today, we are happy to announce pyfolio, our open source library for performance and risk analysis! We originally created it as an internal tool to help us vet algorithms for consideration in the Quantopian hedge fund. Pyfolio makes it easy to generate plots and statistics for a stock, portfolio, or algorithm, and it's available now on our research platform, where you can use it to dive deep into your backtests. For more information, see the pyfolio website.

Pyfolio is also open source! Here's its GitHub page.

We've attached an example notebook below (this is also available in the tutorials folder within research).

Contributors:
Gus Gordon ([email protected])
David Edwards ([email protected])
Justin Lent ([email protected])
Sepideh Sadeghi ([email protected])
Jessica Stauth ([email protected])
Karen Rubin ([email protected])
Thomas Wiecki ([email protected])

Disclaimer

The material on this website is provided for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation or endorsement for any security or strategy, nor does it constitute an offer to provide investment advisory services by Quantopian. In addition, the material offers no opinion with respect to the suitability of any security or specific investment. No information contained herein should be regarded as a suggestion to engage in or refrain from any investment-related course of action as none of Quantopian nor any of its affiliates is undertaking to provide investment advice, act as an adviser to any plan or entity subject to the Employee Retirement Income Security Act of 1974, as amended, individual retirement account or individual retirement annuity, or give advice in a fiduciary capacity with respect to the materials presented herein. If you are an individual retirement or other investor, contact your financial advisor or other fiduciary unrelated to Quantopian about whether any given investment idea, strategy, product or service described herein may be appropriate for your circumstances. All investments involve risk, including loss of principal. Quantopian makes no guarantees as to the accuracy or completeness of the views expressed in the website. The views are subject to change, and may have become unreliable for various reasons, including changes in market conditions or economic circumstances.

100 responses

Here is the accompanying backtest used to create the notebook.

There's also an early release of pyfolio available on the Python Package Index (PyPI). If you have pip, the standard Python package installer, you can install the release with:

pip install pyfolio  

This is a tremendous initiative. Makes me happy, thanks!!

This project is a fantastic move towards portfolio analysis in a statistical fashion! I really appreciate the effort the whole team has put into it!

I'm most interested in the Bayesian analysis functions this package provides. Is there a way that I can get those Bayesian analysis results in numbers besides plots? More specifically, are there functions to extract Bayesian analysis metrics just like the following?

returns, positions, transactions, gross_lev = pf.utils.extract_rets_pos_txn_from_zipline(backtest)

AD & Andrew, thanks for the encouraging words!

@Andrew: The Bayesian part is still the most experimental (although it has been quite useful to us already). There are functions that run the models individually and return a trace object containing samples from the posterior. A histogram over these samples gives the posterior distribution that is shown. For more details, see the docstrings in pyfolio.bayesian (https://github.com/quantopian/pyfolio/blob/master/pyfolio/bayesian.py). We still have to publish the compiled docstrings as HTML. For example, you could run:

# Run model that assumes returns to be T-distributed  
trace = pf.bayesian.run_model('t', returns)  
# Check what frequency of samples from the sharpe posterior are above 0.  
print('Probability that Sharpe ratio > 0 = {}'.format((trace['sharpe'] > 0).mean()))  
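A minimal stand-alone sketch of the same idea, using NumPy samples in place of an actual PyMC trace (the `sharpe_samples` array and its parameters here are synthetic, purely to illustrate turning posterior samples into a probability):

```python
import numpy as np

# Synthetic stand-in for trace['sharpe']: samples from a posterior
# centered at 0.8 with spread 0.5 (illustrative numbers only).
rng = np.random.default_rng(42)
sharpe_samples = rng.normal(loc=0.8, scale=0.5, size=10_000)

# The posterior probability that Sharpe > 0 is simply the fraction
# of posterior samples above 0.
p_positive = (sharpe_samples > 0).mean()
print('Probability that Sharpe ratio > 0 = {:.3f}'.format(p_positive))
```

The same pattern works for any quantity in the trace: threshold the samples, then take the mean of the boolean array.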

@Thomas: The doc string and source code are extremely helpful! I really appreciate your great info.

Very awesome. Additional analytics tools are a huge help. I'll dig into this right away. Thank you, folks!

Amazing!

Also note that pyfolio has functions that allow you to calculate Sharpe, Beta, etc.: https://github.com/quantopian/pyfolio/blob/master/pyfolio/timeseries.py#L346
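For reference, these metrics are also simple to compute by hand; a self-contained sketch with synthetic data (using the standard formulas, not pyfolio's actual implementation):

```python
import numpy as np
import pandas as pd

# Toy daily returns for an algo and a benchmark (synthetic data).
rng = np.random.default_rng(0)
bench = pd.Series(rng.normal(0.0004, 0.01, 252))
algo = 0.5 * bench + pd.Series(rng.normal(0.0003, 0.005, 252))

# Annualized Sharpe ratio: mean / std of daily returns, scaled by sqrt(252).
sharpe = algo.mean() / algo.std() * np.sqrt(252)

# Beta to the benchmark: covariance with the benchmark / benchmark variance.
beta = algo.cov(bench) / bench.var()

print('Sharpe: {:.2f}, beta: {:.2f}'.format(sharpe, beta))
```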


Nice!! While the tear sheets are running for me, I am still getting this error below - any idea what I am doing wrong? backtest ID#55dbf2424834f60da9d59f2c (feel free to access it if you can)

/usr/local/lib/python2.7/dist-packages/pyfolio/utils.py:140: UserWarning: Could not update cache /usr/local/lib/python2.7/dist-packages/pyfolio/data/factors.csv.Exception: [Errno 13] Permission denied: '/usr/local/lib/python2.7/dist-packages/pyfolio/data/factors.csv' UserWarning)
/usr/local/lib/python2.7/dist-packages/matplotlib/cbook.py:133: MatplotlibDeprecationWarning: The "loc" positional argument to legend is deprecated. Please use the "loc" keyword instead. warnings.warn(message, mplDeprecation, stacklevel=1)

Umar: Those are just warnings which you can safely ignore. We'll remove them eventually.

Has anyone looked into exporting tear sheets from the IPython Notebook?

Tyler: You can export to various formats. What I do sometimes is to run a tear sheet (or multiple) and then do Notebook->Download As->HTML which contains everything. Or perhaps you mean something else?

Thomas: Thanks for the quick reply. I was thinking more of a presentation format that can easily be printed, maybe downloading to PDF via LaTeX? I've been getting an error when trying that, though. Any thoughts?

Tyler: You can also try Notebooks -> Print Preview and then save to any format your browser supports.

Very.... "revealing" analysis. Thank you very much for that!

How do I find my backtest's ID for use in the Pyfolio notebook?

Open your backtest and you can see it in the browser's URL:

https://www.quantopian.com/algorithms/xxxxxxxxx/yyyyyyyyyy

backtestid = yyyyyyyyyy
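That is, the backtest ID is the last path segment of the URL; a trivial sketch of extracting it (using the placeholder URL from above):

```python
# Extract the backtest ID as the last path segment of the algorithm URL.
url = 'https://www.quantopian.com/algorithms/xxxxxxxxx/yyyyyyyyyy'
backtest_id = url.rstrip('/').split('/')[-1]
print(backtest_id)  # prints 'yyyyyyyyyy'
```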

Thanks, Luca!

Tim, you can also run a tear sheet on any shared algorithm. The algo ID can be found in the first row of any forum-shared algorithm's source code.

I notice that the CAGR, Sharpe, Sortino, alpha, beta, and other metrics reported in pyfolio are quite different from those reported at the top of the Full Backtest results.

If someone would explain this variation I would appreciate that!

It would also be useful to put the CAGR on the top of the Full Backtest, yes?

Thank you.

Tom, that's a great tool. I just used it to compare Jamie Lunn's long only volatility strategy to my 3-asset class mod. In the process, I think I demonstrated a nice application of pyfolio. I just posted my side-by-side comparison in the "worthy of Q fund" thread. I think once developers use pyfolio, they'll never go back to "...twiddling with parameters in a backtest," as Simon Thornington recently wrote.

Thanks Derek. You can also run it on a zipline algorithm run on research. See here for more information on how to do that: http://quantopian.github.io/pyfolio/zipline_algo_example/

If I run the notebook below, or use any other backtest ID in it for that matter, I get the error below. Only the cumulative returns graphs are displayed; all other graphs are empty.

"TypeError: cannot concatenate 'str' and 'zipline.assets._assets.Equity' objects"

# Get backtest object
bt = get_backtest('55db2def35e3b00d9986aa99')

# Create all tear sheets
bt.create_full_tear_sheet()
bt.create_bayesian_tear_sheet(live_start_date='2011-1-1')

Alpha, thanks for reporting. That's a bug alright. It's an easy fix and I think we can resolve this soon. I'll make a note here once we do.

Hey Thomas,

Same error here :) Empty Charts

TypeError: cannot concatenate 'str' and 'zipline.assets._assets.Equity' objects

Ah yeah I am getting that too :/

I think I have a fix, just have to get it merged and pushed to the site. Hopefully we can get there Monday.

Hi,
Can pyfolio be fed a simple list of transactions? I'm amazed by the amount of interesting information it produces and would like to use it to analyze my own portfolio from the past 3 years. I've been trading manually according to no fixed strategy, but I do have a list of all transactions.

Thanks,
Dave

I wonder whether recent changes in zipline may have altered things? In the research environment I tried to access my Q IDE-generated backtest in the approved manner:

# Get backtest object  
bt = get_backtest('xxxxxxxxxxxxxxxxx')

# Create all tear sheets  
bt.create_full_tear_sheet()  

and got the following error messages in my online research environment IPython Notebook:

100% Time: 0:00:44|###########################################################|  
Entire data start date: 2003-01-06  
Entire data end date: 2015-11-16


Backtest Months: 154  
                   Backtest  
annual_return          0.08  
annual_volatility      0.11  
sharpe_ratio           0.79  
calmar_ratio           0.39  
stability              0.94  
max_drawdown          -0.22  
omega_ratio            1.14  
sortino_ratio          1.10  
information_ratio       NaN  
skewness              -0.42  
kurtosis               2.04  
alpha                  0.05  
beta                   0.37  
---------------------------------------------------------------------------  
TypeError                                 Traceback (most recent call last)  
<ipython-input-5-5f6916bf2a08> in <module>()  
      3  
      4 # Create all tear sheets  
----> 5 bt.create_full_tear_sheet()

/home/qexec/src/qexec_repo/qexec/research/results.py in create_full_tear_sheet(self, benchmark_rets, live_start_date, bayesian, hide_positions, cone_std)
    867             bayesian=bayesian,  
    868             hide_positions=hide_positions,  
--> 869             cone_std=cone_std  
    870         )

/usr/local/lib/python2.7/dist-packages/pyfolio/tears.pyc in create_full_tear_sheet(returns, positions, transactions, benchmark_rets, gross_lev, slippage, live_start_date, bayesian, hide_positions, sector_mappings, cone_std, set_context)
    148         cone_std=cone_std,  
    149         benchmark_rets=benchmark_rets,  
--> 150         set_context=set_context)  
    151  
    152     create_interesting_times_tear_sheet(returns,

/usr/local/lib/python2.7/dist-packages/pyfolio/plotting.pyc in call_w_context(*args, **kwargs)
     42         if set_context:  
     43             with context():  
---> 44                 return func(*args, **kwargs)  
     45         else:  
     46             return func(*args, **kwargs)

/usr/local/lib/python2.7/dist-packages/pyfolio/tears.pyc in create_returns_tear_sheet(returns, live_start_date, cone_std, benchmark_rets, return_fig)
    273  
    274     plotting.plot_rolling_beta(  
--> 275         returns, benchmark_rets, ax=ax_rolling_beta)  
    276  
    277     plotting.plot_rolling_sharpe(

/usr/local/lib/python2.7/dist-packages/pyfolio/plotting.pyc in plot_rolling_beta(returns, factor_returns, legend_loc, ax, **kwargs)
    701     ax.yaxis.set_major_formatter(FuncFormatter(y_axis_formatter))  
    702  
--> 703     ax.set_title("Rolling Portfolio Beta to " + factor_returns.name)  
    704     ax.set_ylabel('Beta')  
    705     rb_1 = timeseries.rolling_beta(

TypeError: cannot concatenate 'str' and 'zipline.assets._assets.Equity' objects

Anthony, we're getting close to fixing this (there were a few issues to resolve). Sorry it's been broken for so long.

Wonderful, thanks Thomas

Anthony: Can you retry?

I still get many errors here.

/usr/local/lib/python2.7/dist-packages/pyfolio/utils.py:149: UserWarning: Could not update cache /usr/local/lib/python2.7/dist-packages/pyfolio/data/factors.csv.Exception: [Errno 13] Permission denied: '/usr/local/lib/python2.7/dist-packages/pyfolio/data/factors.csv'
  UserWarning)  
---------------------------------------------------------------------------  
KeyError                                  Traceback (most recent call last)  
<ipython-input-1-22f6a207e698> in <module>()  
      3  
      4 # Create all tear sheets  
----> 5 bt.create_full_tear_sheet()

/home/qexec/src/qexec_repo/qexec/research/results.py in create_full_tear_sheet(self, benchmark_rets, live_start_date, bayesian, round_trips, hide_positions, cone_std)
    869             round_trips=round_trips,  
    870             hide_positions=hide_positions,  
--> 871             cone_std=cone_std  
    872         )

/usr/local/lib/python2.7/dist-packages/pyfolio/tears.pyc in create_full_tear_sheet(returns, positions, transactions, benchmark_rets, gross_lev, slippage, live_start_date, sector_mappings, bayesian, round_trips, hide_positions, cone_std, set_context)
    158         cone_std=cone_std,  
    159         benchmark_rets=benchmark_rets,  
--> 160         set_context=set_context)  
    161  
    162     create_interesting_times_tear_sheet(returns,

/usr/local/lib/python2.7/dist-packages/pyfolio/plotting.pyc in call_w_context(*args, **kwargs)
     44         if set_context:  
     45             with context():  
---> 46                 return func(*args, **kwargs)  
     47         else:  
     48             return func(*args, **kwargs)

/usr/local/lib/python2.7/dist-packages/pyfolio/tears.pyc in create_returns_tear_sheet(returns, live_start_date, cone_std, benchmark_rets, return_fig)
    294  
    295     plotting.plot_rolling_fama_french(  
--> 296         returns, ax=ax_rolling_risk)  
    297  
    298     # Drawdowns

/usr/local/lib/python2.7/dist-packages/pyfolio/plotting.pyc in plot_rolling_fama_french(returns, factor_returns, rolling_window, legend_loc, ax, **kwargs)
    147         returns,  
    148         factor_returns=factor_returns,  
--> 149         rolling_window=rolling_window)  
    150  
    151     rolling_beta.plot(alpha=0.7, ax=ax, **kwargs)

/usr/local/lib/python2.7/dist-packages/pyfolio/timeseries.pyc in rolling_fama_french(returns, factor_returns, rolling_window)
    695  
    696     return rolling_beta(returns, factor_returns,  
--> 697                         rolling_window=rolling_window)  
    698  
    699 

/usr/local/lib/python2.7/dist-packages/pyfolio/timeseries.pyc in rolling_beta(returns, factor_returns, rolling_window)
    652         # Apply column-wise  
    653         return factor_returns.apply(partial(rolling_beta, returns),  
--> 654                                     rolling_window=rolling_window)  
    655     else:  
    656         out = pd.Series(index=returns.index)

/usr/local/lib/python2.7/dist-packages/pandas/core/frame.pyc in apply(self, func, axis, broadcast, raw, reduce, args, **kwds)
   3707                     if reduce is None:  
   3708                         reduce = True  
-> 3709                     return self._apply_standard(f, axis, reduce=reduce)  
   3710             else:  
   3711                 return self._apply_broadcast(f, axis)

/usr/local/lib/python2.7/dist-packages/pandas/core/frame.pyc in _apply_standard(self, func, axis, ignore_failures, reduce)
   3797             try:  
   3798                 for i, v in enumerate(series_gen):  
-> 3799                     results[i] = func(v)  
   3800                     keys.append(v.name)  
   3801             except Exception as e:

/usr/local/lib/python2.7/dist-packages/pandas/core/frame.pyc in <lambda>(x)
   3685         axis = self._get_axis_number(axis)  
   3686         if kwds or args and not isinstance(func, np.ufunc):  
-> 3687             f = lambda x: func(x, *args, **kwds)  
   3688         else:  
   3689             f = func

/usr/local/lib/python2.7/dist-packages/pyfolio/timeseries.pyc in rolling_beta(returns, factor_returns, rolling_window)
    659             out.loc[end] = calc_alpha_beta(  
    660                 returns.loc[beg:end],  
--> 661                 factor_returns.loc[beg:end])[1]  
    662  
    663         return out

/usr/local/lib/python2.7/dist-packages/pyfolio/timeseries.pyc in calc_alpha_beta(returns, factor_returns)
    721  
    722     ret_index = returns.index  
--> 723     beta, alpha = sp.stats.linregress(factor_returns.loc[ret_index].values,  
    724                                       returns.values)[:2]  
    725 

/usr/local/lib/python2.7/dist-packages/pandas/core/indexing.pyc in __getitem__(self, key)
   1178             return self._getitem_tuple(key)  
   1179         else:  
-> 1180             return self._getitem_axis(key, axis=0)  
   1181  
   1182     def _getitem_axis(self, key, axis=0):

/usr/local/lib/python2.7/dist-packages/pandas/core/indexing.pyc in _getitem_axis(self, key, axis)
   1312                     raise ValueError('Cannot index with multidimensional key')  
   1313  
-> 1314                 return self._getitem_iterable(key, axis=axis)  
   1315  
   1316             # nested tuple slicing

/usr/local/lib/python2.7/dist-packages/pandas/core/indexing.pyc in _getitem_iterable(self, key, axis)
    922     def _getitem_iterable(self, key, axis=0):  
    923         if self._should_validate_iterable(axis):  
--> 924             self._has_valid_type(key, axis)  
    925  
    926         labels = self.obj._get_axis(axis)

/usr/local/lib/python2.7/dist-packages/pandas/core/indexing.pyc in _has_valid_type(self, key, axis)
   1260  
   1261                 raise KeyError("None of [%s] are in the [%s]" %  
-> 1262                                (key, self.obj._get_axis_name(axis)))  
   1263  
   1264             return True

KeyError: ("None of [DatetimeIndex(['2002-04-04', '2002-04-05', '2002-04-08', '2002-04-09',\n               '2002-04-10', '2002-04-11', '2002-04-12', '2002-04-15',\n               '2002-04-16', '2002-04-17', \n               ...\n               '2002-09-19', '2002-09-20', '2002-09-23', '2002-09-24',\n               '2002-09-25', '2002-09-26', '2002-09-27', '2002-09-30',\n               '2002-10-01', '2002-10-02'],\n              dtype='datetime64[ns]', length=127, freq=None, tz='UTC')] are in the [index]", u'occurred at index SMB')  

OK, I've been seeing that one too. We're investigating.

Thomas
I'm afraid not......


100% Time: 0:00:46|###########################################################|  
Entire data start date: 2003-01-06  
Entire data end date: 2015-11-16


Backtest Months: 154  
                   Backtest  
annual_return          0.08  
annual_volatility      0.11  
sharpe_ratio           0.79  
calmar_ratio           0.39  
stability              0.94  
max_drawdown          -0.22  
omega_ratio            1.14  
sortino_ratio          1.10  
skewness              -0.42  
kurtosis               2.04  
information_ratio      0.00  
alpha                  0.05  
beta                   0.37  
/usr/local/lib/python2.7/dist-packages/pyfolio/utils.py:149: UserWarning: Could not update cache /usr/local/lib/python2.7/dist-packages/pyfolio/data/factors.csv.Exception: [Errno 13] Permission denied: '/usr/local/lib/python2.7/dist-packages/pyfolio/data/factors.csv'
  UserWarning)  
[The traceback that follows is identical to the one in my previous post, ending in:]

KeyError: ("None of [DatetimeIndex(['2003-01-06', '2003-01-07', '2003-01-08', '2003-01-09',\n               '2003-01-10', '2003-01-13', '2003-01-14', '2003-01-15',\n               '2003-01-16', '2003-01-17', \n               ...\n               '2003-06-24', '2003-06-25', '2003-06-26', '2003-06-27',\n               '2003-06-30', '2003-07-01', '2003-07-02', '2003-07-03',\n               '2003-07-07', '2003-07-08'],\n              dtype='datetime64[ns]', length=127, freq=None, tz='UTC')] are in the [index]", u'occurred at index SMB')

If it helps (probably not!) I ran it over exactly the same back test.

I'd like an example of running just one part, such as the underwater plot, for a focused, quick comparison of several algos.

garyha, see the attached notebook. It doesn't include the comparison yet which is something we still have to work out.

Perfect, appreciated.

To summarize the commands in case it might help someone in a search, example for running individual plotting elements of a tearsheet:

import pyfolio as pf

backtest = get_backtest('5627b09e3dfd591120ebdc71')
pf.plotting.plot_drawdown_periods(backtest.daily_performance.returns)
pf.plotting.plot_drawdown_underwater(backtest.daily_performance.returns)

I think it means the pattern might work for all of these:
plot_annual_returns()
plot_drawdown_periods()
plot_drawdown_underwater()
plot_holdings()
plot_monthly_returns_dist()
plot_monthly_returns_heatmap()
plot_monthly_returns_timeseries()
plot_rolling_sharpe()
show_worst_drawdown_periods()

(Other functions take additional required arguments, which could surely also be accessed.)

Yes, that's exactly right. Some analyses require positions and/or transactions which can be found in backtest.positions and backtest.transactions.
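As a sketch of the side-by-side comparison idea raised above: pyfolio's plot functions accept an `ax` keyword, so you can place each plot on its own axes. This example uses plain matplotlib with synthetic return series (no pyfolio; all data and names are illustrative):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, so no display is required
import matplotlib.pyplot as plt
import numpy as np

# Synthetic cumulative return series for two hypothetical algos.
rng = np.random.default_rng(0)
algo_a = np.cumprod(1 + rng.normal(0.0005, 0.01, 252)) - 1
algo_b = np.cumprod(1 + rng.normal(0.0003, 0.015, 252)) - 1

# One row, two columns: each algo gets its own axes, sharing the y scale.
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4), sharey=True)
ax1.plot(algo_a)
ax1.set_title("Algo A")
ax2.plot(algo_b)
ax2.set_title("Algo B")
```

With pyfolio, the same layout would presumably work by passing `ax=ax1` / `ax=ax2` to two calls of the same `pf.plotting.plot_*` function.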

Amazing!

Hi, this is great work!
Is it possible to limit the period of data to, say, 12 months? E.g., when you want to look at a mutual fund's performance in a local environment using pyfolio?
I am curious whether, instead of looking at multiple years of performance, you can just look at a one-year period.
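One way to do this outside of pyfolio is to slice the returns series before passing it in; a sketch assuming a pandas Series with a DatetimeIndex (the data here is synthetic):

```python
import numpy as np
import pandas as pd

# Synthetic daily returns over three years of business days.
idx = pd.date_range('2013-01-01', '2015-12-31', freq='B')
returns = pd.Series(
    np.random.default_rng(1).normal(0.0005, 0.01, len(idx)), index=idx
)

# Keep only the trailing 12 months before running any analysis.
cutoff = returns.index[-1] - pd.DateOffset(months=12)
last_year = returns.loc[returns.index >= cutoff]
print(last_year.index[0], last_year.index[-1])
```

The truncated series can then be handed to whatever analysis you run, so only the one-year window is reflected in the results.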

It still doesn't work for long-term backtest analysis.

Any update on this? The full analysis still generates errors. I miss this tool!

OK, pyfolio should be fixed and updated in research. Sorry this took a while! Can someone try and report back if it's working or not?

It's working! Thank you very much.

The Bayesian analysis tearsheet doesn't seem to update however.

EDIT: The Bayesian analysis is working now. Cool!

Thanks for reporting back, Charles. Glad it's working.

This is now release 0.4. You can see here what new features we added to pyfolio: http://quantopian.github.io/pyfolio/whatsnew/#v040-dec-10-2015 Unfortunately, there's still a problem with the round-trip analysis on research. I have a fix but it could still be a little while until it makes it to production.

Hi, I tried to import .csv returns data but got the following error: Cannot compare tz-naive and tz-aware timestamps
my data index is:
DatetimeIndex(['2013-06-18', '2013-06-19', '2013-06-20', '2013-06-21',
'2013-06-24', '2013-06-25', '2013-06-26', '2013-06-27',
'2013-06-28', '2013-07-01',
...
'2015-12-02', '2015-12-03', '2015-12-04', '2015-12-07',
'2015-12-08', '2015-12-09', '2015-12-10', '2015-12-11',
'2015-12-14', '2015-12-15'],
dtype='datetime64[ns]', name=u'DATE', length=651, freq=None)

Hi Esteban,
This snippet of code has helped get my imported returns series working well with pyfolio

# Import my returns, parsing the first column as a DatetimeIndex
import pandas as pd
returns = pd.read_csv('myfile.csv', index_col=0, parse_dates=True)

# Normalize timestamps to midnight and make the index tz-aware (UTC)
returns.index = returns.index.normalize()
if returns.index.tzinfo is None:
    returns.index = returns.index.tz_localize('UTC')

Great, thanks Justin. I had just found out the same!

Is there a link where I might find the documentation?

I'm trying to run the Rolling Fama-French analysis on a .csv returns file, but all the charts come back empty. Any idea why this might happen?

Esteban: Can you open an issue on our tracker with some more information: https://github.com/quantopian/pyfolio/issues

EZ: It can take a while. The indication that it's busy working is the asterisk next to the cell (In [*]:), which turns back to a number when done.

Thomas, when do you think the round-trip fix will be available in research?

Michael, it's hard to say, unfortunately, but I'll see if there's a way to speed up the process.

Michael, until we fix it, I've created a notebook that does the next generation of this analysis stand-alone: https://www.quantopian.com/posts/round-trip-trade-analysis Hope it's useful!

I am hoping to see Profit by Symbol Traded and Days Held by Symbol. When do you think these might be included as runs under the Top Securities Held section? I understand this request is in the queue. Thank you!

John, I responded over in https://www.quantopian.com/posts/round-trip-trade-analysis.

The new round-trip notebook looks great, Thomas -- thank you!

I am suddenly getting error messages, which follow, in the Bayesian section of the tear sheet output. If you could help me debug this I would appreciate it very much. The sheet produces reports up to the Bayesian graphs, so I know the ID is working.

Thank you!

I use the following cell code:

bt = get_backtest('...my backtest ID')
bt.create_full_tear_sheet(bayesian=True, live_start_date='2015-1-1')

John Jay

Running T model

collect2: fatal error: cannot find 'ld'
compilation terminated.

[~580 lines of generated Theano C module source omitted]
Problem occurred during compilation with the command line below:

/usr/bin/g++ -shared -g -O3 -fno-math-errno -Wno-unused-label -Wno-unused-variable -Wno-write-strings -march=core-avx-i -mcx16 -msahf -mno-movbe -maes -mpclmul -mpopcnt -mno-abm -mno-lwp -mno-fma -mno-fma4 -mno-xop -mno-bmi -mno-bmi2 -mno-tbm -mavx -mno-avx2 -msse4.2 -msse4.1 -mno-lzcnt -mno-rtm -mno-hle -mrdrnd -mf16c -mfsgsbase -mno-rdseed -mno-prfchw -mno-adx -mfxsr -mxsave -mxsaveopt --param l1-cache-size=32 --param l1-cache-line-size=64 --param l2-cache-size=25600 -mtune=core-avx-i -D NPY_NO_DEPRECATED_API=NPY_1_7_API_VERSION -m64 -fPIC -I/usr/local/lib/python2.7/dist-packages/numpy/core/include -I/usr/include/python2.7 -I/usr/local/lib/python2.7/dist-packages/theano/gof -fvisibility=hidden -o /home/qexec-restricted/.theano/compiledir_Linux-4.2--generic-x86_64-with-Ubuntu-14.04-trusty-x86_64-2.7.11-64/tmpZbZ4V0/8251ea1b4836b28fc8365dcfb7aefbef.so /home/qexec-restricted/.theano/compiledir_Linux-4.2--generic-x86_64-with-Ubuntu-14.04-trusty-x86_64-2.7.11-64/tmpZbZ4V0/mod.cpp -L/usr/lib -lpython2.7

Exception Traceback (most recent call last)
in ()
      1 bt = get_backtest('56c802ee21043c0df860d76a')
----> 2 bt.create_full_tear_sheet(bayesian=True, live_start_date='2015-1-1')

[intermediate frames omitted: qexec.research.results.create_full_tear_sheet -> pyfolio.tears.create_full_tear_sheet -> pyfolio.tears.create_bayesian_tear_sheet -> pyfolio.bayesian.run_model -> pyfolio.bayesian.model_returns_t -> pymc3 model setup -> theano C compilation]

Exception: ("Compilation failed (return status=1): collect2: fatal error: cannot find 'ld'. compilation terminated.. ", '[Elemwise{sub,no_inplace}(mean returns, TensorConstant{0})]')

Hi John, we'll take a look at the compilation error and follow up here when it's fixed.


In the meantime, if you set bayesian=False, the rest should work fine.

Hi Thomas, I've got the same exception ("raise KeyError...") that Anthony described above. Have you fixed that bug?

I'm new to this. I use Quantopian's backtesting interface and I'm trying to add this reporting feature to my algorithm. Do I add "pip install pyfolio" to the top of my algorithm page?

I tried this but I get an error message. Any suggestions on what I should do?

Hi Adam,

You can run a pyfolio tearsheet from the research environment (click the research tab at the top of your screen). Copy the backtest ID you want to analyze from the url you see when on the full backtest screen (the last string of letters and numbers in something like https://www.quantopian.com/algorithms/568ac679adf9dd2d93000035/56ab9b0c02e9c012972af9c6) and paste it into the following command in a research notebook:

bt = get_backtest('your backtest id here')

bt.create_full_tear_sheet()  
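Pulling the ID out of the URL is just taking the last path segment; a quick sketch using the example URL above:

```python
# Example full-backtest URL from above; the backtest ID is the final segment.
url = "https://www.quantopian.com/algorithms/568ac679adf9dd2d93000035/56ab9b0c02e9c012972af9c6"
backtest_id = url.rstrip('/').split('/')[-1]
print(backtest_id)  # -> 56ab9b0c02e9c012972af9c6
```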

Check out the Analyzing Backtest Results tutorial in your research environment tutorials folder for more details on pyfolio.

Andrew


I keep getting this error:

ChunkedEncodingError: ('Connection broken: IncompleteRead(419 bytes read, 93 more expected)', IncompleteRead(419 bytes read, 93 more expected))

Hi Thomas,

I am one step before newbie with Pyfolio.

Could you please help with the following error? I'm not able to generate the tear sheet; the last few plots are missing.

TypeError Traceback (most recent call last)
in ()
----> 1 pf.create_returns_tear_sheet(stock_rets)

/usr/local/lib/python3.4/dist-packages/pyfolio/plotting.py in call_w_context(*args, **kwargs) 44 if set_context:
45 with context():
---> 46 return func(*args, **kwargs)
47 else:
48 return func(*args, **kwargs)

/usr/local/lib/python3.4/dist-packages/pyfolio/tears.py in create_returns_tear_sheet(returns, live_start_date, cone_std, benchmark_rets, return_fig) 311
312 print('\n')
--> 313 plotting.show_return_range(returns, df_weekly)
314
315 plotting.plot_monthly_returns_heatmap(returns, ax=ax_monthly_heatmap)

/usr/local/lib/python3.4/dist-packages/pyfolio/plotting.py in show_return_range(returns, df_weekly) 1050 '2-sigma returns weekly'])
1051
-> 1052 print(np.round(var_sigma, 3))
1053
1054

/usr/local/lib/python3.4/dist-packages/numpy/core/fromnumeric.py in round_(a, decimals, out) 2784 except AttributeError:
2785 return _wrapit(a, 'round', decimals, out)
-> 2786 return round(decimals, out)
2787
2788

TypeError: round() takes from 1 to 2 positional arguments but 3 were given

Thanks

Resolved! I had to make changes to numpy/core/fromnumeric.py and pyfolio/plotting.py.

I checked the pyfolio code on GitHub; the backtest result object has a create_round_trip_tear_sheet method and a benchmark_rets object, but they don't seem to work on the Q Research platform?

I cannot find the benchmark returns on the backtest object, so I tried to create them with the following code, but still cannot succeed. Please give some help.

In [81]:

import pyfolio as pf
benchmark_rets = pf.utils.get_symbol_rets('SPY')
pf.aggregate_returns(benchmark_rets, yearly)

SecurityViolation                         Traceback (most recent call last)
<ipython-input> in <module>()
      3 benchmark_rets = pf.utils.get_symbol_rets('SPY')
      4 
----> 5 pf.aggregate_returns(benchmark_rets, yearly)

/build/src/qexec_repo/qexec/algo/safety.py in __getattribute__(self, attr)
    137                 mute_greylist=mute_greylist,
    138                 module=module,
--> 139                 module_wrapper=self,
    140             )
    141         finally:

/build/src/qexec_repo/qexec/algo/safety.py in check_attribute_access(full_path, whitelist, name_blacklist, caller_fr, module, module_wrapper, msg, mute_greylist)
     87     violations=[{'message': strip_qexec_module_path(error_message),
     88                  'lineno': caller_fr.f_lineno,
---> 89                  'extra': extra}])
     90 
     91 

SecurityViolation: 0002 Security Violations(s): Accessing pf.aggregate_returns raised an AttributeError. No attributes with a similar name were found.

Sorry, I got through it with

```
import pyfolio as pf
benchmark_rets = pf.utils.get_symbol_rets('SPY')
pf.timeseries.cum_returns(benchmark_rets)
```

to solve my issue.
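For anyone hitting the same SecurityViolation on pf.aggregate_returns: the yearly aggregation can also be done with plain pandas, avoiding the restricted attribute entirely. A minimal sketch, using synthetic data in place of the get_symbol_rets('SPY') output:

```python
import numpy as np
import pandas as pd

# Synthetic daily returns standing in for a real benchmark series
idx = pd.date_range('2014-01-01', '2015-12-31', freq='B')
rets = pd.Series(np.random.RandomState(0).normal(0.0005, 0.01, len(idx)),
                 index=idx)

# Compound the daily returns into one return per calendar year
yearly = (1 + rets).groupby(rets.index.year).prod() - 1
print(yearly)
```

This mirrors what pyfolio's aggregate_returns does for the 'yearly' period: compound (1 + r) within each year, then subtract 1.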

Thanks for this fantastic feature, it looks awesome. As a newbie to pyfolio, I'm a little lost when facing so many charts.
So would it be possible for pyfolio to generate some smart suggestions? For example:
1. the max drawdown is too high (50%) from 2008-01-01 to 2008-06-01
2. the algorithm does not seem to be market neutral
3. the algorithm failed to beat the benchmark during 2015
...
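A check like suggestion 1 is easy to script yourself on top of the returns series pyfolio works with. A sketch with toy numbers (not from any real backtest), using an assumed 10% warning threshold:

```python
import pandas as pd

# Toy daily returns; in practice use the returns series from your backtest
rets = pd.Series([0.02, -0.05, 0.01, -0.10, 0.03])

# Drawdown: cumulative growth curve relative to its running peak
cum = (1 + rets).cumprod()
drawdown = cum / cum.cummax() - 1
max_dd = drawdown.min()

# A toy "smart suggestion" rule
if max_dd < -0.10:
    print('warning: max drawdown {:.1%} exceeds 10%'.format(-max_dd))
```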

In my case, the pyfolio notebook, cloned from above, does not generate any charts.

I am getting the below error on running the backtest with just SPY:


NoSuchSymbols                             Traceback (most recent call last)
<ipython-input> in <module>()
----> 1 print bt.create_full_tear_sheet()

/build/src/qexec_repo/qexec/research/results.py in create_full_tear_sheet(self, benchmark_rets, live_start_date, bayesian, round_trips, hide_positions, cone_std)
   1075         return pf.create_full_tear_sheet(
   1076             self._tearsheet_params.returns,
-> 1077             positions=self._tearsheet_params.positions_val,
   1078             transactions=self._tearsheet_params.transactions,
   1079             benchmark_rets=benchmark_rets,

/build/src/qexec_repo/qexec/research/pyfolio_compat.py in positions_val(self)
    155         if self._positions_val is None:
    156             self._positions_val = extract_pos(self.positions,
--> 157                                               self._ending_cash)
    158         return self._positions_val
    159 

/build/src/qexec_repo/qexec/research/pyfolio_compat.py in extract_pos(positions, cash)
     47 
     48     # Convert sids to symbols because that's what PyFolio expects.
---> 49     values.columns = make_unique_symbols(values.columns)
     50 
     51     return values.join(cash, how='outer')

/build/src/qexec_repo/qexec/research/pyfolio_compat.py in make_unique_symbols(sids)
     84 
     85     # Append each asset's sid to its symbol.
---> 86     for asset in symbols(unique_sids):
     87         symbol_list.append('-'.join([asset.symbol, str(asset.sid)]))
     88 

/build/src/qexec_repo/qexec/research/api.pyc in symbols(symbols, symbol_reference_date, handle_missing)
    598         )
    599         if len(resolved_securities) == 0:
--> 600             raise NoSuchSymbols(symbols)
    601 
    602         if symbols_is_scalar:

NoSuchSymbols: Failed to find securities matching []

Hi guys,
I'm pretty new to all of this. I just looked into analysis backtests which I think is a good thing. However, I encountered a problem: where can I get the backtest ID from? Thanks for any help in advance!

Hi Jannik, welcome! You can find the ID of a backtest in the URL of the backtest results page. The URL will be in the form:

https://www.quantopian.com/algorithms/<algorithm_id>/<backtest_id>  
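For reference, the backtest ID is simply the last path segment of that URL, so you can also extract it programmatically from a copied link (the URL below uses the placeholder pattern above, not a real backtest):

```python
# Hypothetical results-page URL; the backtest id is the final path segment
url = 'https://www.quantopian.com/algorithms/<algorithm_id>/<backtest_id>'
backtest_id = url.rstrip('/').split('/')[-1]
print(backtest_id)  # <backtest_id>
```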

Thanks Gus, I really appreciate your help! I tried it and sometimes I get the following message 'Failed to find securities matching []' (I think it's the same problem as Sameer pointed out before). For the backtest I'm using pipeline and set Q1500US() to be my universe; then I filtered using factors. Any ideas on how to solve this?

@Ryan, just a hunch, but does your algo hold any positions overnight? If not, maybe that is causing the error.

@Ryan, no worries...

Hi guys, sorry for the confusion. The version of pyfolio we have on Research right now doesn't deal with intraday strategies well, but the latest version (0.7) we have released on GitHub does, see here. We'll try to get that version up on Research as soon as possible, I'll post here when it's ready.

The latest pyfolio is on Research now, and it will attempt to plot intraday positions. For an explanation of what it's doing, see the link in my last post. Let me know how it works. It will only affect strategies that primarily hold positions intraday rather than end-of-day; I would still call the intraday logic a feature in beta. You can always return to the previous behavior by passing estimate_intraday=False.

Great job on pyfolio Q team, most of my strategies are intraday so having pyfolio working properly on these strategies is awesome. One small issue I noticed is in /pyfolio/plotting.py on line 1777

fmt='{0:.2f}%')  

This produces output like this:

Profitability (PnL / PnL total) per name pnl
JDST-45571 0.65%
JNUG-45570 0.35%

I think the line should be edited to read:
fmt='({0:.2f}*100)%')
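For what it's worth, that replacement string would print the "*100" literally, since everything outside the {} braces in a format string is literal text. The usual fixes are to scale the fraction by 100 yourself, or to use Python's % format type, which scales automatically (pnl_frac below is a made-up stand-in for a per-name PnL share):

```python
pnl_frac = 0.65  # hypothetical per-name share of total PnL, i.e. 65%

# Current pattern: formats the raw fraction and appends a literal '%'
print('{0:.2f}%'.format(pnl_frac))        # 0.65%

# Two ways to get the intended percentage
print('{0:.2f}%'.format(pnl_frac * 100))  # 65.00%
print('{0:.2%}'.format(pnl_frac))         # 65.00%
```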

Great, Dustin, glad it's working well for you. Thanks for finding that bug — I fixed it and it'll be in the next release.

Gus

I guess I'm doing something wrong. I'm still getting the "Failed to find securities matching []" error. How do I make sure the notebook I have is using the latest version of Pyfolio? And where do I get the latest version of the notebook in Research?

Thanks.

Hi Sari, in Quantopian Research, you will automatically be using the latest version of pyfolio. Can you try running bt.create_full_tear_sheet(estimate_intraday=True)? Let me know if that works.

Oh, my apologies, Gus. I thought by default estimate_intraday is set to True and that we'd need to set it to False to go back to the previous functionality per your post above. Don't know why I didn't think of explicitly trying to set it to True :) In all cases, yep, it works. Now time to dig into the results :)

Thanks a million.

No problem. By default you shouldn't have to specify that argument — pyfolio will detect if it's needed. So instead, you can probably simply do bt.create_full_tear_sheet().

I know that hadn't worked, Gus. Unfortunately I don't remember which backtest I ran it against because I've tweaked the algorithm since. Anyway, if I run into problems again I'll be sure to let you know. Thanks again.

Hi,

I am buying about 137 shortlisted stocks on the 1st day of the year and selling them on the last day of the year.
results.orders has all the orders that I want to execute over the period of time.
The data frame that I am using to give as input to context.df looks like the below

Could you please advise why I get the "too many values to unpack" error when I call pf.utils.extract_rets_pos_txn_from_zipline(results)? It works fine when I am using a smaller data set.

 quantity stock_date stock_symbol  

0 10 2008-01-02 MMM
1 19 2008-01-02 NBL
2 51 2008-01-02 NKE
3 30 2008-01-02 NTAP
4 11 2008-01-02 NOV
5 16 2008-01-02 NDAQ
6 99 2008-01-02 MNST
7 6 2008-01-02 MON
8 25 2008-01-02 MSFT
9 33 2008-01-02 MCHP
10 6 2008-01-02 MTD
11 17 2008-01-02 MDT
12 16 2008-01-02 MCD
13 23 2008-01-02 MKC
14 58 2008-01-02 MAT
15 34 2008-01-02 MA
. . .
138 -58 2008-12-31 MAT
139 -23 2008-12-31 MKC
140 -16 2008-12-31 MCD
141 -17 2008-12-31 MDT
142 -6 2008-12-31 MTD
143 -33 2008-12-31 MCHP
144 -16 2008-12-31 NDAQ
145 -6 2008-12-31 MON
146 -99 2008-12-31 MNST
147 -11 2008-12-31 NOV
148 -30 2008-12-31 NTAP

returns, positions, transactions = pf.utils.extract_rets_pos_txn_from_zipline(results)

ValueError Traceback (most recent call last)
in ()
----> 1 returns, positions, transactions = pf.utils.extract_rets_pos_txn_from_zipline(results)

ValueError: too many values to unpack

Jacob
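A likely cause of the "too many values to unpack" error above: in some pyfolio releases, extract_rets_pos_txn_from_zipline returns four values (a gross leverage series in addition to returns, positions, and transactions). One version-tolerant pattern is to capture the whole tuple first and slice it; sketched here with a stand-in tuple rather than a real zipline result:

```python
# Stand-in for the function's return value in a four-value release
result = ('returns', 'positions', 'transactions', 'gross_lev')

# Unpacking three names from four values raises ValueError
try:
    returns, positions, transactions = result
except ValueError as e:
    print(e)  # too many values to unpack ...

# Capture everything, then take the first three regardless of arity
returns, positions, transactions = result[:3]
```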

Guys, at the risk of asking a question that's been answered many times, must one share an algo with the community in order to run a backtest on it in the pyfolio research environment?

Thanks,

You don't have to — you can just run a full backtest, then copy the URL from the backtest page like the following: https://www.quantopian.com/algorithms/<algorithm_id>/<backtest_id>. It doesn't need to be shared with the community.

Hi Gus,

Then it must be something else. I have run a full backtest on this algo; in fact, I have it running live. Yet I get the following error when I run it in Research:

  0% ETA:  --:--:--|                                                          |

NoSuchAlgorithm                           Traceback (most recent call last)
<ipython-input-1-e35c0508d6be> in <module>()  
----> 1 bt = get_backtest('58da8a6fcc7dd11ae7335af3')  
      2 bt.create_full_tearsheet()

/build/src/qexec_repo/qexec/research/api.py in get_backtest(backtest_id)
    158         client.get_sqlbacktest(backtest_id),  
    159         progress_bar,  
--> 160         backtest_id,  
    161     )  
    162 

/build/src/qexec_repo/qexec/research/results.py in from_stream(cls, result_iterator, progress_bar, algo_id)
    538         progress_bar.start()  
    539  
--> 540         for msg in result_iterator:  
    541             prefix, payload = msg['prefix'], msg['payload']  
    542 

/build/src/qexec_repo/qexec/research/web/client.pyc in get_sqlbacktest(self, backtest_id)
    216         Returns a generator of perf messages for the backtest.  
    217         """  
--> 218         resp = self._make_get_sqlbacktest_request(backtest_id)  
    219         with closing(resp):  
    220             for msg in resp.iter_lines():

/build/src/qexec_repo/qexec/research/web/client.pyc in _make_get_sqlbacktest_request(self, backtest_id)
    206         # TODO: Should this just plug into the handle_json_errors machinery?  
    207         if resp.status_code == 403:  
--> 208             raise NoSuchAlgorithm(backtest_id)  
    209  
    210         return resp

NoSuchAlgorithm: Algorithm with id '58da8a6fcc7dd11ae7335af3' was not found.  

Any inputs would be appreciated.

Serge, it appears that you passed in the algorithm_id mentioned in Gus' example. You will want the next ID, which is the backtest ID.

For example:

https://www.quantopian.com/algorithms/58e2b923225d32502a3ed28d/**58fe4f6466328e618988d287**

You'll pass 58fe4f6466328e618988d287 into the get_backtest function.

Is it possible to see the PnL Attribution in a Pyfolio sheet, i.e., how much return is due to the Q1500US stocks?

Hi Tim, unfortunately not currently! Although I do agree that PnL attribution is a very valuable analysis to have when looking at an algo's performance. Here's to hoping that pyfolio supports PnL attribution soon.

Is there a way to speed up the Bayesian functions (stemming from bayesian.run_model) when you run them on returns streams outside of Pyfolio? I get the following Warning:

WARNING (theano.configdefaults): g++ not available, if using conda: conda install m2w64-toolchain
WARNING (theano.configdefaults): g++ not detected ! Theano will be unable to execute optimized C-implementations (for both CPU and GPU) and will default to Python implementations. Performance will be severely degraded. To remove this warning, set Theano flags cxx to an empty string.

I performed the suggested conda install m2w64-toolchain; however, I still get the warning, and performance is still painfully slow. It would take on the order of several weeks to run. Specifically, the BEST model seems to be the slowest. Is there a way to speed it up? Perhaps by using Python multiprocessing?

Many thanks!

This is something that I really need. Thanks for the share I will check it out today.

I've been looking for documentation on how daily value at risk is calculated - any tips? Cheers & thanks for this! :)


update! I think I found the best answer here: https://github.com/quantopian/pyfolio/blob/master/pyfolio/timeseries.py#L607

def value_at_risk(returns, period=None, sigma=2.0):
    """
    Get value at risk (VaR).

    Parameters
    ----------
    returns : pd.Series
        Daily returns of the strategy, noncumulative.
        - See full explanation in tears.create_full_tear_sheet.
    period : str, optional
        Period over which to calculate VaR. Set to 'weekly',
        'monthly', or 'yearly', otherwise defaults to period of
        returns (typically daily).
    sigma : float, optional
        Standard deviations of VaR, default 2.
    """
    if period is not None:
        returns_agg = ep.aggregate_returns(returns, period)
    else:
        returns_agg = returns.copy()

    value_at_risk = returns_agg.mean() - sigma * returns_agg.std()
    return value_at_risk
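So daily VaR here is just the mean return minus sigma standard deviations. Reproducing that on toy numbers to make the definition concrete (the returns below are made up):

```python
import pandas as pd

# Toy daily returns, noncumulative
returns = pd.Series([0.01, -0.02, 0.015, -0.005, 0.002])

# 2-sigma VaR per the function above: mean minus two standard deviations
sigma = 2.0
var = returns.mean() - sigma * returns.std()
print(round(var, 4))  # a small negative number, roughly -0.027 here
```

Note that pandas' .std() uses the sample standard deviation (ddof=1), so a hand calculation with population variance will differ slightly.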