Pipeline appears to freeze under heavy load calculating historical rate of returns

I am trying to calculate the historical, or ex post, daily rate of return for each stock in the US Equities data set using the following:

from numpy import nanmean, isnan, nan, nanstd, rate, zeros  
import scipy.optimize  
from datetime import date, timedelta, datetime  
from scipy.stats.mstats import zscore  
from quantopian.pipeline import Pipeline  
from quantopian.algorithm import attach_pipeline, pipeline_output  
from quantopian.pipeline import CustomFactor  
from quantopian.pipeline.data.builtin import USEquityPricing

class AverageRates(CustomFactor):  
    inputs = [USEquityPricing.close]  
    window_length = 100  
    def compute(self, today, assets, out, close):  
        print close.shape  
        rows = close.shape[1]  
        rates = zeros(shape=(rows,))  
        for row in range(rows):  
            rates[row] = nanmean([(close[col+1][row]/close[col][row])-1 for col in range(100-1)])  
            print rates[row]  
        out[:] = rates  

def initialize(context):  
    pipe = Pipeline()  
    pipe = attach_pipeline(pipe, name='my_pipeline')  
    rf = 0.03/252  
    ind_avg = AverageRates(inputs=[USEquityPricing.close],window_length=100) - rf  
    pipe.add(ind_avg, 'ind_avg')

def before_trading_start(context,data):  
    results = pipeline_output('my_pipeline')  
    print results.head(5)

I'm not sure whether that's the fastest way to calculate the daily growth rates of stocks in Quantopian, but it appeared to work at first, printing the expected output (from the logs):

1970-01-01initialize:56INFO<class 'zipline.pipeline.data.dataset.DataSetMeta'>  
1970-01-01initialize:57INFO<DataSet: 'USEquityPricing'>  
2011-01-04PRINT(100, 7811)  
2011-01-04PRINT0.00409921467387  
2011-01-04PRINT0.00492940692112  
2011-01-04PRINT0.0028075566189  
2011-01-04PRINT-0.000496942946778  
2011-01-04PRINT0.00344087158198  
2011-01-04PRINT0.00867885646423  
2011-01-04PRINT0.000847969853125  
2011-01-04PRINT0.00343880618348  
2011-01-04PRINT0.0162977327173  
2011-01-04PRINT0.0030401659518  
2011-01-04PRINT0.000382684323609  
2011-01-04PRINT-0.00412265497972  
2011-01-04PRINT-0.000416509413396  
2011-01-04PRINT0.00214895167804  
2011-01-04PRINT-0.000693733932113  
2011-01-04PRINT0.00381662803722  
2011-01-04PRINT0.00569411326117  
2011-01-04PRINT-0.000834365428449  
2011-01-04PRINT0.00105938366537  
2011-01-04PRINT0.00480923930981  
2011-01-04PRINT-0.00040027602352  
2011-01-04null:nullWARNnumpy/lib/nanfunctions.py:598: RuntimeWarning: Mean of empty slice  

Then everything stopped there for 45 minutes, until I gave up and began writing this post. The backtesting progress is stuck at 0.5%. I'm not sure whether it has anything to do with that last log line, the RuntimeWarning, but I highly doubt it.

3 responses

Hi Alexander,

In general, using for loops in Pipeline is discouraged, as they are much slower than vectorized numpy functions. Check out Scott Sanderson's example of using nanmean in a Pipeline custom factor here. My suspicion is that the problem you are encountering can be avoided by replacing the nested for loop with numpy functions!
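I don't have the notebook handy to link again here, but the general pattern is to let a single numpy call operate on the whole (window_length, num_assets) array at once instead of looping per asset. A minimal sketch (the factor name and the mean-of-closes calculation are purely illustrative):

from numpy import nanmean
from quantopian.pipeline import CustomFactor
from quantopian.pipeline.data.builtin import USEquityPricing

class MeanClose(CustomFactor):
    # Illustrative only: average close over the lookback window, no Python loops.
    inputs = [USEquityPricing.close]
    window_length = 100

    def compute(self, today, assets, out, close):
        # close arrives with shape (window_length, num_assets); nanmean with
        # axis=0 collapses the time axis into one value per asset.
        out[:] = nanmean(close, axis=0)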


Hi Jamie,

I understand that using for loops is going to be significantly slower than using numpy's functions; however, it certainly shouldn't take 45 minutes to compute growth rates on a 100x1788 numpy array if nothing else is causing the issue.

Here's a little test I created to simulate the calculation outside of Quantopian (just in a plain terminal):

*Note: I know that time.time() isn't the greatest in terms of accuracy, but it's definitely accurate enough for this case, and I've never been able to get the timeit module to work with numpy (more on that below the results).*

test.py

import numpy  
import time

start = time.time()

all_loops = []  
all_numpy = []  
close = numpy.random.rand(100,1788)  
rows = close.shape[1]  
# simulates the actual close we would normally receive using the same dimensions  
for x in range(100):  
    aclose = time.time()  
    # using python loops  
    with_loops = numpy.nanmean([(close[col+1][row]/close[col][row])-1 for row in range(rows) for col in range(100-1)])  
    aloops = time.time()  
    all_loops.append(aloops-aclose)  
    aloops = time.time() # reset the timer so the append call above isn't counted in the numpy timing  
    # using numpy formulas  
    pure_numpy = numpy.nanmean(numpy.rate(1,0,-close[:-1],close[1:])) # this is actually incorrect, but regardless  
    anumpy = time.time()  
    all_numpy.append(anumpy-aloops)


print "With loops time: ", numpy.mean(all_loops)  
print "Using Numpy formulas", numpy.mean(all_numpy)  

Results:

With loops time:  0.259696528912 #seconds  
Using Numpy formulas 0.0344589233398 #seconds  

As expected, numpy is quite a bit faster, but both versions are still very quick. I don't know why this would bog down the entire pipeline. If I did this outside of the pipeline, would it be faster? It makes me wonder: if it takes this long to do something this simple, how feasible is it to run far more complex algorithms?
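As an aside on the timeit note above: I think timeit will accept a callable directly, which sidesteps the setup-string awkwardness with numpy arrays. Something like:

import numpy
import timeit

close = numpy.random.rand(100, 1788)

def numpy_version():
    # same kind of whole-array operation as in test.py above
    return numpy.nanmean(close, axis=0)

# total seconds for 100 calls; divide by 100 for the per-call average
print timeit.timeit(numpy_version, number=100)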

Cheers,

Alex

Jamie,

Solved the problem: my calculation of the rates was wrong. I should have used:

nanmean(close[1:] / close[:-1], axis=0) - 1  

Works great now, thanks!
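For anyone who runs into the same thing, the full factor presumably ends up looking roughly like this (the original class with the nested loop replaced by that one vectorized line):

from numpy import nanmean
from quantopian.pipeline import CustomFactor
from quantopian.pipeline.data.builtin import USEquityPricing

class AverageRates(CustomFactor):
    inputs = [USEquityPricing.close]
    window_length = 100

    def compute(self, today, assets, out, close):
        # close[1:] / close[:-1] is the day-over-day price ratio for every
        # asset at once; averaging over the time axis (axis=0) gives one mean
        # daily return per asset.
        out[:] = nanmean(close[1:] / close[:-1], axis=0) - 1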