Accessing the latest data in a batch transform
@batch_transform(refresh_period=1, window_length=10)  
def get_values(datapanel):  
    # We want to return the min and the max price. Just for interest,  
    # we also log the current price.  
    prices_df = datapanel['price']  
    min_price = prices_df.min()  
    max_price = prices_df.max()  

    print("Prices:")  
    myDT=get_datetime()  
    print(prices_df[myDT,sid(8554)])

    if min_price is not None and max_price is not None:  
        return (max_price, min_price)  
    else:  
        return None  

When doing a batch transform, is there a way to access the last day's data? For example, what if I wanted to return the last day's price divided by the 10-day average price? (I know this can be done with simple transforms, but how would I do it in a batch transform?) If I call

print(prices_df[sid(8554)])  

I get the prices for ONLY sid(8554). But how would I get the prices for ONLY the latest date? I tried

    myDT=get_datetime()  
    print(prices_df[myDT])  

and I get the error message

KeyError: no item named 2013-01-17 00:00:00+00:00  
  File test_algorithm_sycheck.py:18, in handle_data  
  File /zipline/transforms/batch_transform.py:202, in handle_data  
  File /zipline/transforms/batch_transform.py:267, in get_transform_value  
  File test_algorithm_sycheck.py:77, in get_values  
  File frame.py:1928, in __getitem__  
  File generic.py:570, in _get_item_cache  
  File internals.py:1383, in get  
  File internals.py:1525, in _find_block  
  File internals.py:1532, in _check_have
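For context, the KeyError happens because `df[key]` in pandas looks up columns, not rows, so a datetime key fails against a frame whose columns are sids. A minimal sketch in plain pandas (outside Quantopian; the ticker and prices here are made up):

```python
# df[key] is COLUMN indexing; datetimes are row labels here,
# which is why prices_df[myDT] raises a KeyError.
import pandas as pd

dates = pd.date_range("2013-01-15", periods=3, tz="UTC")
prices_df = pd.DataFrame({"SPY": [147.0, 148.1, 147.6]}, index=dates)

try:
    prices_df[dates[-1]]                      # column lookup -> KeyError
except KeyError:
    print("KeyError: datetimes label rows, not columns")

last_row = prices_df.iloc[-1]                 # positional row indexing
last_by_label = prices_df.loc[dates[-1]]      # label-based row indexing
print(float(last_row["SPY"]), float(last_by_label["SPY"]))
```

On the older pandas that Quantopian ran, `prices_df.ix[-1]` did the same job as `.iloc[-1]`.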


Try

    log.info(prices_df[sid(8554)][-1])

That only prints the last price for one ticker. What if I wanted to slice the array so I was only looking at the last date, rather than slicing the array so I'm only looking at one column?

Hi John,

Here's some code that might be helpful. Once you get a trailing window of data from the batch transform as a numpy ndarray, you can slice it any way you want.
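A minimal sketch of that ndarray slicing (plain numpy, outside zipline; the window values are made up, with rows as days and columns as securities):

```python
# Once the trailing price window is a numpy ndarray (DataFrame.values,
# or .as_matrix() on the old pandas), any slice is available.
import numpy as np

window = np.array([[10.0, 20.0],
                   [11.0, 21.0],
                   [12.0, 19.0]])            # rows = days, cols = securities

last_day = window[-1, :]                     # latest prices, all securities
one_sec = window[:, 0]                       # full history of one security
ratio = window[-1, :] / window.mean(axis=0)  # current price / window average
print(last_day, one_sec, ratio)
```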

Grant

Thanks Grant - I appreciate the help

Is it really best to convert the dataframe with .as_matrix()? I feel like losing the indices would make the rest of the code more difficult. So, as an exercise in Python (and not necessarily the best algo to trade on), I'd like to use a batch_transform to code a function that calculates a 5-day average price, divides the current daily price by that average, ranks a list of securities, and then trades the top security (the one whose current price is highest relative to its 5-day average).

So far I can display the average price, but I'm stuck on generating a series of the current price divided by the 5-day average price, ordering that series, and buying only the top one.

Hi John,

I also think you should try to preserve the Pandas data structure rather than converting it with .as_matrix(). I think this does what you're asking:

@batch_transform(refresh_period=R_P, window_length=W_L)  # set globals R_P & W_L above  
def last_over_avg(panel):  
    # current price divided by the average price over the window  
    return panel['price'].ix[-1] / panel['price'].mean()  

When working with batch_transforms it's definitely useful to look at some of the basics of Pandas slicing and indexing functionality.
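As a minimal illustration of that slicing in plain pandas (outside zipline; `.ix` is gone from modern pandas, so `.iloc[-1]` stands in for `.ix[-1]`, and the columns and prices here are made up):

```python
# Per-column ratio of the latest price to the window-average price.
import pandas as pd

prices = pd.DataFrame({"A": [10.0, 11.0, 12.0],
                       "B": [20.0, 21.0, 19.0]})

ratio = prices.iloc[-1] / prices.mean()   # current / average, per security
print(ratio)
```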


I didn't realize -1 meant the last entry of the index. That helps a lot!

@John -
You could try something like this as well


def initialize(context):  
    context.SPY = sid(8554)  
    context.XLY = sid(24)  
    context.XLP = sid(16841)  
    context.Universe = [context.SPY, context.XLY, context.XLP]

def handle_data(context, data):  
    best = best_ave_over_last(data, context)

@batch_transform(window_length=5, refresh_period=1)  
def best_ave_over_last(dp, context):  
    prices = dp['price']  
    aves = prices.mean(0)           # 5-day average price per security  
    ranks = prices.ix[-1] / aves    # current price / average price  
    best_rank = -100  
    best_stock = None  
    for stock in context.Universe:  
        if ranks[stock] > best_rank:  
            best_rank = ranks[stock]  
            best_stock = stock  
    #log.debug("Best stock = " + str(best_stock) + " with rank = " + str(best_rank))  
    return best_stock  

I should add that there may be an easier way to sort the ranking and grab the top stock; the loop just seemed most familiar to me. I tried using .sort on ranks without success. Sorting would obviously be of interest if you wanted to trade the top 3 stocks or something like that.
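For what it's worth, here is a sketch of sorting the rank Series instead of looping (plain pandas, with ticker strings standing in for the sid objects in the thread). On modern pandas the method is sort_values(); the in-place .sort() of that era returned None, which may be why it seemed to fail:

```python
# Sort the rank Series descending and take the top entry (or top N).
import pandas as pd

ranks = pd.Series({"SPY": 1.01, "XLY": 1.03, "XLP": 0.99})

ordered = ranks.sort_values(ascending=False)
best_stock = ordered.index[0]         # top-ranked security
top3 = list(ordered.index[:3])        # or the top three
print(best_stock, top3)
```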

John,

Yes, working in Pandas is probably the way to go (I've been lazy getting up the learning curve). I just get the data into a format that scipy/numpy can operate on directly.

Grant