Truth value of a series
import talib

def initialize(context):  
    context.nvda = sid(19725)

def handle_data(context, data):  
    hist = data.history(context.nvda, ['price', 'high', 'low', 'close'], 50, '1d')  
    sma_50 = hist.mean()  
    sma_20 = hist[-20:].mean()  
    willr = talib.WILLR(hist['high'], hist['low'], hist['close'], timeperiod=14)
    # Williams %R ranges from -100 to 0; flip the sign so the
    # thresholds below read as positive 0-100 values.
    WILLR = -willr[-1]
    log.info(type(WILLR))

    open_order = get_open_orders()  
    if WILLR > 80 and sma_50 > sma_20:  
        if context.nvda not in open_order:  
            order_percent(context.nvda, 0.3)  
    elif WILLR < 20 and sma_20 > sma_50:  
        if context.nvda not in open_order:  
            order_percent(context.nvda, -0.3)  
    record(leverage=context.account.leverage)

I am getting an error on the line "if WILLR > 80 and sma_50 > sma_20:" saying "The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all()". Can anyone explain to me why?

EDIT: Never mind, got it. It seems I forgot to select hist['price'] before taking the mean.
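For anyone who hits the same error: the fix amounts to selecting the price column first, so each mean comes out as a scalar rather than a series.

      sma_50 = hist['price'].mean()        # mean over all 50 daily prices -> scalar
      sma_20 = hist['price'][-20:].mean()  # mean over the last 20 -> scalar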

3 responses

The variables 'sma_50' and 'sma_20' are both series. One cannot generally test the truth value of a series, which is why the error message recommends using a.empty, a.bool(), a.item(), a.any() or a.all().

The reason those are series is because of the statement

      hist = data.history(context.nvda, ['price', 'high', 'low', 'close'], 50, '1d')  

When the data.history method is given a single asset but multiple fields (as in this case), the result is a pandas dataframe. Taking the mean of a dataframe yields the mean of every column, NOT a single scalar. Perhaps try the following instead (which returns a pandas series).

      hist = data.history(context.nvda, 'price', 50, '1d')  

Hope that helps.
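To see the same behavior outside Quantopian, here is a minimal pandas-only reproduction (the column values are made up, just to stand in for hist):

      import pandas as pd

      # A small frame standing in for hist: .mean() on a dataframe
      # returns a series with one mean per column, not a scalar.
      df = pd.DataFrame({'price': [1.0, 2.0, 3.0], 'high': [1.5, 2.5, 3.5]})
      print(type(df.mean()))            # <class 'pandas.core.series.Series'>

      # Comparing two series yields a boolean series, and using that
      # in an `if` raises the "truth value is ambiguous" ValueError.
      try:
          if df.mean() > df[-2:].mean():
              pass
      except ValueError as e:
          print(e)

      # Selecting one column first gives a scalar, so `if` works fine.
      print(type(df['price'].mean()))   # <class 'numpy.float64'>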


Speed of history was tested using ['price', 'high', 'low', 'close'] in a single call vs. each of those fields fetched separately, once a day over 9 years.
Photo finish.

2019-11-08 06:31 timing:135 INFO avg 0.007745 lo 0.004089  hi 0.045839  history_indexing_into_dataframe  
2019-11-08 06:31 timing:135 INFO avg 0.007095 lo 0.003759  hi 0.043659  history_separate_calls  
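The harness itself wasn't posted; a minimal sketch of the idea, assuming a plain time.time() timer inside handle_data (variable names here are hypothetical), might look like this:

      import time

      def handle_data(context, data):
          # Time one multi-field call that returns a dataframe.
          t0 = time.time()
          data.history(context.nvda, ['price', 'high', 'low', 'close'], 50, '1d')
          elapsed_dataframe = time.time() - t0

          # Time four separate single-field calls.
          t0 = time.time()
          for field in ('price', 'high', 'low', 'close'):
              data.history(context.nvda, field, 50, '1d')
          elapsed_separate = time.time() - t0

          # Accumulate both timings per day; avg/lo/hi are then
          # computed over the full backtest and logged at the end.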

@Blue Thank you for testing that. I've always wondered how much faster it is to fetch all the data at once as a dataframe versus one field at a time as separate series. Looks like not much at all, if any. Fetching each data field separately does sometimes make one's code easier to read. In the case of the original algo, fetching each field separately might also have avoided the confusion of sma_50 and sma_20 being series objects rather than scalars, as in the sketch below.
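A minimal sketch of that separate-fetch style, using the same fields and windows as the original algo:

      # Each single-field history call returns a pandas series,
      # so .mean() is immediately a plain scalar.
      price = data.history(context.nvda, 'price', 50, '1d')
      high = data.history(context.nvda, 'high', 50, '1d')
      low = data.history(context.nvda, 'low', 50, '1d')
      close = data.history(context.nvda, 'close', 50, '1d')

      sma_50 = price.mean()         # scalar
      sma_20 = price[-20:].mean()   # scalar
      willr = talib.WILLR(high, low, close, timeperiod=14)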