Why does the bar count passed to data.history change the output of talib indicators such as MACD and RSI? This is driving me crazy!! If I set up, for instance, an RSI function such as
RSI = talib.RSI(price[stock], timeperiod=18)[-1]
then the backtest results should be the same whether the history call is
price = data.history(context.sec, 'price', 22, '1d') or price = data.history(context.sec, 'price', 100, '1d'), or any other bar count.
I don't know if I'm thinking about this the wrong way, but as long as the bar count is longer than the RSI timeperiod, the only thing that should change the backtest results is a change in timeperiod. Is this not right? Can anyone explain this to me???
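For what it's worth, here is a minimal standalone repro of the effect, outside of any backtest. It uses my own pure-Python version of Wilder's RSI recursion (an assumption on my part that this matches what talib does internally, not talib itself, and the price series is fake): the smoothed gain/loss averages are seeded from the first `timeperiod` bars and then updated recursively, so where the series starts affects every later value, and the last RSI value changes with bar count even though timeperiod is fixed.

```python
import random

def wilder_rsi(prices, period=18):
    """RSI via Wilder's smoothing (my reimplementation, assumed to
    mirror talib's recursion -- not talib itself)."""
    deltas = [b - a for a, b in zip(prices, prices[1:])]
    gains = [max(d, 0.0) for d in deltas]
    losses = [max(-d, 0.0) for d in deltas]
    # Seed the averages from the first `period` bars of the window...
    avg_gain = sum(gains[:period]) / period
    avg_loss = sum(losses[:period]) / period
    # ...then update recursively: every later value depends on that seed,
    # so a different window start gives a different final value.
    for g, l in zip(gains[period:], losses[period:]):
        avg_gain = (avg_gain * (period - 1) + g) / period
        avg_loss = (avg_loss * (period - 1) + l) / period
    if avg_loss == 0:
        return 100.0
    return 100.0 - 100.0 / (1.0 + avg_gain / avg_loss)

# Deterministic fake random walk standing in for data.history output.
random.seed(42)
prices = [100.0]
for _ in range(99):
    prices.append(prices[-1] + random.uniform(-1.0, 1.0))

rsi_22 = wilder_rsi(prices[-22:], period=18)   # like bar_count = 22
rsi_100 = wilder_rsi(prices, period=18)        # like bar_count = 100
print(rsi_22, rsi_100)  # the two last values differ
```

Running this, the two calls disagree on the same final bar, which (if my reimplementation is faithful) would explain why changing the bar count in data.history moves the backtest results.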