Hi Alisa,
Would you be so kind as to elaborate on what you mean by "the data is fed in point-in-time"? I suspect my confusion would be resolved if I understood that.
Here is my understanding, and where I'm getting confused when it comes to fetch_csv():
First of all, I am guessing that what happens when I hit Ctrl+B is that whatever data is set up in initialize(context) is stored in the data[] dictionary and subsequently iterated through line by line. The function handle_data(context, data) is then called at every iteration / line.
Similarly, if the file passed to fetch_csv() is read only once, then it must be stored in memory as well, and there has to be some form of iteration through it too.
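To make my mental model concrete, here is a toy sketch in plain Python. This is emphatically not Quantopian's actual internals — the loop, the shape of the data dict, and the way handle_data is invoked are all just my assumptions about what happens under the hood:

```python
# Toy model of my mental picture of the backtest loop.
# NOT Quantopian internals -- just illustrating the idea that every
# bar of pre-loaded data triggers one handle_data() call.

def handle_data(context, data):
    # Called once per bar; `data` holds the current bar's values.
    context["calls"] += 1
    context["last_price"] = data["AAPL"]["close"]

def run_backtest(bars):
    context = {"calls": 0, "last_price": None}
    for bar in bars:           # iterate "line by line" through stored data
        handle_data(context, bar)
    return context

bars = [{"AAPL": {"close": p}} for p in (185.0, 185.2, 184.9)]
result = run_backtest(bars)
print(result["calls"], result["last_price"])   # -> 3 184.9
```

If that loop-per-bar picture is roughly right, my questions below are about how the CSV data gets merged into it.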
So now, what happens if I have 60 minutes of minute-by-minute Apple stock prices obtained via symbol(), but also a CSV file containing other Apple metrics only at the 15- and 30-minute marks, resulting in two iterations of different lengths? My guess is that all the data is stored in memory, as in data becomes of
dimensions: # time-stamps
x # symbols
x # extra metrics from csv
x # symbol parameters (e.g. close, open, etc.)
Is that correct?
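For what it's worth, here is how I imagine the sparser CSV series could be lined up against the dense minute bars — forward-filling the most recent CSV value at each minute. This is purely my guess at a mechanism, not something I know Quantopian does; the function name and the data here are made up:

```python
# Hypothetical alignment of a sparse CSV series (values only at minutes
# 15 and 30) against a dense minute-by-minute series, by forward-fill.
# Pure speculation about the mechanism -- all names are invented.

def forward_fill(csv_rows, minutes):
    """csv_rows: {minute: value}; return the latest value seen at each minute."""
    aligned = {}
    last = None
    for m in minutes:
        if m in csv_rows:
            last = csv_rows[m]   # a CSV row exists for this minute
        aligned[m] = last        # otherwise carry the previous value forward
    return aligned

csv_rows = {15: "metric@15", 30: "metric@30"}
aligned = forward_fill(csv_rows, range(1, 61))
print(aligned[14], aligned[15], aligned[29], aligned[30], aligned[60])
# -> None metric@15 metric@15 metric@30 metric@30
```

Under this guess, before minute 15 the metric would simply be missing (None), which is part of what I'd like confirmed.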
If so, then what happens if I have multiple lines in my csv with the same timestamp and ticker?
Would you be able to provide a general explanation of how the Quantopian software reconciles this?
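To be concrete about the duplicate case: if the rows were stored in something like a dict keyed by (timestamp, ticker), the last row would silently win. That is just one of several plausible behaviors (first-wins or raising an error being others), and I have no idea which, if any, Quantopian actually uses — this sketch is only to pin down what I'm asking:

```python
# One plausible (entirely hypothetical) reconciliation of duplicate
# (timestamp, ticker) rows: later rows overwrite earlier ones.

rows = [
    (15, "AAPL", {"metric": 1.0}),
    (15, "AAPL", {"metric": 2.0}),   # duplicate key for minute 15
    (30, "AAPL", {"metric": 3.0}),
]

table = {}
for ts, ticker, payload in rows:
    table[(ts, ticker)] = payload    # last write wins

print(len(table), table[(15, "AAPL")]["metric"])   # -> 2 2.0
```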
Thanks a lot for any insight!
Ray