I watched the webinar, "Three-Dimensional Time: Working with Alternative Data," and couldn't quite tell what was being reported as currently part of the Quantopian API versus what is still in the works. It also wasn't entirely clear what the webinar had to do with so-called "alternative data." I think the idea is that, with some changes to the way Quantopian handles data loads and corrections, more data sets could be uploaded, potentially by individual users. And maybe the changes would make it possible for Quantopian to run backtests on user-supplied fetcher data and know whether, and to what degree, the data had changed (so that algos using fetcher could be eligible for the contest and fund)? Does this have anything to do with the report that Quantopian aims to make a lot more data sets available (thousands, I recall)?
If anyone gleaned anything salient from the webinar, I'd be curious to hear it. Was it just an elaboration of what is described on https://www.quantopian.com/posts/quantopian-partner-data-how-is-it-collected-processed-and-surfaced? Or something more?
One use case of interest is the Precog data set (e.g. https://www.quantopian.com/data/alpha_vertex/precog_top_500). Is there any way to tell, from within code, what the in-sample and out-of-sample periods are?
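One heuristic, sketched below in plain pandas with synthetic data, builds on the `asof_date`/`timestamp` convention described in the partner-data post linked above: each record carries the date it applies to (`asof_date`) and the date Quantopian learned of it (`timestamp`). If backfilled (in-sample) rows carry their actual bulk-load timestamp, they will lag `asof_date` by a long stretch, while live-collected (out-of-sample) rows trail it by only a day or so. This is only an assumption about how the timestamps are populated, not a confirmed property of the Precog set, and the function name and threshold are illustrative.

```python
import pandas as pd

def split_in_out_of_sample(df, lag_days=2):
    """Guess the in-sample/out-of-sample boundary of a point-in-time data set.

    Assumes each row has an `asof_date` (the date the value applies to) and a
    `timestamp` (when the value became known). Backfilled rows are assumed to
    share a load timestamp far after their asof_date; live rows are assumed to
    trail asof_date by roughly a day. Returns (first out-of-sample asof_date,
    first live timestamp), or (last asof_date, None) if nothing looks live.
    """
    lag = df["timestamp"] - df["asof_date"]
    live = lag <= pd.Timedelta(days=lag_days)
    if not live.any():
        return df["asof_date"].max(), None
    return df.loc[live, "asof_date"].min(), df.loc[live, "timestamp"].min()

# Synthetic example: two years backfilled in one load on 2016-01-05,
# followed by live daily collection with a one-day reporting lag.
backfill = pd.DataFrame(
    {"asof_date": pd.date_range("2014-01-02", "2015-12-31", freq="B")}
)
backfill["timestamp"] = pd.Timestamp("2016-01-05")
live = pd.DataFrame(
    {"asof_date": pd.date_range("2016-01-04", "2016-06-30", freq="B")}
)
live["timestamp"] = live["asof_date"] + pd.Timedelta(days=1)
data = pd.concat([backfill, live], ignore_index=True)

boundary, first_live_ts = split_in_out_of_sample(data)
print(boundary.date())  # 2016-01-04: first asof_date that looks out-of-sample
```

If the backfill was loaded with simulated timestamps (e.g. `timestamp = asof_date + 1 day`), this heuristic cannot distinguish the two periods, which is exactly why it would be useful to have the boundary exposed explicitly in the API.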