Alpha Vertex PreCog test

From the looks of this chart, the data appears to be overfit. Very disappointing performance in 2017. Anyone else noticing such degradation in 2017?


I noticed the same, and that was my main concern with this dataset.

Another attempt, slightly better, but the drawdown period is still more than 6 months.

Yep. If you see the discussion on https://www.quantopian.com/posts/alpha-vertex-precog-dataset, Michael Bishop (one of the Alpha Vertex guys) claims that they are "hyper sensitive to both overfit and lookahead bias" but didn't offer up any evidence for why we should believe there isn't a problem. Your results show a big gnarly inflection point out of sample, suggesting they need to sharpen their pencils.

I should mention that I filter for profitable stocks only (EBITDA > 0). Maybe that is affecting the results. I'll post a new backtest without the filter shortly.
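Roughly, such a filter looks like this in pipeline. This is just a sketch: it assumes the Morningstar fundamentals field income_statement.ebitda and the Q1500US universe, which may not match what my backtest actually used.

# Sketch of a profitability screen, assuming Quantopian's Morningstar
# fundamentals; the field path income_statement.ebitda and the Q1500US
# universe are assumptions, not necessarily the original filter.
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data import morningstar
from quantopian.pipeline.filters import Q1500US

def make_pipeline():
    universe = Q1500US()
    ebitda = morningstar.income_statement.ebitda.latest
    profitable = ebitda > 0          # keep only profitable names
    return Pipeline(
        columns={'ebitda': ebitda},
        screen=universe & profitable,
    )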

Actually, there could be a simpler test. We could take the top 500 stocks, measure the prediction score in-sample and out-of-sample, and compare.

Is this really surprising?

Well, it is a surprise worth $135/month ;)

Well, I'd kinda thunk they would have done 6-12 months of paper trading before going live on Q, but maybe they rushed into things. It is kinda surprising that they'd go public without being pretty darn sure they'd latched onto something. It'll be interesting to see the results in a year or two.

I accept @Pravin's test results and other members' comments to the effect that the PreCog dataset might not be that predictive. I came to the same conclusion.

But, here is the thing.

Even if there might be no predictive powers behind the PreCog dataset, it might not matter.

For instance, I could not differentiate the dataset's advantage from what is available from the market. But, it did provide me with an excuse to get in and out of trades. And, for me, that was sufficient.

See my latest posts on that subject here: https://www.quantopian.com/posts/alpha-vertex-precog-dataset

The outcome of the trading strategy has even more merit if the dataset is not the reason for the alpha generation.

The alpha generated is due to the trading mechanics, the methodology used in this particular trading strategy. And since most of the program has been changed, I cannot say that the original program is responsible for the outcome either. I changed the strategy's trading philosophy by controlling its entire payoff matrix, step by step.

It is like any kind of software development: you cannot do it all at once. You need to debug and test your code as you go along. And the first test is always: does the program crash or not? Then, let's see the results. Did the program do what was requested?

One issue I see here is that there's no visibility into changes of the "algo" used by Alpha Vertex. I wonder if Q, as part of their arrangement with the vendor, gets any heads-up, or if Alpha Vertex can make changes as they see fit, without notification? In other words, say we wait a year to see how things play out. Will the Alpha Vertex team have been fiddling with the strategy all the while, so that it will be hard to sort out out-of-sample performance, relative to a backtest?

I sent an email asking them these questions but they never replied :(

Grant, Pravin has demonstrated in his first post that the PreCog dataset had no alpha. He also demonstrated that it appeared to be breaking down near the end, after their dataset release. Since then, Alpha Vertex has not come out to defend their approach or their data. Shouldn't that alone answer your question?

After modifying the program in the other thread, I found I could extract some alpha by first ignoring most of its trading procedures, and the composition of the dataset itself. I used the program's structure to generate trading activity that appeared more as a statistical excuse to trade than any kind of forecasting tool.

So this raises a funny question. If you totally ignore the predictive abilities of a dataset and still use part of it as an excuse to trade, are you in fact making predictions on that dataset? Even if it is just one in a gazillion possible subsets of the Q1500US?

Just doing a head-scratch on how such derived signals supplied as data feeds fit within the Q-sphere. It's kinda like plugging into some Q user's algo that he could change at will without notice. I guess if things are done well and consistently, it's all good, but if not, then there's no way to know what's going on. There's no prospectus for this kind of beast. It is surprising that it is even legal (the same could be said of some other signal feeds, as well). Caveat emptor, I guess. Interesting that they are not regulated.

@Grant, are you addressing the legality of what a data vendor provides or the legitimacy of Q using it?

That dataset, or another, might not matter much. @Pravin has already shown that this one might not have any alpha. I concurred with his findings, saying that the trading strategy might be trading on market noise. And, if trading on market noise, you are trading with no available alpha coming from the predictions.

If you make trades following someone else's predictions, and you do not make any money, then the answer is very simple: those predictions were no good, no better than the general market.

If the predictions had value, you would inevitably outperform just by using that dataset. And again, @Pravin showed it was not the case.

I found a slight difference in the dataset from what the market had to offer. The dataset's advantage was $0.27 on an $8,889 bet. Not enough to even say there was a difference between the general market noise and the dataset. But, I do not mind.

I see my job as a strategy developer as extracting some alpha from the data, no matter what. If I succeed, I can only attribute it to the trading methodology used, since the data itself was of no real help (no predictive powers). And that alone makes the trading procedures used to generate the alpha valuable.

Updated comparison of in-sample vs. out-of-sample performance. Disappointing.

IN SAMPLE
start_date = '2010-01-01'
end_date = '2017-02-01'

OUT OF SAMPLE
start_date = '2017-02-01'
end_date = '2017-09-10'
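The comparison follows the earlier suggestion of measuring the prediction score in-sample and out-of-sample. A sketch of one way to compute it: the joined DataFrame df and the column names 'predicted_return' and 'realized_return' are assumptions for illustration, not the actual notebook code.

# Compare the daily rank information coefficient (IC) of the predictions
# in-sample vs. out-of-sample.  Assumes a pandas DataFrame indexed by
# (date, asset) with hypothetical columns 'predicted_return' (from the
# PreCog feed) and 'realized_return' (forward returns from pricing data).
from scipy.stats import spearmanr

def daily_rank_ic(frame):
    # Spearman rank correlation between prediction and outcome, per day.
    return frame.groupby(level='date').apply(
        lambda day: spearmanr(day['predicted_return'],
                              day['realized_return'])[0])

def compare_ic(df):
    dates = df.index.get_level_values('date')
    in_sample = df[(dates >= '2010-01-01') & (dates < '2017-02-01')]
    out_sample = df[(dates >= '2017-02-01') & (dates <= '2017-09-10')]
    print('in-sample mean IC:     %.4f' % daily_rank_ic(in_sample).mean())
    print('out-of-sample mean IC: %.4f' % daily_rank_ic(out_sample).mean())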

The question in my mind is how one might determine if the innards of the Alpha Vertex Precog black box are the same today as they were before, in-sample compared to out-of-sample (or more generally, versus time, simply attempt to detect changes, ignoring any knowledge of in-sample and out-of-sample time periods). It is not entirely obvious to me that the most recent 7-month period is anomalous. In other words, at a high statistical confidence level, can we say that the most recent 7-month period differs from any other 7-month period picked at random from the data set? Or considering the data set as a time series, is there a test that could be applied that would suss out statistically significant anomalies? And is there a major anomaly associated with the transition from in-sample to out-of-sample periods?
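To make that concrete, one crude way to ask the question: take some daily quality metric of the feed (the series daily_metric below is assumed, e.g. a daily rank IC), and see where the mean over the most recent 7-month window falls within the distribution of means of randomly placed historical windows. Note the random windows overlap and are autocorrelated, so this is illustrative rather than a rigorous hypothesis test.

# Where does the most recent ~7-month (147 trading day) window rank against
# randomly chosen historical windows of the same length?  daily_metric is a
# hypothetical pandas Series of some daily quality measure, indexed by date.
import numpy as np

def window_percentile(daily_metric, window=147, n_draws=10000, seed=0):
    values = daily_metric.dropna().values
    recent = values[-window:].mean()
    history = values[:-window]
    rng = np.random.RandomState(seed)
    starts = rng.randint(0, len(history) - window, size=n_draws)
    draws = np.array([history[s:s + window].mean() for s in starts])
    # Fraction of random historical windows whose mean is below the recent one;
    # a value near 0 says the recent window looks unusually bad.
    return (draws < recent).mean()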

The problem I see with this sort of high-level black box signal feed is that it is then on the quant to develop protection against changes to the inner-workings of the black box (whether due to human fiddling or a lack of robust algo design or whatever). As I understand, it is completely unregulated; there is no obligation to notify anyone of anything. So, it seems one needs code to check periodically if something changed within the black box, based on its output time series.

@Grant, I understand your point, but what can we do about it? Trying to evaluate how far you can trust a black box is what Quantopian has been doing for some years, where the black boxes are the algorithms built by its users. Sure, Quantopian knows the algorithms don't change once they have them, which is different from the datasets. But even so, if an algorithm uses machine learning, the model evolves even without any change to the code, and you need to decide somehow when to stop "trusting" an algorithm and when to keep using it. So the problem of trusting a dataset is similar to the evaluation process Quantopian performs on algorithms. It's not an easy task, I am pretty sure about that, and Quantopian could add some information, but the final answer is that either you invest lots of time building your own "dataset evaluation engine" or you simply use a dataset that performs well and stop using it when it doesn't anymore.

Also, given that live trading is no longer possible on Quantopian, the problem of trusting a dataset is now a problem for the Q hedge fund only. As the performance of a dataset is inherited by the algorithm's performance, and as Quantopian already has a process in place for out-of-sample evaluation of their algorithms, I believe they have already solved their problem.

the problem of trusting a dataset is now a problem for the Q hedge fund only

Not really, in my opinion. The case in point brought to mind how it would be nice to have a "black box change detector" that would do better than waiting 6-12 months for out-of-sample data. If our intuition is correct, there is something fishy going on with the PreCog data set. It is also reasonable to think that the fishiness was an event in time (i.e. a regime change in the time series). So, I'm wondering if the right technique (e.g. http://scikit-learn.org/stable/) could be used to detect changes earlier.

I'd heard that Quantopian would like to offer thousands of data sets. Such a "regime change detector" might be useful, particularly for black box data such as the PreCog data set. The idea would be that if a regime change is detected, the data set is simply dropped, on the notion that something in its black box changed dramatically and it needs to be re-vetted.
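For example, a very crude detector along those lines, far simpler than anything in scikit-learn, could be a one-sided CUSUM on a daily quality metric of the feed. All of the parameters below (baseline length, drift, threshold) are arbitrary illustrations and would need tuning on the in-sample period.

# One-sided CUSUM: flag the first day the metric drifts persistently below
# its baseline.  daily_metric is a hypothetical array-like of some daily
# quality measure of the feed (e.g. daily rank IC).
import numpy as np

def cusum_alarm(daily_metric, baseline_days=250, drift=0.25, threshold=5.0):
    x = np.asarray(daily_metric, dtype=float)
    mu, sigma = x[:baseline_days].mean(), x[:baseline_days].std()
    s = 0.0
    for i in range(baseline_days, len(x)):
        z = (x[i] - mu) / sigma
        s = min(0.0, s + z + drift)     # accumulate downside surprises only
        if s < -threshold:
            return i                    # candidate regime change: re-vet feed
    return None                         # no alarm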

Here's an update to Luca's notebook above. I'm not so experienced in interpreting these things, but it seems that the Alpha Vertex ML algo still hasn't sorted out how to make money.

Here's an updated backtest of Jamie's on https://www.quantopian.com/posts/alpha-vertex-precog-dataset (Backtest ID: 58bde10db3fab35e38fef4cc). I only changed the end date of the backtest. I'll post a tear sheet next.

The tear sheet for the backtest above.

It is worth noting that Q did add a "warning" of sorts on their data store pages:

Note: Quantopian started collecting this dataset live on March 6, 2017. Why this matters: https://www.quantopian.com/posts/quantopian-partner-data-how-is-it-collected-processed-and-surfaced

@Grant, that does make the point that the Precog dataset might have been somehow “massaged” if not “doctored”. This should raise a lot of other questions.

Just as @Pravin had shown before that there might not be much alpha there, your tear sheet shows the same. Note that the strategy is making less than $3.00 net profit per trade on 62,851 trades. A slight change in fee structure, and apparently a little more time, and that might have disappeared too. Note also that gross leverage came in at 3.46, and all it got was $2.94 a trade to pay the leveraging fees, which were not accounted for.
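A rough back-of-the-envelope on that last point. Only the trade count, per-trade profit, and gross leverage come from the tear sheet; the starting capital, backtest length, and borrow rate below are made-up assumptions, and treating everything above 1x equity as borrowed is a simplification for a long-short book.

# Hypothetical financing-cost illustration; assumed values are marked.
trades = 62851
profit_per_trade = 2.94        # net $ per trade, from the tear sheet
gross_leverage = 3.46          # from the tear sheet
capital = 1000000              # assumed starting capital
years = 7.5                    # assumed backtest length
borrow_rate = 0.02             # assumed annual rate on borrowed funds

net_profit = trades * profit_per_trade
borrowed = capital * (gross_leverage - 1.0)   # simplification
finance_cost = borrowed * borrow_rate * years

print('net trading profit : $%.0f' % net_profit)    # ~ $184,782
print('financing cost     : $%.0f' % finance_cost)  # ~ $369,000 under these assumptions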