After much deliberation, we’ve decided to remove the paper trading feature from Quantopian. The service will be shut down after market close on December 31. However, you will continue to have access to the past results of any paper trading algorithms in your account. We know that many of you use paper trading and have expressed a desire for us to keep it around, so this was a difficult decision to make. Below, we share what drove us to this decision.
In another thread, several of you shared why paper trading is important to you. Additionally, we reached out to several paper traders privately, and learned about a few more use cases. There were a variety of responses, but a few themes arose:
- Some community members aren’t interested in an allocation or participating in the contest. For folks in this group, paper trading provides value that is otherwise only available by submitting to the contest.
- Other than the contest, paper trading is regarded as the best tool on Q for out-of-sample testing.
- Other than the contest, paper trading is the best way to track the day-to-day results of an algorithm.
These are all valid points. The platform is currently optimized for researching and developing contest algorithms, with an eye to getting an allocation. That said, our decision to remove paper trading is not about further optimizing the platform for researching contest algorithms. Instead, it’s about moving the platform toward a more consistent experience.
In its current form, paper trading runs on separate infrastructure from all of our other systems, it is built on technology that we no longer use anywhere else, and it doesn’t work with datasets that have holdouts. Because paper trading has its own infrastructure, our engineering team has to spend extra time maintaining the service; and because datasets with holdouts don’t work in paper trading, paper traders don’t benefit from new datasets. Cutting the maintenance cost of paper trading will allow us to improve the platform at a faster rate.
Looking ahead to 2020, we are starting to think about how to leverage existing features, like the backtester and pipeline, to support more use cases. The contest and fund both operate on top of our backtesting infrastructure. We would like to see if there are ways to build on top of these features to support a wider variety of use cases and potentially offer an alternative for testing strategies out-of-sample. In the meantime, we suggest running regular backtests on an algorithm that you want to test out-of-sample. The backtester is designed to avoid lookahead bias, so if you keep the code the same, it serves as an effective test.
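To make the suggested workflow concrete, here is a minimal sketch in plain Python (the `run_backtest` function and all names are hypothetical stand-ins for illustration, not Quantopian's API): freeze the algorithm code, record the date your last backtest ended, and on each rerun treat only the returns after that date as out-of-sample.

```python
from datetime import date, timedelta

def run_backtest(start, end):
    """Stand-in for a backtest run: yields one deterministic daily
    return per calendar day in [start, end]. Hypothetical helper for
    illustration only -- not a real Quantopian/zipline function."""
    days = (end - start).days + 1
    # Deterministic pseudo-returns, so reruns over the same dates agree,
    # mirroring a backtester that keeps the algorithm code unchanged.
    return {start + timedelta(d): ((d * 37) % 7 - 3) / 1000.0
            for d in range(days)}

# 1. Original backtest: everything up to `frozen_end` is in-sample.
start = date(2019, 1, 1)
frozen_end = date(2019, 6, 30)
in_sample = run_backtest(start, frozen_end)

# 2. Later rerun with the *same* algorithm code and a later end date.
rerun = run_backtest(start, date(2019, 12, 31))

# 3. Only returns after `frozen_end` count as out-of-sample: the
#    algorithm was written without knowledge of those dates.
out_of_sample = {d: r for d, r in rerun.items() if d > frozen_end}

print(f"in-sample days:     {len(in_sample)}")
print(f"out-of-sample days: {len(out_of_sample)}")
```

The key discipline is in step 2: the code must not change between runs, because any edit made after seeing new data contaminates the out-of-sample period.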
Additionally, we are looking to create more opportunities for community members to earn money. We recognize that the constraints on fund algorithms are very strict, and we are looking for ways to make it possible for authors of other types of strategies (e.g. long-only, sector-specific, global markets, etc.) to get rewarded. We hope to have more information to share on this topic soon.
We know this is not what some of you wanted to hear. This was not an easy decision, and we understand the frustration of losing a feature that you use. While we don’t have any news to share now, we plan to look at other ideas for improving out-of-sample testing on algorithms that don’t meet the contest criteria.
Best,
Jamie