10 million with 2.8 leverage (2 Sharpe algo)

I tried to test this algorithm on 10M to comply with the new contest rules and it performed decently.


Hi Pravin!
I was wondering, is there any way to run pyfolio against the above algorithm, and/or to see its weaknesses and strengths? I'd like to gain a further understanding and do more analysis on ways to improve it.
Many thanks,
Best,
Andrew

Nice algo!

Just to be clear, this is solving for portfolios hedged against the top 10 ICA-discovered factors, right?

@Simon. yes.

Here's a longer backtest. Seems like it doesn't do so well under all market conditions?

Here is a long backtest of a simple algo (50-50-252 SHY, IEF, SPY), Anthony FJ Garner's idea, with the same leverage for comparison.

Tell you one thing, chaps: you ain't gonna be able to borrow cheaper than the US Treasury (SHY, IEF)! You could of course look at the futures markets and see how an implementation works out there, but this is assuredly not going to work paying brokers' margin rates.

Another no-brainer algo (three assets, rebalanced monthly, equally weighted), validated by backtesting back to 1870, has comparable or better results with leverage of only 1.8.
Why do we need that long-short nano technology and pay additional margin rates?
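For reference, a minimal sketch of such a monthly equal-weight rebalance in Quantopian's API. The sids are placeholders (the post doesn't name the three assets), and the 1.8x leverage is applied uniformly:

def initialize(context):
    # Hypothetical three-asset basket; substitute whatever assets you backtested.
    # The sids below are guesses at SPY, IEF, and GLD.
    context.assets = [sid(8554), sid(23870), sid(26807)]
    context.leverage = 1.8
    schedule_function(rebalance,
                      date_rules.month_start(),
                      time_rules.market_open(minutes=30))

def rebalance(context, data):
    # Equal weight across the basket, scaled by the target leverage.
    weight = context.leverage / len(context.assets)
    for asset in context.assets:
        if data.can_trade(asset):
            order_target_percent(asset, weight)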

I think the idea of using the first N components to hedge against is pretty neat, and it will definitely cover the majority of the variance in the training set. Though, as Grant pointed out (and as my own playing with the algo confirmed), it looks like it fails out of sample, especially in times of crisis. The main issue I see is that this algo is making an implicit prediction that future variance in the selected equities will be the same as or similar to that of the training set, and that the components we find and hedge against will still represent significant exposure in the future.
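For concreteness, a minimal sketch of extracting those components from a returns window, assuming scikit-learn's FastICA (this is not the code from the original post):

import numpy as np
from sklearn.decomposition import FastICA

def ica_factors(returns, n_components=10):
    """returns: (days x stocks) array of daily returns over the training window.
    Returns the (days x n_components) independent factor series and the
    (n_components x stocks) exposure matrix to hedge against."""
    ica = FastICA(n_components=n_components, random_state=0)
    sources = ica.fit_transform(returns)   # independent drivers of returns
    exposures = ica.mixing_.T              # each stock's loading on each driver
    return sources, exposures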

Unfortunately, this is further compounded by the fact that we have also defined explicit parameters (e.g. 90 days in the original post), so we are making an even more specific prediction: that the ICA risk exposures of the past 90 days will be the same over the next N days. I tried to fix this issue by doing a random sampling (n=100) of potentially overlapping time frames over the past year and then taking the mean weight returned by the function. It seems a little better with regard to the quality of the weights: turnover is significantly lower, and the algo tends to move in and out of positions in a much smoother fashion.
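A minimal sketch of that sampling scheme. Here solve_weights stands in for the original window-to-weights optimizer (assumed, not shown), and the 60/120-day window bounds are illustrative:

import numpy as np

def sampled_weights(returns, solve_weights, n_samples=100,
                    min_len=60, max_len=120, seed=None):
    """returns: (days x stocks) array covering roughly the past year.
    solve_weights: callable mapping a returns window to a weight vector."""
    rng = np.random.default_rng(seed)
    n_days = returns.shape[0]
    samples = []
    for _ in range(n_samples):
        # Draw a random, possibly overlapping window from the history.
        length = int(rng.integers(min_len, max_len + 1))
        start = int(rng.integers(0, n_days - length + 1))
        samples.append(solve_weights(returns[start:start + length]))
    # Averaging across windows damps the dependence on any single window's
    # ICA fit, which is what lowered turnover in the modified algo.
    return np.mean(samples, axis=0)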


Here is a notebook comparing the original algo, modified to only trade the top 100, to my version with sampling. The difference in how it changes positions is quite noticeable.

Vlad why are you dissing other people's algos by posting other totally unrelated algos that have superficially similar characteristics? And it's not just you, there's a thick atmosphere of naively fatalistic skepticism these days, and the algo-peen one-upmanship has gotten out of hand. If someone's goal is to repeatedly try to demonstrate that algos are worthless, doing so on the message board of a company devoted to finding worthy algos seems like rude trolling, if not some kind of self-promotion (which has also been prevalent lately).

Pravin is one of the few people consistently continuing to share non-trivial ideas...

EDIT: You know what, forget it, it's none of my business. You all do what you like, I shouldn't criticize what people want to post on an open forum.

I've seen a variety of posts, as well, along the lines of "Why would anyone ever consider some complicated, beta-neutral algo dealing in large baskets of stocks when one could simply stir together a few ETFs, with perhaps a little spice, and call it a day?" I think it is reasonable to throw up a few baseline examples, as Vlad did. Asset allocation is a valid style of systematic investing, although perhaps not what Q is interested in at this point. Part of the issue, I think, is that Q has not really explained what they are doing, and why, and why the type of algos Vlad and others have posted are not attractive (or maybe they are--who knows). Certainly, if it is true that one can achieve similar long-term returns with the same or less leverage, then it is worth a head-scratch to consider if anything more sophisticated is justified.

As for Pravin's post, it looks like something that might do OK in the contest, so more power to him. If someone can explain what it does in a few clear paragraphs, I'd be interested (no references to papers, no fancy math terms--simple, intuitive talk only). What's that code doing?

It's using Independent Component Analysis (the same tech that can be used to isolate specific conversations from recordings of cocktail parties, very cool) to identify 10 common independent drivers of the returns of the top 500 stocks, if I recall correctly. Then it's using a convex solver to minimize the portfolio's residual exposure to all those factors/drivers.
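A minimal sketch of that hedging step, assuming cvxpy as the convex solver. The full-investment and per-name constraints below are illustrative assumptions, not the original algo's settings:

import cvxpy as cp

def hedged_weights(exposures, w_max=0.05):
    """exposures: (n_factors x n_stocks) array of ICA loadings."""
    n_stocks = exposures.shape[1]
    w = cp.Variable(n_stocks)
    residual = cp.sum_squares(exposures @ w)   # net exposure to each factor
    constraints = [cp.sum(w) == 1,             # stay invested (one unit net long)
                   cp.abs(w) <= w_max]         # per-name position limit
    cp.Problem(cp.Minimize(residual), constraints).solve()
    return w.value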

Simon,
Pravin is one of the few people consistently continuing to share non-trivial ideas...
I agree with that.
As for the rest of what you wrote... who are you to judge?
But that is a topic for another thread.

Here's James' version, over a longer time period. Seems like you'd just end up borrowing a lot of money to approximate SPY.

The problem with this algorithm is that we assume that if we hedge the exposure, stocks will outperform. Probably we should find two portfolios (winners and losers), then hedge their exposures and go long one basket and short the other.
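A minimal sketch of that two-basket idea, assuming a simple trailing-return momentum ranking (the signal, lookback, and decile cutoff are illustrative, not Pravin's, and the factor-hedging step for each basket is omitted):

import pandas as pd

def winner_loser_weights(prices, lookback=126, quantile=0.10):
    """prices: DataFrame (days x stocks). Returns dollar-neutral weights."""
    momentum = prices.iloc[-1] / prices.iloc[-lookback] - 1.0
    n = max(int(len(momentum) * quantile), 1)
    ranked = momentum.sort_values()
    weights = pd.Series(0.0, index=momentum.index)
    weights[ranked.index[-n:]] = 1.0 / n     # long the winners basket
    weights[ranked.index[:n]] = -1.0 / n     # short the losers basket
    return weights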

For the strategy I'm playing around with, I don't attempt to find two equal-weight baskets, one long and one short. Rather, the relative weights of the long and short baskets are not constrained (e.g. at any point in time, the algo could be all long, or all short, or an arbitrary ratio). Beta is reduced to ~ 0 by adding a position in an ETF (e.g. SPY).
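A minimal sketch of that beta-neutralizing step: estimate the combined basket's beta to SPY from trailing daily returns and size an offsetting SPY position (the covariance estimator and the idea of using one number for the whole basket are assumptions here):

import numpy as np

def spy_hedge_weight(basket_returns, spy_returns):
    """Both inputs are 1-D arrays of daily returns over the same window.
    Returns the SPY weight that takes the portfolio's beta to ~0."""
    beta = (np.cov(basket_returns, spy_returns)[0, 1]
            / np.var(spy_returns, ddof=1))
    return -beta   # e.g. short SPY when the basket is net long the market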

What is the basic outline of your algo? From Simon's explanation, I don't understand the principle behind it. Why would you expect it to be profitable? Or is it just expected to be a smoothed version of SPY that then gets boosted with leverage?

You might try a much shorter time scale (e.g. 5 days of minutely data, perhaps smoothed?).

Very impressive, Pravin!

Thanks, Lake Austin. It fails to perform during the 2008-09 crisis, and I am still working to find out why.

Well, no offense, but my feeling is that no pure stock strategy could survive a crisis like 08-09; the best strategy is to avoid it.