July Contest Rules Update: Get Hedged

Our next contest kicks off on Wednesday morning at 9:30AM EDT, and it's time to update the rules.

People will remember that the significant rule change we made last month was to put strict limits on your algorithm's correlation to the market. This had a positive effect on the quality of algorithms that were submitted to the contest. We found a good number of promising algos for use in the hedge fund. Still, we found a couple of problems common to many of the algorithms, and we want the scoring to address them more accurately.

The first problem is excessively overfit backtests. When you see a Sharpe ratio over 4 (let alone 15 or 30!) in backtesting, you look at it with a skeptical eye; when that same algorithm has a terrible paper trading performance, your skepticism is vindicated. The scoring system catches up with these overfitted algorithms eventually. But still, it's not a good thing. People are spending too much time trying to maximize their backtest returns, and not enough time making a good algorithm that will likely perform consistently in the future. And, the leaderboard early in the month is pretty cluttered with algorithms that really aren't going to make it.

First big rule change: No more backtest score component in your overall score. Previously, your overall score was a blend of the paper trading and backtest score, multiplied by the consistency score. In July and going forward, your score will be just the paper trading score multiplied by the consistency score. That's going to remove any scoring incentive to maximize backtest results to unrealistic levels. The incentive should hopefully transfer to a) making an algorithm that performs well into the future and b) making the algorithm perform consistently over long periods of time. The downside to this change is that the leaderboard will have more volatility day-to-day, particularly early in the month, but even through several months in some cases.

Second big rule change: Your algorithm must be hedged. When we added the beta filter last month, it had the effect of removing most long-only strategies from contention. There are still too many algorithms on the leaderboard that are long-only; they are market-timing or momentum strategies. These algos tend to focus on a single stock and go in-and-out of that stock according to some signal. From our perspective, those algorithms have too much market risk and too much concentration risk. They have low beta because of their particular buy-and-sell patterns, but they are still susceptible to market movements or to bad news about a single company. Instead, what we'd much rather see is some sort of pair, or better, a long and short basket strategy. If you're using some signal to decide when to go long, we'd like you to use the same signal to find something to go short, and vice versa. We want you to hedge your market risk by always being appropriately long and short. As a practical matter, the scoring system will check your positions at the end of every day to verify that you're hedged with longs and shorts (or entirely in cash).
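
To make that concrete, here is a minimal sketch of a dollar-neutral long/short rebalance. It assumes the standard contest API calls like schedule_function and order_target_percent; the two baskets are hard-coded placeholders for whatever signal you use.

    def initialize(context):
        # Placeholder baskets -- substitute the output of your own signal.
        context.longs = [symbol('KO')]     # expected to outperform
        context.shorts = [symbol('PEP')]   # expected to underperform
        schedule_function(rebalance,
                          date_rules.every_day(),
                          time_rules.market_close(minutes=30))

    def rebalance(context, data):
        # Split gross exposure evenly: 50% long, 50% short, so the book
        # ends every day holding both longs and shorts.
        long_weight = 0.5 / len(context.longs)
        short_weight = -0.5 / len(context.shorts)
        for stock in context.longs:
            order_target_percent(stock, long_weight)
        for stock in context.shorts:
            order_target_percent(stock, short_weight)

The end-of-day check only cares that both sides are present; keeping the two sides roughly equal in dollar terms is what actually controls the market exposure.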

A couple other minor changes:

  • Your algorithm's backtest must make money. This is to eliminate algorithms that we'd simply never use but that managed to get lucky in paper trading.
  • Your algorithm must make trades in paper trading. This has always been true in practice, but the other rules changes made it a little more possible for a no-trade algorithm to win. Going forward, no-trade algorithms cannot win.

Looking forward

I'd like to apologize for the lateness of this post. I would prefer to have put this up 10 or 14 days earlier, but we've been working down to the wire testing these and other possible improvements to the rules.

One of the rules changes that didn't make the cut this time is a change to the contest duration. We want to make the contest period longer. On one hand we have overfitting of the backtest; on the other hand we have volatile paper trading results. The only way we see to improve the quality within those constraints is to make the contest longer. We haven't decided how to implement that yet. One possibility is to create an additional contest with a 3- or 6-month contest period, move the $100,000 prize to that longer contest, and give a smaller prize for 1-month contest periods.

As you write your algorithms, you should keep your eye on the long-term. The algorithm you submit today is competing in a one-month sprint, but it's also going to be competing in a marathon. They share the same starting line, but the finish lines are different.

Bad Prints

There have been some questions about algorithms that execute trades because a bad price print was received by our trading system. Our data vendor, like all data vendors, sometimes passes us bad data. Those bad prints might cause an algorithm to place a trade that it shouldn't, or skip a trade that it would otherwise have made. When this happens, the algorithm will be scored exactly as it traded. Even if the data is corrected later, there is no correction of the trade. We think of this as modeling the real world; you don't get to un-do a trade just because you got bad information. If your algorithm is sensitive to price jumps, you might want to add some logic to protect yourself from bad prints.
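
For example, here is a rough sketch of such a guard, using the history() and data[...] calls available in the API; the 10-bar window and 20% threshold are arbitrary illustrations, not recommendations.

    def initialize(context):
        context.stock = symbol('AAPL')

    def looks_like_bad_print(data, stock, threshold=0.20):
        # Compare the latest quote to the median of the last 10 minute bars;
        # a jump beyond the threshold is treated as suspect.
        recent = history(10, '1m', 'price')[stock]
        last = data[stock].price
        return abs(last / recent.median() - 1.0) > threshold

    def handle_data(context, data):
        if looks_like_bad_print(data, context.stock):
            return  # skip this bar rather than trade on a suspicious quote
        # ... normal trading logic ...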

The one notable exception to this rule is that if your algorithm didn't actually make a trade, but is suffering on the leaderboard because of a pricing anomaly, please bring it to our attention so that we can correct it. For instance, if we get a bad print that says that Apple dropped to $1, and it looks like your algorithm suffered a 99% drawdown but immediately recovered, that is a correctable issue. The drawdown was fictional, the portfolio didn't change, and that type of pricing error can be fixed.

As always, feedback is welcome. The community has shaped this contest's evolution, and we appreciate the ideas and advice we receive.

Disclaimer

The material on this website is provided for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation or endorsement for any security or strategy, nor does it constitute an offer to provide investment advisory services by Quantopian. In addition, the material offers no opinion with respect to the suitability of any security or specific investment. No information contained herein should be regarded as a suggestion to engage in or refrain from any investment-related course of action as none of Quantopian nor any of its affiliates is undertaking to provide investment advice, act as an adviser to any plan or entity subject to the Employee Retirement Income Security Act of 1974, as amended, individual retirement account or individual retirement annuity, or give advice in a fiduciary capacity with respect to the materials presented herein. If you are an individual retirement or other investor, contact your financial advisor or other fiduciary unrelated to Quantopian about whether any given investment idea, strategy, product or service described herein may be appropriate for your circumstances. All investments involve risk, including loss of principal. Quantopian makes no guarantees as to the accuracy or completeness of the views expressed in the website. The views are subject to change, and may have become unreliable for various reasons, including changes in market conditions or economic circumstances.

47 responses

like!

Can you further explain the "Your algorithm must be hedged" rule?

"The scoring system will check your positions at the end of every day
to verify that you're hedged with longs and shorts (or entirely in
cash)."

Does this mean a portfolio must always contain a short and long position (or all cash), every day? Ever since the low-Beta requirement was added, I have modified my algorithms to include both short and long positions, but never at the same time. Will these "binary" algorithms be disqualified under the new rules?

Good one Dan! These are the changes I was hoping to see.

You should quantify:
- The max percentage difference between long and short exposure
- The minimum turnover in paper trading

A few more suggestions:
- Maybe split into two contests, with the 1-month contest awarding small cash prizes ($500/1K) and the 3-month contest getting investments.
- Rework the consistency score. This score has a lot of power (no other score can change the final score linearly), but it seems to me that it is not working right.
- Build a credit/reputation system for authors, based on the out-of-sample performance of every algo they have submitted, from submission to stop (or to now if not stopped). This credit system will be useful in the long run when you guys select algos to invest in (outside contest winners): first pick not-so-shady authors, then pick good algos among them. Plus, with this in place, you guys no longer need to restrict submissions per user.

A welcome change Dan.

A welcome change. . .
I agree with Tristan: the evaluation criteria for "Must be Hedged" need elaboration for clear understanding.

There is already a lot of volatility in the rankings. My algo moved from 110 to 83 to 183 in 10 trading days. It seems clear that one needs to be lucky on the last day of the month to win the contest. A longer-duration contest will iron out this issue to some extent.

My suggestion is that you will also have to revisit the three-algo maximum rule if you move to a two-tier contest (monthly and 3-6 months). If my three algos are paper trading for the 3-month contest and I want to add a newer/revised algo for the next monthly contest, I must still have quota left for a monthly submission.

Surely these rules are driving my thoughts in a "low beta" direction instead of a "beat the market" direction. So I have a new problem: I am overfitting my backtest to meet the lower beta requirement rather than for a higher Sharpe ratio. I do not know how much risk this brings when such an algo is used for the fund.

Looking at the difference between long and short positions does not indicate, necessarily, that an algorithm is hedged.

For example:

I could go 50% long on the securities that make up the S&P500, and 50% short on SH (the inverse S&P500 ETF). This short on a bear ETF would effectively be a long position. By your rules, it would be perfectly hedged when in fact it is not hedged at all.

On the flip-side, my algorithm, which is currently performing very well in your contest, is hedged more or less always and yet very frequently holds only long or short positions.

For example:

I could be 50% long on the securities that make up the S&P500 and also 50% long on SH (the S&P500 bear ETF). Though I would have no short positions, I would be perfectly hedged!

This rule change will effectively eliminate any algorithms that conduct statistical arbitrage on ETFs. Q should seriously reconsider their mechanism for scoring this.
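
To put rough numbers on the two examples above (the betas here are illustrative assumptions, with SH treated as a -1x S&P500 instrument):

    # Net market exposure depends on each instrument's beta, not on the
    # sign of the position.
    betas = {'SPX_basket': 1.0, 'SH': -1.0}

    hedged_looking = {'SPX_basket': +0.50, 'SH': -0.50}   # long + short, passes the rule
    actually_hedged = {'SPX_basket': +0.50, 'SH': +0.50}  # long only, fails the rule

    def net_exposure(weights):
        return sum(w * betas[name] for name, w in weights.items())

    print(net_exposure(hedged_looking))   # ~1.0 -- fully exposed to the market
    print(net_exposure(actually_hedged))  # ~0.0 -- market neutral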

Also, why does Q not include the Sortino ratio in scoring? Sharpe ratios are useful but can actually penalize algorithms whose profits arrive at random times. Consider returns that look like an ascending staircase; would that really be considered bad? This often happens in statistical arbitrage, where the biggest price discrepancies occur at random times.

Are these rules for the July or August contest? I believe we should be entitled to some notice - one calendar month seems fair - and not to having the goal posts moved 21 hours before the start of the game.

Really like these changes. Thanks Dan.

For the competition, do we mean hedged by position value or hedged by position risk?

e.g. If I'm long an expensive stock, say PCLN at $1,142, then being short SPY ($206) to the same dollar value seems unreasonable. Obviously it depends on the beta of PCLN, or an ATR-based risk measure, to determine the hedge amount.
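
For example, a back-of-the-envelope sketch of beta-adjusted hedge sizing (the PCLN beta is an assumed number for illustration):

    pcln_price, pcln_shares = 1142.0, 10
    pcln_beta = 1.3                      # illustrative assumption
    spy_price = 206.0

    long_exposure = pcln_price * pcln_shares         # $11,420 long PCLN
    hedge_dollars = long_exposure * pcln_beta        # beta-adjusted: ~$14,846
    spy_to_short = round(hedge_dollars / spy_price)  # ~72 shares of SPY

    print(spy_to_short)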

Hedging may be done in two ways:
1. Buying negatively correlated assets.
2. Selling short positively correlated assets.
Is the rule that an algo should only be allowed choice 2?
Adding more constraints to an already overcomplicated, unbalanced ranking system with unnecessary and unproven metrics like "consistency" and "stability" will not help.

I completely agree with Taylor's point about hedging -- by using inverse ETFs, one can be well hedged while all long or all short. And as Vladimir points out, one can also hedge by being long a basket of negatively correlated assets. The new hedging rule would eliminate two of my algorithms.

If I remember correctly, at one point Simon mentioned that the biggest thing eating into his algo's profits (the only profitable algo currently, I believe) is the interest cost associated with running a short position. If I write an algo that is making 10% with low drawdown and a high Sharpe, and I have to be hedged, I will lose a large portion of the profits to interest expenses. I guess what I am asking is whether Q has some kind of plan in the future to lower the interest expenses, at least on the contest-winning algos, if they all have to be hedged?

Dan,
Thanks for the clarification on bad prints. I will join the chorus on the comments requesting more clarity on the hedging requirement, which I like in concept but believe is flawed in execution. I am concerned that this potentially closes off a rich field of trades between long and short ETFs. I have a very well hedged algo that has been running for months with almost 0 beta, a max .5% drawdown, no volatility... in short all the qualities you're looking for. However it involves being on the same side of long and short ETFs to achieve the hedge, something that appears to be disqualified under these rules even though it is hedging in exactly the way you want.
My suggestion is that you allow all algorithms to compete in the contest, and then manually review the portfolio of the winning algorithm at the end of each month to see if it was hedged, either through long and short positions or by holding both long and short ETFs. If not, you disqualify it and move to the next. Since you're only looking at a couple of algos, it shouldn't be too labor intensive and won't require you to address it programmatically, while at the same time not arbitrarily cutting off a whole range of strategies that are actually doing what you want them to do. There's a real danger, as you tighten the requirements, that you inadvertently cut off all the unconventional strategies that deliver alpha because they don't fit neatly into a bucket that's easy to program around (witness my pet peeve of no 2x or 3x ETFs, even fully hedged, and now this). Given that generating alpha almost by definition requires unconventional strategies, you risk ending up with nothing at all.

If I were planning to make a hedge fund of random strategies, I think I'd be worrying more about the correlation of returns, correlation of volatility, timing of volatility/return clusters/drawdowns (though I'm not sure how one could measure that), and the timing of capital usage. If you only have one strategy, you care a lot about the Calmar ratio of it, but if you have 100, I suppose you'd care more about the timing of those drawdowns, no?

EDIT: oh and I of course agree that long/short can't possibly be all that informative about the market exposure of an algo, for all the same reasons!

Kevin,

We are definitely on the same page here. Ruling out leveraged ETFs already made statistical arbitrage hard enough. This new and ill-conceived rule will totally rule these strategies out. Meanwhile, it sounds like both of our strategies involve hedging. Ultimately, we should rely on the established metrics. New rules just open opportunities for gaming the system.

Q? Are your puppet strings being pulled willy-nilly? These changes are starting to appear to be externally sourced. Are your big bankroll clients getting touchy about how their money is to be invested? What's coming next? Liquidity restrictions? A boost in the paper trading account to prove that these algos can handle shuffling millions back and forth? Minimum numbers of instruments traded or held at any one time?

Whatever you do, try and get back to your quantitative roots, eh? For instance, this whole beta cut-off thing. What kind of arbitrary hack was that? You want to select for low beta? Then overweight it; [10 x (1 - abs(beta))] would have given you a measurement heavily rewarding strategies with near-zero beta, and severely punishing those with high beta. The point is, create a measurable, consistent metric for every point of contact you need to evaluate the efficacy of a strategy. Not some conjurer's fabrication slapped down by your econometric lawyers.
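
Something like this, as a sketch of that weighting and nothing more:

    def beta_score(beta):
        # 10 * (1 - abs(beta)), floored at zero: near-zero beta scores near 10,
        # high beta scores near 0, instead of a hard pass/fail cutoff.
        return max(0.0, 10.0 * (1.0 - abs(beta)))

    for b in (0.02, 0.30, 0.90, 1.50):
        print(b, beta_score(b))   # 9.8, 7.0, 1.0, 0.0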

Hedge? How many ways can you quantify and measure a hedge without placing some paranoid stipulation upon all of your players' strats? Surely you can figure something out, no?

Monitor it. Measure it. Meter it. Manage it. But do so using consistent, continuous and transparent metrics.

Great changes!! Finally, we can all concentrate on making algos with consistent trading strategies. Low beta: checked. Hedged positions: checked. Emphasis on consistency: checked. No longer revolving around backtest overfitting: checked. The ability of our algos to survive, be profitable, and stay consistent in the real markets: priceless.

Good changes. But I would like to see Quantopian enable a historical fundamental data set in the near future. It is the best way to decide what to go long and what to short. Is it not?

As you write your algorithms, you should keep your eye on the long-term. The algorithm you submit today is competing in a one-month sprint, but it's also going to be competing in a marathon. They share the same starting line, but the finish lines are different.

I understand the spirit of this, but in practice, I think it implies that if a contest algo crashes or needs to be stopped for a tweak, it'll be disqualified. Perhaps there could be some mechanism to evaluate an algo after it has been re-started to ensure that it hasn't been changed substantively? Maybe your already-established consistency evaluation could be applied? For the hedge fund, it seems you'll need something like this anyway, since Q engineers won't have access to the code. If a manager's algo crashes or needs an update, then he'll need to fix it. So, after fixing, it'll need to be evaluated using only its "exhaust."

These algos tend to focus on a single stock and go in-and-out of that stock according to some signal. From our perspective, those algorithms have too much market risk and too much concentration risk.

I don't get it. In the end, I think the "crowd-sourced hedge fund" concept is to have tens of thousands of aspiring managers scouring the globe for anomalies. Say one of them stumbles upon a stock that can be traded profitably, at relatively low beta. And it looks like $1M-$5M in capital could be applied to the strategy. I guess you are saying that even if that algo is mixed in with hundreds to thousands of others (recall, you need to get to $10B in capital), the risk won't be diversified away? Maybe I'm thinking about it incorrectly, but I thought the idea was that if you cobble together enough such uncorrelated strategies, the zigs and zags will cancel, and you will achieve financial nirvana.

I have to wonder if you really don't have faith in the crowd-sourced hedge fund concept, which is really the novel idea for Quantopian, right? Instead, you are saying, "We are going after large institutional investors who are accustomed to putting money into large, diversified, hedged funds, so every one of the algos we pick needs to share those characteristics." Your initial fund will only have 10-20 algos, so they all need to be mini versions of your competitors' big honkin' funds. There will be no magical crowd-sourced diversification. The risk is that fundamentally, you may be playing a "me too" game, where in the end you don't bring anything new to the mature hedge fund market (unless maybe you can offer a bargain, since you have no direct up-front R&D costs for algo development). You seem to have a very specific set of strategies in mind (e.g. "some sort of pair, or better, a long and short basket strategy") which no doubt mirror what your competitors offer. Just make sure that you aren't reducing "crowd-sourced hedge fund" to "a collection of mini traditional hedge funds" or you won't have anything unique.

We found a good number of promising algos for use in the hedge fund.

What does this mean? Have you notified the owners of their algos' goodness? Will the algos continue to paper trade? Are you putting money toward them? Could you share their stats as exemplars?

I like the changes generally, even though they're late.
To make a lot of folks happier, consider shifting July forward a few days?

After thinking about this more, I disagree with shifting the burden of implementing "bad print protection" to the algo writers. Here are my reasons:

1) Quantopian is in the best position to detect bad prints. Quantopian has a relationship with the data feed vendor, and it also has full access to the feed process. This means you can automate processes to identify and correct for bad feeds, including comparing feeds to a second data source. Since Quantopian does not provide high-frequency trading (we have 1-minute precision), can't you wait a few seconds to get a confirmation on the accuracy of the feed from a second or third source? (This is similar to the NTP time protocol or server-clustering protocols that require a majority vote to determine the right values; see the sketch after this list.)

2) Quantopian has stated that: "You don't have to manage the fund operations, chained to your desk every day. We will operate the algorithm for you." When users have to monitor feeds and compare them with a second feed to determine if the algo has been dealt a "bad feed", this does not agree with your statement. Aren't we supposed to be focusing on generating alpha, and not worry about the mechanics of the system?

3) Every algo writer is going to implement "bad print protection" logic differently (and perhaps poorly), thereby increasing the risk to Quantopian's investors. It is in your own best interest to make sure our algos get accurate feeds, without bad prints!
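
As a sketch of the majority-vote idea mentioned in point 1 (the vendor names and the 5% tolerance are made up for illustration):

    import statistics

    def consensus_price(quotes, tolerance=0.05):
        # quotes: {vendor_name: price}. Trust the median and flag any feed
        # that strays more than `tolerance` from it.
        median = statistics.median(quotes.values())
        suspects = [v for v, p in quotes.items()
                    if abs(p / median - 1.0) > tolerance]
        return median, suspects

    price, bad = consensus_price({'vendor_a': 125.40, 'vendor_b': 125.55, 'vendor_c': 1.00})
    print(price, bad)   # 125.4, ['vendor_c']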

In summary, I will be honest and say you guys scored 1 for 3 on this update. (I agree with eliminating the backtest component. I also agree with increasing paper-trading time requirements, but it sounds like that is a future change.) Please keep working on this and improving, as you have been doing. Thanks for listening to our feedback.

Thanks for all the feedback. Putting out a bunch of responses:

On hedging: An algo does have to be long and short every day, or in cash. An algo that is long and then short isn't hedged. An algo that is only in one direction is exposed to market risk; if it's a single-stock algo, it's also exposed to single-company risk. However, the check is not so sophisticated that it's going to try to estimate the market beta of your hedge; it's a very naive check. I believe the spirit of this rule change should be clear: we are looking for algos that maintain a smart hedge at all times.
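
To illustrate, a naive end-of-day check along these lines might look like the following. This is just a guess at the shape of the check as described above, not the actual scoring code.

    def passes_hedge_check(position_values):
        # position_values: signed dollar values of all end-of-day positions.
        if not position_values:            # 100% in cash
            return True
        has_long = any(v > 0 for v in position_values)
        has_short = any(v < 0 for v in position_values)
        return has_long and has_short

    print(passes_hedge_check([]))             # True  (all cash)
    print(passes_hedge_check([5000, -4800]))  # True  (long and short)
    print(passes_hedge_check([5000, 3000]))   # False (long only)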

On inverse ETFs: It is correct that this rule change effectively eliminates algos that are built on inverse ETFs. We'd much rather that you build your hedge yourself than use an inverse ETF. Inverse ETFs look good on paper, but in practice they can be a difficult and/or expensive hedge; they are more expensive than a regular hedge. There's also a missing feature in the Quantopian platform that makes this worse: borrowing costs. When we add borrowing costs to the system, inverse ETFs will look much less interesting.

On statistical arbitrage in general: While these rule changes dramatically reduce the opportunity for ETF stat arb, the improvements being made to the Quantopian platform make stat arb in general much more possible. Larger tradeable universes, speed improvements, and corporate fundamental data are all here. Other changes in universe selection are coming.

Ahn: Thanks for all the contest suggestions.

Yagnesh: Thank you for the suggestions. I agree, it might make sense to increase the number of entries in the future. On the topic of overfitting, the best advice is, don't do it =). We're trying to make a scoring system that keeps overfit algos out. If overfitting is working, then you can expect a rule change coming that will make it stop working!

Taylor: We looked at Sortino, but we haven't found a reason to add it. We may in the future.

Andre: Sorry for the late notice. I will work to make next month's rules come out sooner.

Spencer: The borrowing expenses are set by the broker we use. It's in our interest, and yours, for us to negotiate them as low as possible. For now, we're all paying what IB sets as a rate.

Kevin: I very much agree that we have to be careful not to "tune out" good, unconventional strategies. I worry about that all the time. It's one of the big challenges because the process for choosing algorithms for the fund is still fairly manual, and replicating that rule set without the nuance of human judgement is very hard. In this case, we're tuning out ETF shorting, and we're OK with the risks of that decision. Shorting ETFs just doesn't scale.

Simon: Point well taken. What we're trying to do with these rule changes is to make sure the hedge fund starts with the highest quality algorithms. That high-quality starting place will make the correlation concerns much easier to sort out. The contest is more visible than the other activities, but they are all happening.

Market: No lawyers or investors were harmed in the making of this post, no snap referendums were called, and no astrologers or technical analysts were consulted. More seriously: the rules we're applying in the contest are the same ones we're putting on the fund, as closely as I can reasonably replicate them. A high-beta algorithm isn't getting in, no matter how good it looks in other respects.

Ujae: Are you asking for a history() type function for fundamental data? It's coming. We want it too!

Grant: That's a very good point. I'd love for the ability for an algo to be stopped, tweaked, and re-entered into the contest. Unfortunately, I don't have a good way to tell the difference between a tweak and a re-write. Consistency checking works for comparing in- and out-of-sample, but once the code is edited, the only out-of-sample data is in the future. We haven't found a solution so far.

You also had some comments on single-stock risk. The notion that we (or anyone) could put together a number of single-stock risks into a financial vehicle relies on being able to convince oneself that the single-stock risks are all uncorrelated. That is a very tall order, no matter how much backtesting one does, when one doesn't really know what's in them. On the other hand, a hedged strategy is easier to demonstrate as uncorrelated with other hedged strategies. I think that in the long run we will be able to be big enough that we can put together those single-stock, uncorrelated algos. For the startup period, though, we need algos that can almost stand on their own. As we get more mature we might raise our risk tolerance.

I'm somewhat nonplussed by the idea that we don't have faith in a crowd-sourced hedge fund when I like to think that everything we do is working towards that goal. It is absolutely true that there are parts of our business model that we are copying from existing businesses. We are a crowd-sourced hedge fund. That doesn't mean we have to do absolutely everything in a new way. The challenge is to separate the practices we should copy from the practices we should discard.

Tristan: I fear my writing wasn't clear on this one. It wasn't my intent to shift the burden for bad print detection to algo writers. It's actually a shared responsibility. We try to ensure high data quality and we will continue to improve quality as we go forward. Still, bad data happens. When mistakes happen, some of them can't be undone. None of us can call the broker and ask for a do-over on the trade. As you note, our interests are very much aligned. We all want to manage bad prints as best we can.

Bad print detection is like leverage management, or even the schedule() function. As we share our code for these routine, commodity functions, we all get better at it. The best ideas eventually get added to Zipline and everyone gets the benefit.

Thanks, again, for all of the feedback.

Well, I'm very curious to see what the July contest brings!

I think Q’s aversion to inverse ETFs is very misguided. Inverse ETFs only incur borrowing costs in a short position. A short position in an inverse ETF is a net long position. This is why the long/short rule does not make sense: a portfolio could appear to be hedged when it is in fact not hedged at all. Alternatively, an entirely long portfolio could be hedged with long positions in inverse ETFs.
Inverse ETFs are also not necessarily a costly hedge. Only over long periods of time do issues like convex returns and rebalancing costs matter. IB offers a per-share commission model, so a long position in an inverse ETF can be switched in and out of cheaply. Switching in and out of the potentially hundreds of underlying securities can be vastly more expensive. Moreover, it is much harder to achieve a perfect hedge. Suppose an algorithm needs a $5,000 S&P500 hedge; with securities like AAPL trading at $120, and the inability to buy fractions of a share, it is literally impossible to achieve a perfect hedge. There is simply no way to get the proportions right. That is one of the many reasons why inverse ETFs are legitimately very useful.
ETFs also enable us to get exposure in markets other than equities. Suppose my algorithm wants to get into an undervalued industry, hypothetically oil and gas, but I want to hedge energy price risk. As of right now, ETFs are the only way for us to trade commodities, other types of futures, and currencies. Volatility ETFs allow for hedging of volatility risk.
Not to mention, the long / short rule effectively forces a short position in an ETF. Rather than going long in an inverse ETF and hedging the position cheaply, Q will be encouraging us to go short on a long ETF. That will introduce borrowing costs when they aren’t needed.
Borrowing costs are bad, but they aren’t so bad that we should avoid shorting altogether. A very expensive-to-borrow ETF will have borrowing costs as high as 12%. That seems like a lot, but over a period of one day it is negligible (less than a tenth of a percent). Some ETFs are about 3% / year. When returns are over 20% annually, that’s not going to kill the strategy. When borrowing costs are added to the system I will still be interested in shorting ETFs, even if there’s no shot at winning the competition.
Okay, sorry, I am done complaining. I think you guys are doing an awesome job and I really appreciate your including us in the discussion. I loved the introduction of beta into the scoring process. A longer period seems like a great idea to me. And I am excited for the other features you mentioned. Thanks!

Thanks Dan,

Perhaps it is a matter of my lack of experience, but I don't understand:

these rule changes dramatically reduce the opportunity for ETF stat arb
we're tuning out ETF shorting
Shorting ETFs just doesn't scale

It almost sounds like you don't want to see any ETFs in contest/fund algos? Or are you just referring to inverse ETFs?

On a separate point, it is a Quantopian sacred cow, but in the end, I think someone on your end is gonna need to look at algo code licensed by managers. You might as well open that can of worms now ("Oh no! We'll never get anyone to submit an algo to the contest if they are worried we'll steal their precious code!"). The way "to tell the difference between a tweak and a re-write" would be to set up a proper revision control system (e.g. along the lines of github), so that when an algo is stopped and tweaked, you could review and approve the changes. I've also seen that you'd really like to understand the strategic intent of algos. From the get-go, I've figured that the best approach would be for a prospective manager simply to review the code with Quantopian (as I understand, this would always be an option, and those managers who opt to do it would be at an advantage). If I were an institutional money manager, I'm not sure I'd want my $100M going into an algo with code that had only been reviewed by the author. And I wouldn't want to be the Quantopian employee to make the call, "One of our algos crashed...uh...we're not sure why...uh...we lost $5M...we're trying to get hold of the guy who wrote the code, since we can't access it...click" and there goes your $100M in capital from Mr. Institutional Investor. In other words, I think this business of not reviewing the code of algos isn't gonna stick, so why not sort it out now? You're headed down that path already, with increasingly more constraints on strategies. The final step in winning the contest could be for Q staff to review the top 10 algos, and pick the one that would be best-suited for your hedge fund.

I think someone on your end is gonna need to look at algo code licensed by managers. You might as well open that can of worms now ("Oh no! We'll never get anyone to submit an algo to the contest if they are worried we'll steal their precious code!"). The way "to tell the difference between a tweak and a re-write" would be to set up a proper revision control system (e.g. along the lines of github)

For me it's hard to believe that no one would be looking at my code if it were in the top 5 ranking. But anyway, this beta thing is teaching me a few lessons, hence I'm sticking with Q.

My idea of algo trading is how big and fast the returns are, PERIOD.
I would put my 20% high-risk/high-return investment into algo trading, not my 80% low-risk/steady-return portion.

Just to point out, even with professionals looking over your code and a whole team of programmers things can still go wrong -- Knight Capital springs to mind. Some of that risk can be dampened by having diverse algos (and algo writers!)

@Taylor I wouldn't call your most recent comment above "complaining" at all. It is insightful, relevant feedback, which we will absolutely take into consideration. Keep it coming!

@Grant You may be right that we will end up needing to review the code of algorithms that are invited to participate in the fund. If we were to start doing that, it would (obviously) be only with the knowledge and consent of the algorithm owner, and we would probably compensate the algorithm owner for letting us see their code, independent of compensation for performance. In other words, while "You can manage money in our fund without us ever seeing your code" might need to change, "We won't look at your code without your consent" never will; it is a bedrock value of how we do business.

@Yagnesh Hard to believe it may be, but I assure you, no one is looking at your code without your knowledge and consent. We don't even look at the code of algos we disqualify for gaming the contest. We've developed sophisticated tools for detecting gaming without needing to see the code, and we will continue to enhance those tools over time.


The sorts of strategies that these changes will herd the crowd towards are the same ones that the "stat arb" framework will make easier to build, so perhaps this means that module is nearing completion?

@Dan I thought I understood your point regarding "Your algorithm must be hedged." At least I thought I understood it until the scores for July started being published. I'm running three algos and they are all hedged; by hedged I mean long and short concurrently, using some variant of linear regression. The contest has scored them otherwise (all three are marked not hedged on the leaderboard).

Can you provide some implementation example?

Just one quick question. I was wondering what is to stop someone from theoretically just going short one share of a stock to satisfy the hedging requirement? Are you just checking whether the algo is short, or are you actually looking to see what percentage short each person is relative to the $100,000? Just thinking of the different possible ways someone could still kind of "game" the system.

Before spending umpteen hours developing an algo, it would be good to understand precisely what the rules are. I have sort of an idea what you mean by "Your algorithm must be hedged," but it sounds like if, at the end of any given day, something doesn't equal exactly zero, i.e. "hedged with longs and shorts (or entirely in cash)," then the algo will be penalized or completely rejected.

Huh? Could you be a bit more quantitative? It is called Quantopian, right? As in quantitative? Something should be zero, but what? And how close to zero is good enough? And if outside the "good enough" range, what is the penalty? I realize this is all kinda murky and we are part of an experiment funded by your VC's (shh...don't tell them), but some guidance would be helpful, since there's no point in spending lots of time coding, only to find out the requirements were not well understood.

I concur with Grant's observation, as I posted earlier. Do we define hedged as dollar neutral, delta neutral, or beta neutral? Clearer "quant" guidance would be very much appreciated.

Hi quants,

I understand that the "hedge" requirement feels a little squishy. As many of you have pointed out on this thread, defining whether a strategy is hedged or not isn't always simple. However, we did choose to start with a very simple check, which is that your strategy must hold both long and short positions at the end of every day, or be 100% in cash.

Taylor - you make a very fair point that there are many valid and profitable strategies that might not hold short positions or might be long inverse ETFs as their hedge. In addition, when trading smaller portfolios there is a real trade-off between the added transaction costs of shorting the underlying assets versus being long an inverse ETF. The decision to passively exclude strategies using inverse ETFs as their hedge is one I'd certainly consider revisiting in the future - but our assessment of that trade-off today came down in favor of asking people to build strategies that are hedged by taking long and short positions.

Anthony - I'd be happy to take a look at your algos and make sure we're getting it right. For an example implementation of a hedged strategy you can check out the new mean reversion sample algorithm.

Spencer - since this rule is so simple it is definitely possible to 'game' it and try to trick the scoring into thinking your algo is hedged when in fact it is not. Personally I'd encourage everyone to think about the long game and build algorithms that we can both profit from in Quantopian's fund as we get closer and closer to launch, since hedged strategies are the first cohort we'll be selecting for the fund. That said, we will do a manual review of the winning algorithm's tearsheet to verify that the strategy has adhered to the spirit of this rule and not just the letter.

Grant - your algorithm must hold both long and short positions, or be 100% in cash, for every day of the 2 year backtest that runs when you submit. The penalty if your algo does not earn this new hedge badge is that it will be ranked below all algorithms that earn all available badges.

I also just wanted to thank all the folks on this thread who have shared opinions and feedback about the contest scoring process. Your free time is valuable and we understand what huge investments you all make to submit great algos. It's our responsibility to run a credible, equitable and high quality competition that you can be proud to win. Good luck in July! -Jess


That's funny Jess, The Long Game.

I envision a Monty Python skit where there are these competitive archers with long bows all lined up shooting at a target hundreds of yards away. And down at the target there's little ol' Q waiting until the archers fire and then picking up the target and waddling it over a half dozen yards. "Aw, you missed. Better luck next month."

I get what you're trying to do, but this is far too blunt an instrument, one that was clearly thrown together hastily and whose requirements still haven't been adequately explained. Algo writers aren't going to look at it as gaming if they think the requirement is arbitrary, and excluding long/short ETFs, or killing an algo that has months of paper trading behind it because of a single day without both a long and a short position, is going to be seen as arbitrary, as the comments from Quantopian's top contributors on this thread clearly indicate. It also doesn't help to announce this requirement literally at the last minute and then caution algo writers to "think about the long game"! You shouldn't be surprised if this causes people either to lose interest or to spend their efforts working around the requirement rather than with it. Again, this isn't a criticism of the underlying goal; it's the implementation failure that should be the lesson learned here.

your algorithm must hold both long and short positions, or be 100% in cash

O.K. I'm still confused. The algo just needs to have at least one long position and one short (e.g. long at least one share of X and short at least one share of Y, assuming that I couldn't be both long and short X at the same time)? Or be in cash only? What if I have some cash but not 100%, and the value of my longs and shorts sums to zero? Or maybe the sum of the value of longs and shorts doesn't matter, and I just need to have at least one share of each?

And badges? What's that all about?

Maybe someone could update https://www.quantopian.com/open & https://www.quantopian.com/open/rules with the details? No criticism, but truthfully, I just can't quite sort out what you require and how the scoring works.

Will the hedging rule change as soon as futures are added to Quantopian? Take VX futures for instance. They are negatively correlated with SPY. Under this new rule, an arbitrage strategy which goes long both VX and ES futures would be tagged as 'unhedged'. That won't be the case when futures get here, right?

Jeffrey, I've tried arguing this point. A long position is not necessarily correlated with markets and a short position is not necessarily a hedge. The Q team knows that, and I think they're aware of the consequences of the rule. I don't understand it either.

@Market - The Long Game: my thoughts exactly.

Writing a good algorithm here requires three things at a reasonably good level: knowledge of finance; Python coding skills; and the Quantopian API, which is vaguely documented in many places. Some of us have more on one side, some on the other; few have it all. When you then play a shell game with the rules, announcing major changes 21 hours before they go into effect, it's practically impossible to get a decent score, much less win. It is in everybody's interest - algo developers', Quantopian's, its funders', and ultimately its investors' - that algorithms and their authors be judged fairly. We should be developing algorithms that are well thought out and carefully debugged and tested, not playing whack-a-mole.

@Kevin and @Grant,

Sorry for a bit of the confusion with some of these new rules. I'll see if we can update the broader contest rules page on the website with the new additions so everyone can see the rules in a single place. How Grant described it is correct: if you have a position on, you must have both a long and a short. That's really all there is to the new rule. You don't have to be 100% invested or 100% in cash; you could be 10% long and 1% short, with the rest in cash, and that would be fine. It's a simple check, but we hope people take it seriously and submit more hedged portfolios to the contest, instead of what we've seen too often: simply being long a single stock that performed extremely well over the backtest period. (You'd be really surprised at how many algos simply go long AAPL, AMZN, etc. Reducing this in the contest would be a welcome change.) Unfortunately, we sometimes have to put rules in place to improve the quality of the contest in a broader context (e.g. to filter out "known bad" or "known not interesting" algos), even though they might affect a small cohort of algos that were actually developed with good intentions. We aim to improve all of this going forward, and we appreciate your patience and feedback.

@Taylor,

I hear you on including or revising the metrics used for ranking. Now that the ranking portion of an algo's score is only going to come from its live paper trading performance, I'm going to look into revising the metrics to include the Sortino and Omega ratios. The initial reason for excluding these particular ratios, which treat upside and downside volatility differently, is that when someone intentionally and severely overfits their backtest (usually by converting their worst losing days into their best positive days through excessive parameter tuning or intentional/unintentional look-ahead bias), Sortino and Omega become even more positively exaggerated, since upside volatility and gains are effectively rewarded. They become even more exaggerated than the backtest Sharpe ratios of 30+ we have seen. By now excluding backtest metrics from the final score, we hope to mitigate the ability of these severely overfit algos to win in the future.
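
For reference, here is a minimal sketch of how the two ratios are typically computed from daily returns, assuming a zero target return; this is not the scoring implementation.

    import numpy as np

    def sortino_ratio(returns, target=0.0, periods=252):
        excess = returns - target
        downside = excess[excess < 0]
        downside_dev = np.sqrt(np.mean(downside ** 2)) if len(downside) else np.nan
        return np.sqrt(periods) * excess.mean() / downside_dev

    def omega_ratio(returns, target=0.0):
        excess = returns - target
        gains = excess[excess > 0].sum()
        losses = -excess[excess < 0].sum()
        return gains / losses if losses > 0 else np.inf

    # A "staircase" return stream: flat most days, occasional gains, tiny losses.
    rets = np.array([0.0, 0.0, 0.02, 0.0, 0.0, 0.015, -0.001, 0.0, 0.01])
    print(sortino_ratio(rets), omega_ratio(rets))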

Justin,

I'm very glad to hear that Q will be including the Sortino and Omega ratios. As I was mentioning before, an algo with returns that look like an ascending staircase really wouldn't be a bad thing, but with just a bit too much emphasis on stability its scoring wouldn't be competitive. Previously, this discouraged me from trying to write an algorithm that would basically just move in and out of stocks right before and after earnings announcements. In theory, if the algorithm worked, and even if it was always hedged, the returns would be too irregular and the score wouldn't be very competitive. I think this will seriously improve the quality of algos being submitted.

Justin,
I am glad to hear that at least two of my propositions, the Sortino and Omega ratios, were finally taken into consideration.
https://www.quantopian.com/posts/request-real-world-strategy-scoring-metric
https://www.quantopian.com/posts/bug-in-consistency-score?utm_campaign=bug-in-consistency-score&utm_medium=email&utm_source=forums.
But there are another 2 you should think over.

Quantopian Open June 2015: average metrics of the top 10 by stability (of losing) and by consistency (of doing nothing)

Quantopian open June 2015   annRet   annVol   maxDD   sharpe    calmar  stability  consistency  
Stability Best 10_pt       -130.52%  10.64%  -31.62%  -14.226   -4.835  0.973      0.807  
Stability Best 10_bt        -36.15%  14.27%  -73.24%   -3.590   -0.493  0.876  
Consistency Best 10_pt        1.31%  12.74%   -6.44%   -0.258   -0.194  0.128      0.962  
Consistency Best 10_bt       38.59%  15.54%  -13.62%    2.175    3.325  0.780  

https://www.quantopian.com/posts/how-consistent-is-consistency-factor
https://www.quantopian.com/posts/how-stable-is-stability-calculation
https://www.quantopian.com/posts/bug-in-consistency-score?utm_campaign=bug-in-consistency-score&utm_medium=email&utm_source=forums

I will leave discussion of the results to the proponents of these innovative indicators, their bosses, and high-level Quantopian researchers to decide: do we need them, as well as those green and blue belts?

Was a June winner of the contest announced? Where is it posted?

Dan announced it a few days ago. June Leaderboard

The final June leaderboard is now published. June is currently missing the correlation scores; those will fill in later. July and August leaderboards will be posted shortly, using the updated rules.

Congratulations to Michael VK on pulling out the win!


Michael is the winner of the June contest. We had made the announcement in that comment (thanks Anthony for the reference!) and here is the official announcement.
