The contest is really hard ...

I've been taking a hiatus from Q. In the interim I've been doing a lot of manual trading, learning more and more about markets. I don't feel any closer to a successful Quantopian algorithm. The more I know, the less it seems possible.

I've only made one attempt so far at the current contest, and meeting the risk constraints sucked all the alpha out of my strategy, and it failed with a TimeoutException only a week or two after going live. Despite that, I am still in 38th place. Let that sink in... From this we can deduce that the number of actually viable strategies in the contest is significantly fewer than 38. Maybe the best entries are really good and make up for the fact that there are only a handful of them... Or maybe not. There's probably a reason why Q no longer shares the performance stats of the contest entries.

I have my opinions -- I think some of the contest criteria are counter-productive. For example, real quant funds scale their leverage up and down. Quantopian requires every single algorithm to more or less maximize its buying power regardless of market opportunity. I think this is a classic example of the folly of letting the marketing department tell the engineering department what features the product needs to have.

Moreover, I'm a bit skeptical of the Q fund premise -- that there is some simple, consistent, alpha-generating, risk-neutral statistical arbitrage edge left in the market. What is the handful of people developing algorithms on Quantopian likely to discover, within the constraints of such an extremely limited platform, that a real quant firm without our limitations couldn't have found faster and better via automated discovery and optimization of parameters (machine learning, say), not to mention access to expensive data sets?

Anyways, I am trying to have another go at the contest. But meeting the requirements is really hard. And if discovering institutional-level alpha wasn't hard enough, getting order_optimal_portfolio to behave is causing me nonstop grief.

What I find myself doing is running a backtest, tweaking the different constraints to try to nudge the optimizer into actually meeting the required constraints, and repeating that 20 times until it finally passes (and hopefully hasn't destroyed all the alpha in the process). Obviously this is not ideal. Not only is it an excruciatingly slow process, but it also means I'm simply overfitting my optimizer settings to Q's risk constraints. Since it's an overfit, the algo is likely to fail the contest requirements shortly after it goes live out-of-sample.

Does this sound right? Is anybody else having trouble getting the order optimizer to do what you're telling it to? I realize that as prices move positions can drift outside of the constraints, but why doesn't the order optimizer help more with this? Why doesn't it take extra measures to shift the portfolio back within the requirements as these problems develop? We're forced to use the order optimizer, so we have little recourse when it doesn't do what we want it to do.

Here's an example of the code I'm using:

    algo.order_optimal_portfolio(  
        objective=opt.TargetWeights( context.output.weights ),  
        constraints=[  
            opt.MaxGrossExposure( 1.04 ),  
            opt.NetExposure( -0.08, 0.07 ),  
            opt.FactorExposure(  
                context.output[['beta']],  
                min_exposures={'beta': -0.25},  
                max_exposures={'beta':  0.2} ),  
            opt.experimental.RiskModelExposure(  
                risk_model_loadings=context.risk_loading_pipeline,  
                version=opt.Newest ),  
            opt.PositionConcentration.with_equal_bounds( -0.045, 0.045 ),  
            opt.MaxTurnover( 0.65 ),  
        ],  
    )  

Does this look right? Like I said, I typically tweak all the values until I find something that works. These happen to be the settings for the last successful backtest I ran, but it's always different. It always requires quite a bit of nudging.

I've found I've sometimes needed to aim for a max gross exposure over 1.0 in order to not hit the minimum gross exposure, and sometimes it under leverages anyway. For net exposure I give the algo a tiny bit of leeway, in order that other constraints can hopefully be met. Is that the right approach?

Beta is the most unpredictable -- sometimes I'll set max beta to 0 and it'll hit .50 anyway. Actually I'm amazed using a backwards-looking indicator to optimize for future performance works as well as it does... So I wouldn't be surprised for it to fail at any moment out-of-sample. Would it be possible for Quantopian to use R^2 or machine learning to improve the predictive capabilities of this constraint?

RiskModelExposure for sectors actually works reliably. I would have thought PositionConcentration would only limit position sizes to a maximum specified, but it appears it creates an equal-weighted portfolio, with no concentrations smaller than the specified weight, which is not expected behavior. So even though I'm passing a bunch more stocks via the objective, it's tossing most of them instead of creating smaller positions. Am I correct on this?

Turnover has given me quite a bit of grief, because it depends largely on pipeline and market conditions. There's no MinTurnover, right? I'm not sure how you get turnover right without overfitting. And finally, even though I'm using QTradableStocksUS and rebalancing daily, sometimes my backtests fail the Tradable Universe criteria... Not sure how to fix that.

Any tips on avoiding risk constraint overfitting? Any tips on getting the order optimizer to play nice?

31 responses

+1 Largely agree.

None of the algos I run makes the contest, but I use them with real money on IB. Here's the best one; it's a SPY WVF (Williams VIX Fix) adaptation for volatility.

@Viridian (Silver?) Hawk,

There's probably a reason why Q no longer shares the performance stats
of the contest entries.

If you click 'Download All Results' in the bottom righthand corner (underneath the leaderboard) you'll see all the performance stats of all the contest entries. Did they provide any other meaningful metrics in the old Q Contest?

@Joakim -- that's cool, I didn't know they'd added all that information into the CSV file. It's all in min/max pairs. Not sure how to make sense of that. Seems like min/max gives you no sense for typical performance, whether it's closer to the max or the min.

@RB thanks for sharing that, I'm surprised it can do that well at 100 million.

@JA Here are the fields in a contest CSV from around the beginning of 2017 versus now (with the bt/pt prefixes for backtest/paper trading, the ranks, and the min/max pairs removed):

12/28/2016
annRet
annVol
beta_spy
corr
maxDD
sharpe
sortino
stability
score

6/19/2018
beta_to_spy_126day
cumulative_common_returns
cumulative_specific_returns
drawdown
exposure_basic_materials
exposure_communication_services
exposure_consumer_cyclical
exposure_consumer_defensive
exposure_energy
exposure_financial_services
exposure_health_care
exposure_industrials
exposure_momentum
exposure_real_estate
exposure_short_term_reversal
exposure_size
exposure_technology
exposure_utilities
exposure_value
exposure_volatility
leverage
net_dollar_exposure
position_concentration
sharpe_126day
total_returns
traded_in_qtradable_stocks_us
turnover
volatility_126day
score

@Robbie Blayzor - A couple comments on your algorithm. It appears to run at 2.6x margin/leverage. That inflates how successful your algorithm appears, since you're comparing it to a 1x benchmark (an anemic one at that). Also, if you isolate the SPY portion of your algorithm, you'll notice it does not contribute any alpha. You may as well leave that portion out; if you simply want increased beta/market exposure, you can accomplish this more efficiently via buy-and-hold of SPY.

It's brave to trade this. VIX has the potential to blow up in your face. By its nature, any algorithm that trades in and out of a single security is not going to generate a trading sample large enough to expect consistent out-of-sample performance. Basically, algorithms like this are largely overfit. This is gambling, and I believe the tail risk outweighs the reward. I hope you're not trading it on margin.

@Viridian,

From this we can deduce that the number of actually viable strategies
in the contest is significantly fewer than 38.

How did you come to this conclusion? In a recent post, Jess Stauth communicated that 23 individual community members have received capital allocation, and this is just in the first year.

"Has any information been published on how many people have gotten
allocations?
- to date we have made allocations to 23 individual community members from 9 countries (several individuals have received allocations for
more than 1 algorithm)."

She goes on to say that

"Looking forward, we aspire to scale our selection process up and
make many more allocations in the next 12 months."

I might be wrong but I'd say you're at least on the right track, especially if your algos' return streams are not too correlated (or negatively correlated) with any of the strategies that have already received allocations. The new contest has also only been running for 4 months, and I believe a minimum of 6 months worth of OOS performance data is required before a strategy is even eligible for an allocation.

Regarding this comment:

and it failed with a TimeoutException only a week or two after going
live.

If this happens again to a strategy that you believe in, rather than resubmitting it to the contest, you can ask Q Support to re-certify the strategy and restore your accumulated score (rather than start from 0 again).

23 individual community members from 9 countries (several individuals have received allocations for
more than 1 algorithm).

Somebody correct me if I'm wrong, but I think you're reading into that statement something that it doesn't say. I would assume the 23 figure refers to all-time, not just the past year. At first the allocation criteria was not as stringent. As a result, many of the early allocations performed miserably out-of-sample and were shut down.

@Viridian: Thanks for sharing your thoughts on the new contest. First off, I want to say that I think you’re right. The new contest is hard, especially if you compare it to the old format. However, the contest criteria reflect the first level of testing that our research team does when evaluating strategies for an allocation. The fact that the new criteria align more closely with the allocation process means you get feedback much more quickly on your algorithm’s eligibility to receive an allocation. This post is actually a great example because I can offer some feedback right now to (hopefully) help you make progress.

Before I address some of the specific points you raised, I want to mention that we have a Contest Tutorial which discusses the motivation behind each criterion, as well as suggestions on how to meet each one.


Here are my suggestions for solving some of the problems that you mentioned:

I've found I've sometimes needed to aim for a max gross exposure over 1.0 in order to not hit the minimum gross exposure, and sometimes it under leverages anyway.

The MaxGrossExposure constraint is unlikely to have much (or any) of an effect on the minimum capital that your algorithm spends. In general, to meet the minimum leverage requirement, you have to provide an objective function with a large set of stocks to order_optimal_portfolio. Passing an objective function to order_optimal_portfolio that covers a large set of assets (100 or more in most cases) will give the Optimize API an opportunity to find a portfolio that meets all of the supplied constraints while investing all of your algorithm’s available capital. I’m curious, how large of a set are you passing to opt.TargetWeights?
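For illustration, here is a minimal sketch of the kind of wide, demeaned target-weight Series that tends to give the optimizer room to spend the full gross exposure. This is plain pandas run outside the platform, and the tickers and alpha scores are hypothetical:

```python
import pandas as pd

def ranks_to_target_weights(alpha, gross=1.0):
    """Turn raw alpha scores into demeaned, normalized target weights.

    Rank-demeaning makes the weights sum to ~0 (roughly dollar neutral);
    dividing by the sum of absolute weights spends `gross` of the
    portfolio's buying power across the whole universe.
    """
    ranks = alpha.rank()
    demeaned = ranks - ranks.mean()
    return gross * demeaned / demeaned.abs().sum()

# Hypothetical alpha scores; in practice this would be a pipeline
# output covering 100+ names.
alpha = pd.Series({'AAA': 3.0, 'BBB': 1.0, 'CCC': -2.0, 'DDD': -0.5})
weights = ranks_to_target_weights(alpha)
```

A Series like this could then be handed to opt.TargetWeights; the wider the universe, the more freedom the optimizer has to satisfy the leverage and exposure constraints simultaneously.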

For net exposure I give the algo a tiny bit of leeway, in order that other constraints can hopefully be met.

These bounds seem a bit wide to me. Without knowing more about the strategy, my suggestion would be to tighten this up (maybe to +/- 0.01) and focus on other constraints + the objective. I typically use the DollarNeutral constraint which defaults to a much smaller limit.

Beta is the most unpredictable -- sometimes I'll set max beta to 0 and it'll hit .50 anyway. Actually I'm amazed using a backwards-looking indicator to optimize for future performance works as well as it does... So I wouldn't be surprised for it to fail at any moment out-of-sample.

You’re right about beta being the most unpredictable. You’re also right that backwards-looking indicators don't necessarily provide a good forecast of beta going forward. The suggestion in the contest tutorial is to make sure the factor you are using to build your Optimize objective is market neutral. Unfortunately, it’s not the most specific suggestion, but it’s really the best answer.

Would it be possible for Quantopian to use R^2 or machine learning to improve the predictive capabilities of this constraint?

This is an interesting idea but it’s not in our short term plans. However, I encourage you to work on that idea yourself and see if you can improve its predictive capabilities. An alternative solution to ‘use a market neutral factor’ would be to find a better forecaster of beta and supply that as a FactorExposure constraint to Optimize.
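As a starting point for "a better forecaster of beta," here is a sketch of the standard trailing OLS beta, in plain numpy with synthetic data (whether this matches the platform's internal beta calculation is an assumption; it is only the textbook estimator):

```python
import numpy as np

def trailing_beta(asset_returns, market_returns):
    """OLS beta over a trailing window:
    beta = cov(asset, market) / var(market)."""
    m = market_returns - market_returns.mean()
    a = asset_returns - asset_returns.mean()
    return float((a @ m) / (m @ m))

# Synthetic example: an asset constructed with a true beta of ~0.8.
rng = np.random.default_rng(0)
market = rng.normal(0.0, 0.01, 252)             # one year of daily returns
asset = 0.8 * market + rng.normal(0.0, 0.005, 252)
beta = trailing_beta(asset, market)             # should land near 0.8
```

An estimate like this (or a shrunk or exponentially weighted variant of it) is the sort of quantity you could compute per stock and feed into a FactorExposure constraint in place of the default loading.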

I would have thought PositionConcentration would only limit position sizes to a maximum specified, but it appears it creates an equal-weighted portfolio, with no concentrations smaller than the specified weight, which is not expected behavior.

The PositionConcentration constraint only controls the maximum absolute position sizes. The objective will be the driver of position sizes when all constraints are met. Frequently, that leads to a portfolio like the one you described. I’d recommend going through this notebook that Scott posted about the Optimize API when it was initially released. In it, he discusses certain dynamics/behaviors that might help explain what’s going on in your algorithm.

Turnover has given me quite a bit of grief, because it depends largely on pipeline and market conditions. There's no MinTurnover, right? I'm not sure how you get turnover right without overfitting.

I would actually recommend removing the MaxTurnover constraint and instead try to control the turnover at the alpha factor research step or trade schedule step. If your portfolio isn’t placing bets frequently enough and you aren’t passing the lower turnover limit, you will have to adjust your factor to change more frequently. Your factor has to change often enough that your algorithm turns over an average of ~5% of its portfolio every day (works out to holding each stock for 40 days on average). If the issue is that your algorithm’s turnover is too high, I recommend lowering your trade frequency such that you hold positions for an average of 3 days.
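One way to control turnover at the factor level, as suggested above, is to smooth the alpha signal over a trailing window so that target weights change gradually. A sketch with synthetic data (plain pandas; the window length and universe size here are arbitrary assumptions):

```python
import numpy as np
import pandas as pd

def daily_turnover(weights):
    """Average fraction of the portfolio traded per day:
    0.5 * sum over names of |w_t - w_{t-1}|, averaged over days."""
    return 0.5 * weights.diff().dropna().abs().sum(axis=1).mean()

rng = np.random.default_rng(1)
raw = pd.DataFrame(rng.normal(size=(60, 20)))   # 60 days x 20 names of noisy alpha
raw_w = raw.div(raw.abs().sum(axis=1), axis=0)  # naive daily weights

smooth = raw.rolling(5).mean().dropna()         # 5-day smoothed alpha
smooth_w = smooth.div(smooth.abs().sum(axis=1), axis=0)

# Smoothing should cut daily turnover substantially versus the raw weights.
```

Lengthening the window lowers turnover further (at the cost of a staler signal), which is a more direct lever than a MaxTurnover constraint the optimizer may find infeasible.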

And finally, even though I'm using QTradableStocksUS and rebalancing daily, sometimes my backtests fail the Tradable Universe criteria... Not sure how to fix that.

There’s an issue with the TargetWeights objective where a position that drops out of your objective function is not promptly closed out by the Optimizer. There was a workaround posted by another community member here. Alternatively, have you tried using the MaximizeAlpha objective? I’m wondering if that will help with the universe criteria as well as some of the others.


Lastly, regarding the TimeoutException. I’ve noticed this being reported by several contest participants. I don’t yet know what’s causing the problem, but if one of your contest submissions times out, please email in to [email protected] with the name of your entry and we can re-qualify it and allow it to pick up where it left off. We’ll have to figure out the problem as well as a fix, but re-qualifying the algorithm should at least allow you to recover your lost out-of-sample time.

Let me know if this helps or if you have any further questions.

Disclaimer

The material on this website is provided for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation or endorsement for any security or strategy, nor does it constitute an offer to provide investment advisory services by Quantopian. In addition, the material offers no opinion with respect to the suitability of any security or specific investment. No information contained herein should be regarded as a suggestion to engage in or refrain from any investment-related course of action as none of Quantopian nor any of its affiliates is undertaking to provide investment advice, act as an adviser to any plan or entity subject to the Employee Retirement Income Security Act of 1974, as amended, individual retirement account or individual retirement annuity, or give advice in a fiduciary capacity with respect to the materials presented herein. If you are an individual retirement or other investor, contact your financial advisor or other fiduciary unrelated to Quantopian about whether any given investment idea, strategy, product or service described herein may be appropriate for your circumstances. All investments involve risk, including loss of principal. Quantopian makes no guarantees as to the accuracy or completeness of the views expressed in the website. The views are subject to change, and may have become unreliable for various reasons, including changes in market conditions or economic circumstances.

Here's a code snippet, if you'd like to play around with the turnover constraint:

https://github.com/quantopian/research_public/blob/master/code_snippets/increasing_max_turnover_snippet

### This piece of code tries increasingly large turnover settings  
### until it finds one that yields a feasible portfolio. It's potentially  
### useful if you have an algorithm whose turnover may be more variable.  
### Includes error handling for the case of order_optimal_portfolio not  
### executing due to too low of a turnover constraint.

### Reference:  
### https://www.quantopian.com/posts/party-algo-feedback-requested-please#5afaa20eab32870043944723

### Author: Grant Kiehne

# set context.init = False in initialize(context)

# assumes numpy as np and quantopian.optimize as opt are imported  
turnover = np.linspace(0.05, 0.65, num=100)  
for max_turnover in turnover:  
    if context.init:  # skip the turnover constraint on the very first order  
        trial_constraints = constraints  
        context.init = False  
    else:  
        trial_constraints = constraints + [opt.MaxTurnover(max_turnover)]  
    try:  
        order_optimal_portfolio(  
            objective=objective,  
            constraints=trial_constraints,  
            )  
        record(max_turnover=max_turnover)  
        return  
    except Exception:  # infeasible at this turnover level; try the next one  
        pass  

You can use it to find the minimum turnover that will satisfy the objective, subject to the constraints.

@VH,

They've only been a licensed hedge fund for just over a year so they wouldn't have been able to allocate capital to strategies before then.

We have learned a ton in the first 12 months of running a
crowd-sourced hedge fund - coincidentally today is the exact (!) 1
year anniversary of our launch.

That said, some of the strategies may be older than a year and a half, and none would be from the current contest, since a minimum of 6mo OOS performance is required.

What’s this? A certain ‘Viridian Hawk’ in 5th place in the contest... Well done mate!

Even more impressive given that the jump is from place 131 and a 0.0 score the previous day.

If it’s legit, and no volatility ‘spoofing’ is involved (there doesn’t appear to be as it’s not a very concentrated strategy), it’s super impressive! Did you get one requalified and score restored perhaps?

Thanks. On your tip I asked them to requalify my algo that failed a couple weeks in due to a timeout. I wish I'd done so sooner -- turns out I would have been raking in prize money all along. However, my contest algo is only doing so well due to a freak alignment of the stars. To my surprise it has performed better out-of-sample than in-sample and somehow hasn't veered outside of any of the criteria. It's a good example of an algo overfit to the risk criteria.

Thanks everybody for the responses to my questions. I'm taking it all in and trying to learn from it.

I'm still stuck on the QTradableUniverse problem when using TargetWeights rebalanced daily. I tried the code snippet linked above, and maybe I used it wrong, but it appeared to have no effect. Is this a bug with the optimizer? Will it be fixed?

I will try using MaximizeAlpha instead, but this seems to be a different paradigm I'll need some time to wrap my head around.

Ok, 4th place now..

I tried using order_target( stock, 0 ) to close positions that are no longer in QTradableUniverse() that order_optimal_portfolio isn't clearing (but should be) and it causes the backtest to fail the "Uses Order Optimal Portfolio" criteria. :( I also tried explicitly setting the weight to 0 for positions that are no longer in the QTU, but this had no effect.

I think we need a way to clear positions that will otherwise disqualify our algos.

Here's a feature request. If the order optimizer gave us an option to taper off position sizes as stocks approach the universe cutoff, wouldn't that dampen the effect of fringe noise and reduce the urgency of clearing positions as they are dropped from the QTU (since by that point they'd be so small anyway)?
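Pending such a feature, the tapering idea could be approximated outside the optimizer as a post-processing step on the target weights. A sketch in plain pandas; the rank-percentile input and the thresholds are hypothetical:

```python
import pandas as pd

def taper_weights(weights, rank_pct, taper_start=0.8, cutoff=1.0):
    """Linearly scale weights toward zero as a stock's universe rank
    percentile approaches the cutoff, so positions are already small
    by the time a name drops out of the universe.

    rank_pct: 0.0 = safely inside the universe, 1.0 = at the cutoff.
    """
    # Full size below taper_start, falling linearly to zero at the cutoff.
    scale = ((cutoff - rank_pct) / (cutoff - taper_start)).clip(0, 1)
    return weights * scale

w = pd.Series({'AAA': 0.05, 'BBB': -0.05, 'CCC': 0.05})
pct = pd.Series({'AAA': 0.2, 'BBB': 0.9, 'CCC': 1.0})
tapered = taper_weights(w, pct)   # AAA untouched, BBB halved, CCC zeroed
```

The scaled weights would then be passed to the objective in place of the raw ones; names near the cutoff contribute little, so their eventual removal causes a smaller jolt.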

Hi Viridian Hawk -

Did you try Jamie's suggestion above of a potential workaround posted by Blue Seahawk? I have copied it below.

EDIT - Sorry, just noticed your comment above "I tried the code snippet linked above, and maybe I used it wrong, but it appeared to have no effect."

@ Jamie - "There’s an issue with the TargetWeights objective where a position that drops out of your objective function is not promptly closed out by the Optimizer." - As I understand, the issue was uncovered mid-February 2018; it is now almost July 2018. Is it that hard to fix?

    # Close any positions no longer in QTU list.  
    to_close = []   # securities to be changed  
    for security in context.portfolio.positions:  
        if data.can_trade(security):   continue   # delists only?  
        if security in context.stocks: continue   # assuming all are QTU  
        to_close.append(security)  
    try:  
        ids = order_optimal_portfolio(  
            objective   = opt.TargetWeights(  
                pd.Series(0, index = to_close)  # the 0 means close them  
            ),  
            constraints = [  
                opt.Frozen(  
                    set(context.portfolio.positions.keys()) - set(to_close)  
                )  
            ]  
        )  
        for i in ids:  # log any ordered to close  
            o = get_order(i)    # order object  
            s = o.sid   # including filled as a head's up in case partial  
            log.info('{} {} {}'.format(s.symbol, o.filled, o.amount))  
    except Exception as e:  
        log.info(e)  

Thanks, Grant. I'd tried throwing it in a scheduled function right before close. Looking closer at the code, I just realized the line labeled "# delists only? " didn't apply to this situation. Removing that line made it work. Wonderful!

Btw, what does context.init do in the other snippet you posted? It appears to always equal False?

@Grant: The fix to TargetWeights is not trivial. We haven't been able to prioritize solving it over other ongoing projects, which is why it hasn't been fixed yet. Unfortunately, I don't have a timeline on when that might happen. The solution originally authored by @James V (or the version later adapted by @Blue) serves as a possible workaround for the time being. Alternatively, using the MaximizeAlpha objective should work too, assuming you can express your portfolio transitions in such an objective.

@Viridian Hawk,

I was actually the author of the workaround code, which was inspired by Blue Seahawk's work on the Frozen construct. Here's the actual code and commentary taken from this thread here:

The code below is the temporary workaround fix. It is placed before the TargetWeights objective so that stocks newly dropped from the QTU list are closed first, before the rest of the portfolio is passed to the Optimize API:

# Sell any positions in assets that are no longer in our target portfolio.  
for security in context.portfolio.positions:  
    if data.can_trade(security):  # Work around inability to sell de-listed stocks.  
        if security not in context.stocks:  # i.e. no longer in QTradableStocksUS  
            to_close = [security]   # securities to be closed  
            try:  
                ids = order_optimal_portfolio(  
                    objective=opt.TargetWeights(  
                        pd.Series(0, index=to_close)  # weight 0 means close them  
                    ),  
                    constraints=[  
                        opt.Frozen(  
                            set(context.portfolio.positions.keys()) - set(to_close)  
                        )  
                    ]  
                )  
                # for i in ids:  # log any ordered to close  
                #     o = get_order(i)   # order object  
                #     s = o.sid          # o.filled included as a heads-up in case of partial fills  
                #     log.info('{} {} {}'.format(s.symbol, o.filled, o.amount))  
            except Exception as e:  
                log.info(e)  

I said temporary workaround because in a long backtest run I encountered point (4) and received this error:

InfeasibleConstraints: The attempted optimization failed because no portfolio could be found that
satisfied all required constraints.

The following special portfolios were spot checked and found to be in violation
of at least one constraint:

Target Portfolio (as provided to TargetWeights):

Would violate Frozen([Equity(32770 [DEI]), Equity(16389 [NCR]), ...]) because:
New weight for Equity(32770 [DEI]) (0.0) would not equal old weight (-0.00124713995777).
New weight for Equity(16389 [NCR]) (0.0) would not equal old weight (0.00127471453249).
New weight for Equity(38921 [LEA]) (0.0) would not equal old weight (-0.0012673465914).
New weight for Equity(2 [ARNC]) (0.0) would not equal old weight (0.00124764231739).
New weight for Equity(6161 [PRGO]) (0.0) would not equal old weight (0.0012355562442).
New weight for Equity(4117 [JCI]) (0.0) would not equal old weight (0.00114328492707).
New weight for Equity(4118 [JCP]) (0.0) would not equal old weight (0.00120445963172).
New weight for Equity(2071 [D]) (0.0) would not equal old weight (0.00110161134441).
New weight for Equity(4120 [JEC]) (0.0) would not equal old weight (0.0011990730669).
New weight for Equity(14372 [EIX]) (0.0) would not equal old weight (-0.00124497410696).
... (841 more)

To make the backtest continue to the end, I added the "except Exception as e: log.info(e)" handler, to ignore the error and just log it.

I just want you to be aware that the workaround is NOT a guaranteed fix, as shown above, and as Jamie said there is a deeper problem with the TargetWeights construct that Q engineering is aware of and presumably working on a permanent fix for.

Dynamic PositionConcentration with MaximizeAlpha -- more stocks in portfolio
When using MaximizeAlpha I sometimes set the PositionConcentration limit dynamically along these lines (probably not perfect yet). Yesterday, for example, a cloned algo with ~50 stocks each long and short became ~1000 each with this change, so I could then use percentile_between to keep just the stronger alpha signals and reduce that to, say, 200 each, presumably the best of each bunch, long and short.
It seems MaximizeAlpha uses the PositionConcentration values until it runs out of room and ignores the rest. This may allow all stocks in (or close to it). I don't know how different this is from TargetWeights; I'm not entirely sure in what way MaximizeAlpha is looking to maximize alpha.

    conc = 1.0 / len(alpha)  
    order_optimal_portfolio(  
        objective = opt.MaximizeAlpha( alpha ),  
        constraints = [  
            opt.PositionConcentration.with_equal_bounds( -conc, conc ),  
            [...]  

@Blue,

Nice one, I'll give it a try!

@ Viridian Hawk -

It should read:

# set context.init = True in initialize(context)  

This way, the turnover constraint is dropped if it is the initial order. Then, context.init = False and the code keeps trying until a turnover level is reached that will work. I just noticed that stylistically this is a bit ugly, since the first turnover value is skipped in the loop, but functionally it won't matter.

@ Jamie - Thanks for the feedback. Seems like a pretty gnarly bug if one can't exit positions. Aren't you worried about this creeping into live, real-money algos? Or is your thinking that you won't deploy real-money algos yet with TargetWeights, until the bug is fixed? And it'll take 6 months of out-of-sample time for algos using TargetWeights to be eligible for the fund. So, I guess you are saying don't use TargetWeights? It just seems like providing a switch to use order_target_percent would do the trick. What am I missing?

For example:

order_optimal_portfolio(  
    objective = opt.TargetWeights(alpha),  
    constraints = [  
        opt.OrderTargetPercent(),  
    ]  
)

If other constraints are provided, then you could just generate an exception. Would this work? Or is there something I'm not appreciating?

Interestingly, the James Villa/Blue Seahawk code to close out non-QTU positions reduced the Sharpe of the algo I was working on from 1.07 (not great) to 0.24 (terrible). In addition, prior to the code my algo's returns were all "specific returns," but with it they became all "common returns." I wouldn't expect getting rid of a non-QTU stock here and there every few days to make such a huge change in the algo's performance characteristics.

The fix to TargetWeights is not trivial.

@ Jamie -

Perhaps you could ask Scott S. to provide a synopsis of the problem, and why a solution is not so easy? Or if you understand it, a few more details would be nice.

Hello all,

I still want to comment on this thread more generally with high level thoughts, but haven't had the time recently. I did want to point out that we're trying to provide more opportunities for everyone to get feedback on their strategies so they don't get stuck.

https://www.quantopian.com/posts/tearsheet-feedback-thread

Please take a look and let us know if you have tearsheets to review. We're hoping to get some good submissions so that Jess can provide feedback in a webinar.


Here's my contest entry that's currently in fifth place. What? How?!
(Please note this strategy isn't my best idea ever!) When the new contest was introduced this was my only strategy that I could twist into meeting all the entry criteria. So I entered simply to have some skin in the game. I think this can be read in one of two ways. The first is encouraging -- the competition you're up against isn't so hot, so anybody has a chance to get on the leaderboard. Or, the other reading is that the contest's rating system is wack.
Sure, I don't anticipate this algorithm staying in 5th place for long. My algorithm's returns are mostly a sideways random walk. Right now it's been lucky, but inevitably it'll be unlucky. Still, it hasn't been that lucky. Out of sample performance has been nothing to write home about. So what's the explanation here for the high rating?

@Viridian Hawk,

To try to answer your question about your current ranking: I would say that your OOS live volatility is low, less than 2% if calculated on a 63-day rolling basis, and therefore pegged at 2%, the minimum. So your cumulative live returns should be around 1.5%, thus your current score of 0.752. Your live performance is currently beating your backtest results. Could be luck or something else.

@Viridian Hawk: Contest scores are floored at 0, so your score is derived from the low point in the OOS period in late April/early May (just eyeballing your chart) until today. It's allowed to keep running in the contest since its returns are positive looking back to 2 years before submission.

If the issue is that your algorithm’s turnover is too high, I recommend lowering your trade frequency such that you hold positions for an average of 3 days.

How do I set the trading frequency to every 3 days?

I am experiencing high turnover when I have the following:

    # Schedule our rebalance function  
    algo.schedule_function(func=rebalance,  
                           date_rule=algo.date_rules.every_day(),  
                           time_rule=algo.time_rules.market_open(hours=0, minutes=30),  
                           half_days=True)  

One way to trade every 3 days:

def initialize(context):  
    context.days = 2  # starting at 2, to become 3, to trade on first day of backtest

def before_trading_start(context, data):  
    context.days += 1 # increment counter

def rebalance(context, data):  
    if context.days % 3 != 0: # if not evenly divisible by 3, modulus operator  
        return                # skip  

@Blue

Awesome! Thanks!