I've been taking a hiatus from Q. In the interim I've been doing a lot of manual trading, learning more and more about markets. I don't feel any closer to a successful Quantopian algorithm. The more I know, the less it seems possible.
I've only made one attempt so far at the current contest. Meeting the risk constraints sucked all the alpha out of my strategy, and it failed with a TimeoutException only a week or two after going live. Despite that, I am still in 38th place. Let that sink in... From this we can deduce that the number of actually viable strategies in the contest is significantly fewer than 38. Maybe the best entries are really good and make up for the fact that there are only a handful of them... or maybe not. There's probably a reason why Q no longer shares the performance stats of the contest entries.
I have my opinions. I think some of the contest criteria are counter-productive. For example, real quant funds scale their leverage up and down; Quantopian requires every single algorithm to more or less maximize its buying power regardless of market opportunity. I think this is a classic example of the folly of letting the marketing department tell the engineering department what features the product needs to have.
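To be concrete about what scaling leverage up and down could look like, here's a sketch (purely illustrative: the vol-targeting rule, every number in it, and context.trailing_vol are all made up, and the contest's leverage requirements effectively rule this kind of thing out anyway):

import numpy as np

def gross_target(trailing_vol, vol_target=0.01, min_gross=0.25, max_gross=1.0):
    # Hypothetical vol-targeting rule: dial gross exposure down when realized
    # daily volatility runs hot, back up toward the cap when markets are quiet.
    if trailing_vol <= 0:
        return max_gross
    return float(np.clip(vol_target / trailing_vol, min_gross, max_gross))

# ...then in rebalance(), use it in place of a fixed cap, e.g.:
#   constraints = [opt.MaxGrossExposure(gross_target(context.trailing_vol)), ...]
# (context.trailing_vol would be some realized-vol estimate the algo maintains.)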
Moreover, I'm a bit skeptical of the Q fund premise: that there is some simple, consistent, alpha-generating, risk-neutral statistical arbitrage edge left in the market. What are the handful of people developing algorithms on Quantopian likely to discover, within the constraints of such an extremely limited platform, that a real quant firm without our limitations couldn't have found faster and better, via automated discovery and optimization of parameters (machine learning, say), not to mention access to expensive data sets?
Anyway, I am trying to have another go at the contest. But meeting the requirements is really hard. And as if discovering institutional-level alpha weren't hard enough, getting order_optimal_portfolio to behave is causing me nonstop grief.
What I find myself doing is running a backtest, tweaking the constraint parameters to nudge the optimizer into actually meeting the contest criteria, and repeating that 20 times until it finally passes (hopefully without destroying all the alpha in the process). Obviously this is not ideal. Not only is it an excruciatingly slow process, it also means I'm simply overfitting my optimizer settings to Q's risk constraints. And since it's an overfit, the algo is likely to fail the contest requirements shortly after it goes live out-of-sample.
Does this sound right? Is nobody else having trouble getting the order optimizer to do what they tell it to? I realize that as prices move, positions can drift outside the constraints, but why doesn't the order optimizer help more with this? Why doesn't it take extra measures to pull the portfolio back within the requirements as these problems develop? We're forced to use the order optimizer, so we have little recourse when it doesn't do what we want it to do.
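For what it's worth, something like the sketch below (just my guess at a sanity check, using record() and the portfolio object; the exact bands you compare against are up to you) would at least make that drift visible day by day, even if it doesn't fix it:

def record_exposures(context, data):
    # Compute realized gross and net exposure from current positions.
    pv = context.portfolio.portfolio_value
    longs = 0.0
    shorts = 0.0
    for position in context.portfolio.positions.values():
        value = position.amount * position.last_sale_price
        if value > 0:
            longs += value
        else:
            shorts += value
    gross = (longs - shorts) / pv   # |longs| + |shorts| as a fraction of portfolio value
    net = (longs + shorts) / pv
    record(gross_exposure=gross, net_exposure=net)

I'd schedule it at the close with schedule_function(record_exposures, date_rules.every_day(), time_rules.market_close()) and eyeball the chart against the contest bands.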
Here's the order_optimal_portfolio call I'm using:
# at the top of the algorithm:
import quantopian.algorithm as algo
import quantopian.optimize as opt

# ...and inside my rebalance function:
algo.order_optimal_portfolio(
    # Target the weights computed by my pipeline.
    objective=opt.TargetWeights(context.output.weights),
    constraints=[
        # Gross exposure: nudged above 1.0 so the result doesn't land under the minimum.
        opt.MaxGrossExposure(1.04),
        # Net exposure: a little leeway on either side of dollar-neutral.
        opt.NetExposure(-0.08, 0.07),
        # Beta constraint, driven by the 'beta' column in my pipeline output.
        opt.FactorExposure(
            context.output[['beta']],
            min_exposures={'beta': -0.25},
            max_exposures={'beta': 0.2}),
        # Sector / style risk limits from the risk model pipeline.
        opt.experimental.RiskModelExposure(
            risk_model_loadings=context.risk_loading_pipeline,
            version=opt.Newest),
        # Per-position weight bounds.
        opt.PositionConcentration.with_equal_bounds(-0.045, 0.045),
        # Cap on daily turnover.
        opt.MaxTurnover(0.65),
    ],
)
Does this look right? Like I said, I typically tweak all the values until I find something that works. These happen to be the settings from my last successful backtest, but they're always different; it always takes quite a bit of nudging. I've found I sometimes need to aim for a max gross exposure over 1.0 in order not to hit the minimum gross exposure, and sometimes it under-leverages anyway. For net exposure I give the algo a tiny bit of leeway so that the other constraints can hopefully still be met. Is that the right approach? Beta is the most unpredictable: sometimes I'll set max beta to 0 and it'll hit 0.50 anyway. Actually, I'm amazed that using a backwards-looking indicator to optimize for future performance works as well as it does... so I wouldn't be surprised to see it fail at any moment out-of-sample. Would it be possible for Quantopian to use R^2 or machine learning to improve the predictive capabilities of this constraint?
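In case it matters, the 'beta' column feeding FactorExposure has to come from somewhere; a typical way to produce it (a sketch only, assuming SimpleBeta regressed against SPY over roughly a trading year, which is just one possible choice) would be:

from quantopian.pipeline import Pipeline
from quantopian.pipeline.factors import SimpleBeta
from quantopian.pipeline.filters import QTradableStocksUS

def make_pipeline():
    universe = QTradableStocksUS()
    # Trailing beta to SPY (sid 8554) over ~260 trading days; exactly the kind
    # of backwards-looking estimate that can drift once the algo is live.
    beta = SimpleBeta(target=sid(8554), regression_length=260)
    # The real pipeline would also carry the 'weights' column used by
    # TargetWeights; omitted here to keep the sketch short.
    return Pipeline(
        columns={'beta': beta},
        screen=universe & beta.notnull(),
    )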
RiskModelExposure for sectors actually works reliably. I would have expected PositionConcentration to only cap position sizes at the specified maximum, but it appears to produce a roughly equal-weighted portfolio, with no positions smaller than the specified weight, which is not what I expected. So even though I'm passing a lot more stocks via the objective, it's tossing most of them instead of creating smaller positions. Am I correct on this?

Turnover has given me quite a bit of grief, because it depends largely on the pipeline and on market conditions. There's no MinTurnover, right? I'm not sure how you get turnover right without overfitting. And finally, even though I'm using QTradableStocksUS and rebalancing daily, sometimes my backtests still fail the Tradable Universe criterion... not sure how to fix that.
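One way to calibrate the turnover cap with less blind tweaking might be to just measure it. A rough sketch (using half the sum of absolute daily weight changes, which is my guess at a reasonable definition and may not match Q's exact calculation):

import pandas as pd

def record_turnover(context, data):
    # Current portfolio weights by asset.
    pv = context.portfolio.portfolio_value
    weights = pd.Series({
        asset: pos.amount * pos.last_sale_price / pv
        for asset, pos in context.portfolio.positions.items()
    })
    prev = getattr(context, 'prev_weights', pd.Series())
    new_w, old_w = weights.align(prev, fill_value=0.0)
    # Half the sum of absolute weight changes (one-sided turnover).
    turnover = 0.5 * (new_w - old_w).abs().sum()
    record(daily_turnover=turnover)
    context.prev_weights = weights

Scheduled right after the rebalance, that at least shows how close the algo is running to the MaxTurnover(0.65) cap before I start tightening anything.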
Any tips on avoiding risk constraint overfitting? Any tips on getting the order optimizer to play nice?