Getting an Allocation, June 2017 Update

When we started making allocations in April, we also announced our goal of making allocations of up to $50 million to a single algorithm by the end of the year. What would a $50 million allocation mean to an author?

An author's royalty payments essentially depend on three factors: 1) the size of their allocation, 2) the performance of their algorithm, and 3) the fraction of the net profit that the author receives as a royalty payment.

I'll give you a completely hypothetical, back-of-the-envelope example, sketched in code after the list below. Please understand that this is for illustration purposes only and that the actual details of any future allocation will vary. The details of the calculation of net profit and the payment schedule, which are included in our author licensing agreement, are not covered in this simple example.

  • allocation received: $50 million trading allocation on January 1 (that's gross market value, which includes leverage)
  • algorithm net profit: $1.5 million through December 31 of the same year (reflects a 3% annual return on gross market value)
  • author's share of net profit: 10%
  • author's annualized royalty payment: $150,000 USD
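
For concreteness, here is that arithmetic as a few lines of Python. This is purely a sketch of the hypothetical numbers above; the real net-profit calculation and payment schedule come from the licensing agreement.

    # Back-of-the-envelope sketch of the hypothetical example above.
    # All inputs are illustrative, not actual terms.
    allocation = 50e6       # gross market value on January 1, including leverage
    annual_return = 0.03    # hypothetical net return on gross market value
    author_share = 0.10     # hypothetical author share of net profit

    net_profit = allocation * annual_return      # $1,500,000
    author_royalty = net_profit * author_share   # $150,000
    print("Net profit: ${:,.0f}".format(net_profit))
    print("Author royalty: ${:,.0f}".format(author_royalty))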

Keep in mind that our allocation process includes a 6-month out-of-sample evaluation period. When we make allocations in January 2018, many of the algorithms will have been written during this month of June. All you have to do is run a backtest or enter a contest. When you do, we store a snapshot of your code and evaluate the performance of that snapshot 6 months later, which makes you eligible for an allocation. As always, we don't look at your code during our evaluation process; instead, we look only at the algorithm's simulation exhaust.

The $50 million question, then, is this: how can you, a Quantopian community member, dramatically improve your chances of receiving an allocation? This post is here to help, expanding on what you'll read on the allocation page.

1. Seek Alpha While Managing Your Risk Exposures

That 7-word title packs in a lot of meaning and is worth reading again once or twice. Our selection criteria exclude the common risk exposures, including market beta, sector risk, the Fama-French factors, and more. We are looking for algorithms that are profitable while minimizing their exposure to these common risk factors. If your strategy performs well but has high exposure to common factors, then it's not really doing anything new and will not be as attractive. When you run a tear sheet on your algorithm, it will show your exposure to many of the common risk factors.
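
If you haven't generated one before, a tear sheet can be created in the research environment from any backtest you've run. A minimal sketch (the backtest ID below is a placeholder for your own):

    # Research environment: load a finished backtest and render its tear sheet,
    # which includes exposures to many of the common risk factors.
    bt = get_backtest('5a0b1c2d3e4f5a6b7c8d9e0f')  # placeholder backtest ID
    bt.create_full_tear_sheet()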

I can offer a shortcut of sorts: consider writing an algorithm that finds its alpha in one of the data sets you'll find on the data page. Those data sets don't generally depend on stock price and volume, and the alpha you find there is more likely to be free of factor risk. Perhaps more importantly, there are fewer people constantly surveying these data sets and trying to come up with trading signals. Price data is so heavily mined at this point that it is very difficult to come up with a model that forecasts returns. Newer data sets, or data that gets at a novel way of forecasting returns, will meaningfully increase your likelihood of finding a tradeable signal. Note that you don't have to buy a data set in order to find alpha, or in order to be eligible for an allocation. Every data set offered on Quantopian comes with a substantial amount of free sample data; if you build a great algo using free sample data, our selection process will identify it, and we will validate its out-of-sample performance on our internal infrastructure.
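
As a rough illustration, a Pipeline factor built on one of these data sets might look like the sketch below. The module path and column names assume the PsychSignal StockTwits sample data; substitute whichever data set you are actually exploring.

    # Sketch: derive a smoothed sentiment factor from an alternative data set,
    # restricted to the Q1500US universe.
    from quantopian.pipeline import Pipeline
    from quantopian.pipeline.filters import Q1500US
    from quantopian.pipeline.factors import SimpleMovingAverage
    from quantopian.pipeline.data.psychsignal import stocktwits

    def make_pipeline():
        universe = Q1500US()
        # Five-day average of bullish-minus-bearish message intensity.
        sentiment_score = SimpleMovingAverage(
            inputs=[stocktwits.bull_minus_bear],
            window_length=5,
            mask=universe,
        )
        return Pipeline(
            columns={'sentiment_score': sentiment_score},
            screen=universe,
        )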

Another way to avoid common risk factors, particularly relating to equities, is to write an algorithm that trades futures. We haven't made any allocations using futures yet, but we look forward to doing just that in a few months.

Regardless of whether your algorithm finds alpha in price data or our other data sets, it will still have to minimize its exposures to the common risk factors.

Learn more about using alternative data and futures in these posts.

2. Problems that Prevent Allocations

After reviewing literally millions of algorithms (thankfully, with the assistance of good automation!), we have compiled a short list of common mistakes that ordinarily take an algorithm out of consideration for a sizable allocation. When we find common mistakes, we start teaching the community how to avoid them.

To learn about the most common mistakes, you should watch this QuantCon talk, delivered by Jess Stauth and co-written by Delaney. The presentation runs 24 minutes, with 12 minutes of Q&A at the end. Here is what you should take away from it:

Overfitting: Overfitting is a real challenge because you can never be sure you've avoided it. Your best test for overfitting is to apply your predictions to new (out-of-sample) data, which often means you must wait for time to pass. Look at slide 8: that algorithm looks great at first, but on slide 9 you can see its performance collapse once it goes out of sample. The algorithms on slides 11 and 17, though, keep performing. If you spend an hour listening to The Dangers of Overfitting, you'll be in a good position to avoid this mistake.
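
One simple way to run that check yourself, once some time has passed, is to split your daily returns at the date you froze the strategy and compare the two sides. A sketch, assuming 'returns' is the daily returns Series from your backtest and that the empyrical package (used by pyfolio) is available:

    # Compare in-sample vs. out-of-sample Sharpe; a large drop suggests overfitting.
    import empyrical as ep

    split_date = '2016-12-01'   # hypothetical date the strategy was frozen
    in_sample = returns[:split_date]
    out_of_sample = returns[split_date:]

    print('In-sample Sharpe:     {:.2f}'.format(ep.sharpe_ratio(in_sample)))
    print('Out-of-sample Sharpe: {:.2f}'.format(ep.sharpe_ratio(out_of_sample)))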

Long the Market: This one is seductive, but relatively easy to stamp out. The market has historically mostly gone up; to take advantage of this, you can just buy an index or a basket of long stocks. Vanguard provides this service with an expense ratio of 0.04% (as of Jun-09-2017). Neither Quantopian nor Quantopian's clients want to pay 10% of the returns for a beta that can be bought much more cheaply elsewhere! Look at slide 12 and you'll see an algorithm that is too long, while slide 19 has an algo that is not long the market. Your algorithm should be market neutral, with equal weights long and short on the exposure chart.
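
A quick way to check whether a backtest is quietly long the market is to look at its net dollar exposure over time. A sketch, assuming 'positions' is the pyfolio-style positions DataFrame (one column per asset plus 'cash'):

    # Net exposure should hover near zero for a market-neutral strategy.
    equity_positions = positions.drop('cash', axis=1)
    gross_long = equity_positions[equity_positions > 0].abs().sum(axis=1)
    gross_short = equity_positions[equity_positions < 0].abs().sum(axis=1)
    net_exposure = (gross_long - gross_short) / (gross_long + gross_short)
    print(net_exposure.describe())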

Beta to the Market: Market beta is very similar to being long the market in that you end up exposed to market movements, and that exposure is very cheap to find elsewhere. The difference is that market beta can emerge unintentionally, even in strategies that are equally long and short: systematic effects from your forecasting models can cause you to select stocks in a way that subtly increases your market beta. We provide tools to check for this. Slides 12 and 16 show tear sheets with uncontrolled beta, while the tear sheet on slide 18 keeps its beta exposure under control. The beta hedging lecture is a good place to learn about controlling your beta.
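
You can also estimate your realized beta directly by regressing your algorithm's daily returns against the benchmark's. A sketch, assuming 'algo_returns' and 'spy_returns' are aligned daily pandas Series:

    # The slope of the regression line is the realized market beta.
    import numpy as np

    beta, intercept = np.polyfit(spy_returns.values, algo_returns.values, 1)
    print('Realized beta: {:.3f}'.format(beta))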

Liquidity Risk (aka Not Using the Q1500US): We see a lot of algorithms that look good at small allocations but fall apart when the allocation gets bigger. It's one thing to put $10k into an illiquid stock, but it's another to put in $100k, let alone $1m. The large orders move the price dramatically, or are totally unfillable, even when you use state-of-the-art execution algorithms like we do. We cover this in detail in our lecture on slippage. Jess also covers this point starting at 18:10 of her talk. One of the best ways to avoid this problem is to use the Q1500US, a dynamic universe that contains only stocks that can handle large orders with relatively small price impact. Setting this universe before doing any kind of research or analysis can save you a lot of time.

Too Few Positions, or Excessive Exposure to a Single Stock: When a portfolio has a heavy weight in a stock, the portfolio takes a concentrated risk. An unexpected merger or acquisition, corporate fraud, or even a bad quarter can cause a steep drop in the portfolio value. We are looking for portfolios that avoid undue concentration risk and minimize their chances of having a big drawdown. One great way to guard against large drawdowns is to avoid letting your portfolio become too highly exposed to a single stock. A portfolio that holds hundreds to thousands of individual stocks will be well diversified with respect to outlier events that affect a single stock. The concentration risk lecture gives a real world example of the benefits of diversification across many holdings. Simply put, if you really think you can systematically beat the market, then you want to place as many bets as possible in an effort to do so. See the lecture for more information.
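
Concentration is easy to monitor from the same positions data used in the earlier sketch: track the largest single-name weight through time and make sure it stays small.

    # Sketch: largest single-position weight per day; persistently large values
    # indicate concentrated single-stock bets.
    equity_positions = positions.drop('cash', axis=1)
    weights = equity_positions.div(equity_positions.abs().sum(axis=1), axis=0)
    print(weights.abs().max(axis=1).describe())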

Sector Exposure: The US stock market can be broadly divided into a number of sub-groups of similar types of companies, referred to as ‘sectors,’ whose stock returns tend to be more tightly related to each other than to the rest of the broad market. Quantopian’s sector classification (sourced through Morningstar) assigns each stock to one of 11 sectors (e.g. Technology, Energy, Healthcare). When building a market-neutral trading algorithm, it is important to look for unintended exposures across these sectors. Some algorithms may be designed to be applied to only one or a few sectors, while other algorithms can be applied in a similar fashion across all sectors. In either case, the key is to study and control the net exposure (long or short) that your strategy can take on in any single sector. Algorithms that maintain a very low net exposure across all sectors, avoiding a large long or short bet within any sector, will be eligible for larger allocations in our selection process. Here is an example algorithm that uses the Optimize API to constrain sector exposure.
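
In broad strokes, such a constraint might look like the sketch below, which also folds in the dollar-neutrality and position-concentration points from the sections above. It assumes an attached pipeline that supplies an alpha column and Morningstar sector codes; the column names and bounds are placeholders, not recommendations.

    # Sketch: rebalance with the Optimize API while constraining gross exposure,
    # dollar neutrality, single-name concentration, and net sector exposure.
    import quantopian.algorithm as algo
    import quantopian.optimize as opt

    MAX_GROSS_EXPOSURE = 1.0
    MAX_POSITION_SIZE = 0.01
    MAX_SECTOR_EXPOSURE = 0.05

    def rebalance(context, data):
        pipeline_data = context.pipeline_data
        alpha = pipeline_data.my_alpha      # placeholder alpha column
        sectors = pipeline_data.sector      # Morningstar sector codes

        objective = opt.MaximizeAlpha(alpha)
        constraints = [
            opt.MaxGrossExposure(MAX_GROSS_EXPOSURE),
            opt.DollarNeutral(),
            opt.PositionConcentration.with_equal_bounds(
                -MAX_POSITION_SIZE, MAX_POSITION_SIZE),
            opt.NetGroupExposure.with_equal_bounds(
                labels=sectors,
                min=-MAX_SECTOR_EXPOSURE,
                max=MAX_SECTOR_EXPOSURE),
        ]
        algo.order_optimal_portfolio(objective, constraints)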

Inappropriate Risk-Driven ETFs: Different ETFs are constructed to be different risk packages. They may be leveraged 3 times, creating excessive single name risk (and high fees!). Other ETFs track risk indices like VIX, and have a history of market pricing failures. Quantopian doesn't give allocations to algorithms that are based on ETF risks like these. It is possible to use some ETFs in thoughtful ways, but only with careful risk controls.

Summary

In his book Inside the Black Box, Rishi Narang makes this pithy comment on risk: "So the key to understanding risk exposures as they relate to quant trading strategies is that risk exposures are those that are not intentionally sought out by the nature of whatever forecast the quant is making in the alpha model."

Narang’s insight summarizes everything I've discussed so far. When your algorithm is making money, you have to understand why it is making money. Is it riding a single hot sector, or a Fama-French factor, or the market as a whole? In order to construct a good algorithm, you need to know where the alpha comes from.

We’re making allocations already, and we plan on making more of them, and larger ones, as the year goes along. If you can find alpha while managing your risk, you could get one of them.

Disclaimer

The material on this website is provided for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation or endorsement for any security or strategy, nor does it constitute an offer to provide investment advisory services by Quantopian. In addition, the material offers no opinion with respect to the suitability of any security or specific investment. No information contained herein should be regarded as a suggestion to engage in or refrain from any investment-related course of action as none of Quantopian nor any of its affiliates is undertaking to provide investment advice, act as an adviser to any plan or entity subject to the Employee Retirement Income Security Act of 1974, as amended, individual retirement account or individual retirement annuity, or give advice in a fiduciary capacity with respect to the materials presented herein. If you are an individual retirement or other investor, contact your financial advisor or other fiduciary unrelated to Quantopian about whether any given investment idea, strategy, product or service described herein may be appropriate for your circumstances. All investments involve risk, including loss of principal. Quantopian makes no guarantees as to the accuracy or completeness of the views expressed in the website. The views are subject to change, and may have become unreliable for various reasons, including changes in market conditions or economic circumstances.

23 responses

Excellent post. Is it worth mentioning breadth?

I cannot find the PnL Attribution in my Pyfolio sheet. Perhaps I am not using the latest version?

It would be great if the datasets that are now interactive/Notebook-only are made available for Algo Pipeline....

Hi Dan -

Nice summary. I'd also refer folks to A Professional Quant Equity Workflow. Although there is some discussion of workflow in the blog post, it is really an architecture for the type of algorithm you'd like to fund (or rather that your current customer(s) and the general market would fund, since, as I understand it, you're the middle man in this enterprise, not the end source of capital).

As a template, I'd emphasize that the sample lecture algo is state-of-the-art (the template is also released on github). In particular, I've found the implementation of the Optimize API to be useful, since it automatically re-jiggers weightings to be more in line with the allocation requirements, and eliminates delusional conclusions about the goodness of the algo.

The other comment I'd make, to avoid over-fitting, is to run backtests back as far as the data will allow (and the Q platform will support, due to memory limitations, which can still creep in, even with more memory). One clue to over-fitting is huge draw-downs or other ugliness over the long backtest compared to a short one.

A nitty-gritty technical question is whether Quantopian has evidence that the alternative data sets offered, and futures, are significantly uncorrelated with your traditional minutely/daily OHLCV bar data and daily fundamentals. Without some quantitative analyses, I'm wondering if there is a systematic "the grass is greener on the other side of the fence" risk here? In the lingo of the recent (fascinating) book Black Edge: Inside Information, Dirty Money, and the Quest to Bring Down the Most Wanted Man on Wall Street, why would you think there is 'gray edge' in those data sets?

Hi Grant,

Thanks for the additional pointers. Another good practice in backtesting is to use a small sample of history, say a year, during algo development/debug/refinement. This way, you can reserve the full history for confirmation of your investment thesis. We will still require a significant walk-forward test for true out-of-sample validation, but you will have a better sense of your algo's efficacy.

We can't say for sure that a specific dataset has alpha until we select algorithms using the data and trade them. However, based on the data-driven algos we are seeing in our selection process, we think your chances for selection are better with data beyond price and fundamentals. That's why we want to encourage our community to fan out and search the available data for alpha.

You can explore the potential of a dataset in our research environment using Alphalens (step 2 in the pro quant workflow, "alpha discovery").

happy researching,
fawce


Dan: Thanks so much for this post. You summarize a lot of lessons that I found hard-learned.

Obviously the contest is just to get people interested in Quantopian, and these allocations are the real goal. Still, the contest rules helped me learn some of the lessons in your list, such as minimizing beta exposure and the fact that the contest does not permit leveraged ETFs. That made me ask "why does the contest not permit leveraged ETFs", and ultimately prevented me from wasting time on some unworkable leveraged ETF strategy.

In the future, I could see a few ways to encourage people to develop allocation-worthy algorithms by tweaking the contest rules. Basically, I would just adjust the badges:
- The "hedged" badge is not nearly strong enough. For allocations you require algorithms to be approximately dollar neutral. Weird people can still work around this, but who cares about them.
- The "positive returns" badge is taken care of by four of the metrics: returns, Sharpe, Sortino, and drawdown. Why not replace "positive returns" with "Q1500US", or another qualitative property you care about? Low sector exposure, low position concentration, uses pipeline, and generally "is tradable" come to mind.

My algorithm is at place #6 in competition #27 now, but unfortunately I know it is not eligible for an allocation, because it is not dollar neutral and trades esoteric ETFs. My focus is on allocations now, so in a way I don't care about the competitions. On the other hand, I might already have an allocation-worthy algorithm if I didn't spend so much time developing algorithms that were untradable for the above reasons.

Thanks again,

Doug

Awesome post!

Hey Grant, great points and thanks for pointing out our flagship template strategy. I think you may have posted a private strategy link so I'm putting our official lecture series link up here:

https://www.quantopian.com/lectures#Example:-Long-Short-Equity-Algorithm


@Tim try:

# bt is a backtest result object, e.g. returned by get_backtest() in research
bt.create_full_tear_sheet(round_trips=True, hide_positions=False)

@Dan Dunn, I liked Jessica Stauth's presentation; she's such a great speaker, and most of the questions that I had regarding Q allocations are well addressed there. I have a few questions left, if you don't mind answering them.

  • I found it interesting that all the algorithms presented there were analyzed starting from 2010 only. Do you look at earlier periods too? Aren't you interested in algorithm performance during the GFC? In my experience it is easier to find algorithms that perform well after 2010 but that did poorly just before that date.

  • (related to the previous question) Since Q aims to have a pool of algorithms where periodically some entries leave (once they stop performing consistently with expectations) and new entries come in, maybe it is better to consider a relatively recent time frame when analyzing algorithm performance, since you don't need to stick with an algorithm forever, in all market regimes, and can drop it as soon as its performance degrades. Do you agree?

  • I believe the benchmark used in the presentation is SPY. My understanding is that an algorithm that suits the Q hedge fund doesn't necessarily have to beat the market; that's not the point of a hedge fund. So when comparing a hedged algorithm to SPY over a time period like 2010-2017, where the market performed pretty well, it is fine to expect algorithm performance "worse" (smaller returns) than the market. Am I correct?

How important are absolute returns in the evaluation, if the other metrics are good enough for the fund?

In Dan's example of 3% returns, a $50 million allocation makes $1.5 million, of which the author earns $150k and Quantopian earns another $150k. I'm assuming there's a 2% management fee too, which is another $1 million for Quantopian if no leverage is used. The numbers are to the best of my understanding (2-and-20 fees); please correct me if they are wrong.

In this case, only $200k of the $1.5 million in gains will be left for the investor. That's a 0.4% return a year. In my opinion, keeping only 13.3% of the returns while shouldering the risks isn't a wise thing to do.

My question is: is a rock-solid, beta/sector/dollar-neutral strategy with a 1.5 Sharpe and 3% absolute returns good enough, or do we need to target higher returns for our algos, say 7-10%?

If 5x leverage is employed in the fund, is the management fee calculated based on the non-leveraged or leveraged value?

One confusion for me is that there has been emphasis on combining many alpha factors (e.g. the defunct 101 Alphas Project, the blog workflow post, the ML effort, and a recent best-practices example). Also, the Alpha Vertex folks claim to be following the same script of drawing from various data sources, including "price, company fundamentals, technical indicators, geopolitical, macroeconomic and news events" (see post). So, presumably, the best shot at Q funding would be with a multi-factor algo utilizing a variety of factors, based on OHLCV bars, fundamental data, and more off-beat, non-traditional data.

So, I'm wondering what you are looking for? If I construct a multi-factor algo that contains factors that utilize OHLCV bars and fundamental data, will it be penalized versus single-factor or multi-factor ones that exclude such traditional data sets? And supposing that I do include a mix of traditional and novel data sets (e.g. see this example where I include a StockTwits-based factor combined with some traditional factors), would it be necessary to unravel, point-in-time, the contributions of the various factors after alpha combination (potentially with ML) and the optimization/risk-management steps? Would it be necessary for me to break out the returns by traditional versus non-traditional factors? I'm not sure how to do that.

Basically, are you now penalizing algos that use factors based on OHLCV bars and fundamentals? Or are you encouraging folks to mix in some factors based on non-traditional data? Or are you looking for single-factor algos, that access one of your non-traditional data sets, so you can combine them with algos you already have in your portfolio that presumably are largely based on OHLCV bars and fundamentals?

Also, perhaps outside of the scope here, but it would be interesting to understand why futures are so attractive? They've been around a long time, right? And presumably lots of hedge funds trade in them. So, intuitively, they don't sound novel and un-tapped (although I suppose one could argue that unleashing the global Q crowd will reveal unknown inefficiencies).

@Kayden I suspect you won't get an answer, as the fees will be confidential and probably bespoke to each client. I think Dan is giving an "underpromise and overdeliver" answer for authors. Don't forget the investors in the fund will not just get one algo but a portfolio of them, which will increase Sharpe, and improve the return situation, especially with a bit of leverage.

I'm wondering if there might be some wiggle room in the Q licensing of data sets, so users could get feedback up to the present. For example, Contract Win Data has free data availability over 01 Jan 2007 - 16 Jun 2015. So, if I construct an algo using these data, I have decent coverage looking back, but going forward, I've constrained my algo development to mid-2015 (and I can't enter the contest). It seems like some sort of feedback could be provided beyond 16 Jun 2015. For example, I could imagine submitting the algo and getting back a tear sheet that would give enough information to get a sense of whether the algo has legs, but not so much information that it would infringe upon the licensing agreement with your vendor. Even a high-level score of 1-10, or a red/yellow/green indicator, would be better than nothing.

Another possible approach would be simply to leverage your automated algo evaluation system, and provide users with a summary, after the 6-month out-of-sample period has passed. This would not be optimal, since there would be a 6-month lag in feedback, but it would be better than no feedback at all.

Perhaps you could sort out how to allow entry into the contest, but provide less information to the entrants (e.g. hide the backtest, trades, etc.). You'd probably have to constrain the number of contest starts/stops per unit time, but otherwise it seems like it might work.

Any thoughts? It is hard to get motivated to apply data that is missing the most recent 2 years--the time frame that would presumably carry the most weight in your decision to fund the algo! In the absence of any out-of-sample feedback whatsoever, in my mind, it is kinda hard to justify the effort. I suppose one could consider the 2 years a hold-out period (as Fawce mentions above), but then not being able to access the hold-out data set is problematic (or maybe a blessing in disguise, to avoid over-fitting). But perhaps you are seeing lots of good algos come through that have been developed with the authors "flying blind" and my concern is not justified?

Deleted post mentioned by Dan Dunn below, to avoid Alice in Wonderland diversion.

@Luca For the purposes of this evaluation process we look at tear sheets that start in 2010. If the algorithm passes this screen, it is evaluated on other time frames as well.

We definitely agree that algorithms generally decay over time, and when an algorithm gets an allocation, it is not permanent. When the decay becomes problematic the allocation is reduced or ended entirely. That evaluation process is another post all by itself, maybe someday in the future.

That is correct; it's fine for an algo's return to be less than SPY's. If the Sharpe is high enough, it's plenty interesting even with below-SPY returns.

@Kayden You're asking questions that are difficult for me to answer in this environment. That said, I can make two general clarifying points. 1) Leverage is used. 2) Industry practice is to charge a management fee on the unlevered investment.

So, yes, 3% absolute returns would generally be good enough.

@Grant You ask what we want, beyond what was described in the previous 2000 words. Let me try an analogy to bring the point home: Quantopian gives allocations to people who build nice houses. We want the house to provide shelter, and we want it to be safe - safe from rain, hot days, cold days, and even floods and earthquakes. It doesn't have to be an architectural wonder, just cozy and safe. We check the house carefully, make sure it has good design, a good foundation, and is built from high-quality materials. Unfortunately we find a lot of houses that look good but have a fatal flaw - a leaky skylight, or a rotten beam, or are built on a swamp.

In that analogy, you're asking us whether we like houses built with wood, or with a flat roof, or a steel frame, and whether we want vinyl siding or wood. There's no right answer there - it all depends on how you use the materials.

We are saying, however, that we are partial to brick houses (alternative data). It still has to be a well-constructed house, but we think brick is nice. We also continue to give allocations to all sorts of different houses, because we love variety. But brick is particularly nice.

Why futures, you ask? Because we like allocating to a variety of strategies, and futures are part of the variety we seek.

@Grant2 Your point is well taken. Some of the data sets have restrictive time frames that are a drag on their implementation.

@Grant I'd rather not follow these threads down a rabbit hole, so perhaps you can create a new topic or email me privately with follow up.

@Burrito Dan: Many thanks, Dan! This option does indeed result in additional information being displayed. Unfortunately, the information on the use of the Q1500US stocks is still not there.

I would like to access and research the data sources, but the free versions are too limited to try (backtesting and forward testing), and paying for access just to find alpha for an allocation is not my thing. Maybe it's better to charge for the use of those sources only when someone uses them for their own trading accounts?

@Quantopian, do you have any quantitative metrics on risk targets? I know you want a beta of 0, dollar neutrality, and sector neutrality. Beta seems to be the most important because of how the contest is structured (absolute value of less than 0.3); but are there any tolerance values to suggest for dollar neutrality and sector neutrality?

I'm putting in a couple contest entries that are using the optimization API and I'm looking for some guidance on what min/maxes I should provide as constraints to be considered for allocation. I've put in constraints of +/- 0.01 beta, +/- 0.01 sector neutrality, and +/- 0.1 dollar neutrality. Are these too conservative or too aggressive or about right?

Dan, I have a question about liquidity risk. In your initial post you mentioned "It's one thing to put $10k into an illiquid stock, but it's another to put in $100k, let alone $1m". I have been going through a couple of the examples that have been posted for the optimal portfolio, which use about 0.5% maximum position concentration per stock. I ran some numbers on a stress test involving $50m of capital; a 0.5% position would be $250k. Your heading "Liquidity Risk (aka Not Using the Q1500US)" implies that if we use the Q1500US we won't have liquidity risk, but a lot of stocks in the Q1500US are small caps with less than a billion in market cap. It appears to me that the statement only holds if we backtest with $1m or less, or maybe $5m or less, and would not hold for a backtest with something like $50m, considering we might end up taking a $250k position in a Q1500US stock that has less than a billion in market cap. Are we supposed to somehow account for this in our algorithm with a filter on market cap based on how much capital is being used? Something along the lines of:

If capital < $1m: Q1500US
If capital between $1m and $10m: Q1500US and market cap > $1B
If capital between $10m and $50m: Q1500US and market cap > $2B

Please let me know if we are supposed to use these kinds of additional filters on top of the Q1500US.

What leverage should we use for algorithms we want considered for an allocation? The obvious options would be:

  • Use 1.0 gross leverage, because you would leverage it as appropriate on your end anyway, depending on the risk metrics.
  • Use whatever leverage results in the most balanced return and risk metrics, but keep the gross leverage e.g. below 3.0.

Hi,

I want to understand what turnover is appropriate for an allocation. I have 2 algorithms with very high Sharpe ratios and very high turnover. The Sharpe comes down if I impose a constraint on turnover. My question is: for a maximum allocation, what is the typical turnover constraint I should set in my algorithm?

Best regards,
Pravin

Super, thank you!