Seeking honest criticism/suggestions about asset choice and method

Hello,

For some context before I go further: I have not yet graduated college, but I have managed to save enough capital to bypass day-trading rules (enough that a series of moderate drawdowns wouldn't put me below the requirements), and I have some live discretionary options trading experience (turned a profit, but only beat the benchmark in a single year). I have some prior experience building algorithms from my time on Rizm and QuantConnect, but I have not yet live traded any algorithm, although the backtests showed potential.

I plan to double major in mathematics and computer science in the hopes that, whether or not I end up working for a firm, I can at least learn the skills to improve my chances of building a benchmark-beating investment that I have full control over.

My goal is to create an algorithm with a Sharpe ratio of at least 2.0 (I have managed to reach 1.6 with an algorithm before) that will trade UPRO or SPXL and SPXU so that it can be run 'long-only' through Robinhood to avoid commissions. I would like to put an initial $40,000 into this algorithm.

The idea sounds good to me, but I am also pretty young and not the most experienced camper out here.
I believe I can accomplish this within the next two years, but I am aware that I could be severely underestimating the difficulty.

Could someone with experience chime in and maybe throw me a hard knock if I need it?


I've been using Quantopian for about a year and a half, live trading for one year. The best advice I can give is "don't trust your backtests." I could go on all day about the minutiae of live trading with Quantopian, but really it comes down to these factors:

  1. Backtests lie: Hindsight/overfitting bias is nearly impossible to get rid of. Live trading results going forward must be frequently checked against backtests to ensure that your algorithm is still performing as expected. Any predefined variable in an algorithm is a potential source of overfitting. The best thing you can do is put numbers into the variables that "make sense" based on sound logic, and if your returns aren't what you expected, don't tweak the numbers too much. You can also do a sensitivity analysis (see the sketch after this list): if a 20-period moving average gives great returns and periods 19 or 21 give bad ones, DO NOT USE THAT ALGORITHM; the moment the market shifts (probably tomorrow), your returns will go to shite. Also, backtests use a slippage model that doesn't mirror reality. Quantopian's data source is minute bars and doesn't include any information about bid/ask prices or how the price moves within each minute. This isn't a huge problem for highly liquid ETFs like SPXL, but for any stock that's not in the Q1500 universe, or for thinly traded ETFs, you can throw the backtest out completely (you can still profit, just don't expect to match backtested results).
  2. Make your algorithm robust: it needs to be able to be stopped and started at any time without causing a glitch. For example, orders normally execute at market open; what happens if you need to restart your algo at noon? Entering orders tends to be finicky on Robinhood, particularly in the morning, so your algo needs to detect rejected orders and resubmit them. Limit orders are preferred to avoid bad fills, but they are even more finicky than market orders. Occasionally Quantopian will run slow and take longer than 50 seconds to execute some code; your algo will error out and will need to be restarted. Every Sunday night I need to reconnect (re-log on) to Robinhood with my algorithms. I also check every morning prior to market open because, rarely, it will disconnect during the week.
  3. Leverage needs to be strictly controlled. I've never had success using order_target_percent(); you want a function that manually calculates exactly how you want to submit your orders every time, always sells before buying, and waits for open orders to clear. Robinhood rejects any order that goes over your cash value (max leverage = 1.0).
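For point 1, here is a minimal sketch of what a parameter sensitivity analysis could look like. It assumes a hypothetical backtest_sharpe(period) helper that re-runs your strategy with a given moving-average period and returns its Sharpe ratio (Quantopian has no such one-liner; in practice you'd run separate backtests):

```python
def sensitivity_sweep(backtest_sharpe, center=20, width=5):
    """Backtest every parameter value near `center` and collect Sharpes."""
    periods = range(center - width, center + width + 1)
    results = {p: backtest_sharpe(p) for p in periods}
    # A robust parameter shows a smooth plateau of Sharpe ratios across
    # neighboring values; a lone spike at `center` (20 is great, 19 and
    # 21 are bad) is a classic red flag for overfitting.
    return results
```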

Also, I wouldn't count on ever getting a Sharpe = 2.0 algorithm in real-life trading, especially with UPRO/SPXL/SPXU/VXX/XIV/JNUG/DUST/TVIX, yadda yadda yadda. Your risk tolerance might be high, but big drawdowns will destroy your Sharpe and your account value. I personally don't trade any algorithm that has more than a 15% drawdown in a backtest, because that will probably be a 20% drawdown in real-money trading, and then it's impossible to tell whether your algorithm is failing because of overfitting or whether it will recover. By the time you figure it out, you could decimate your account.

Good luck

I really appreciate the immediate help.
I have some questions/comments if you or someone else cares to address them:

  1. I was aware that historical bias and curve-fitting are sinister threats to the algo trader, but beyond swapping constants for dynamic values as parameters and backing changes with theory, is there anything else I can do? Also, I actually sent a question to support about what Quantopian's historical prices are based on right before I checked this, haha. Is there a way to check/use historical bid/ask data? I assume they would have that data at least for pay-to-play usage, hopefully for free.
  2. I'm not too familiar with order execution involving automation, but now you've piqued my curiosity. I'll look into dealing with bad/rejected orders. Am I okay to presume that overnight holding strategies decrease the frequency of manual intervention?
  3. Could you specify what 'never having success using order_target_percent()' means? I was under the impression that using that method was fine as long as the algorithm was programmed to follow strict positioning rules (e.g., if a determined stop price for a position is reached, cancel all open orders and set target_percent to 0).

Lastly, would you be able to tell me what a decent Sharpe cutoff is for a real-life trading algorithm? After live testing (not real-money testing) an algorithm, at what minimum Sharpe would you say it's trustworthy enough for some real money?
While I haven't run into real-money algo drawdowns, I have to say that you must have quite a pair on you to tolerate drawdowns of more than even 10%.

Again, thank you very much for the sound advice

Hello Damon,

Great questions. I would pay attention to everything Luke said; he hit on some very key points.
1. We do not currently have bid/ask data.
2. The main thing is to think about your algorithm as a state machine: you get the current portfolio state, compute a desired one, and then order the difference (see the sketch after this list). Beyond that, everything is optimizing your orders so as to pay less in transaction fees and slippage.
3. Large orders will not fill immediately; they will gradually be filled as sales take place. As such, if you plan to be a large percentage of the market for any instrument, you have to be a lot more careful. It's very easy to put out an order, have it not fill, check again in an hour, put out a second one, and then come back an hour later to find you've bought double the amount you wanted.
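A minimal sketch of the state-machine idea from point 2, using Quantopian's built-in portfolio and ordering functions; compute_target_weights() is a hypothetical stand-in for your own signal logic:

```python
def rebalance(context, data):
    # Desired state: {asset: portfolio weight}, from your own model.
    targets = compute_target_weights(context, data)
    port_value = context.portfolio.portfolio_value
    for asset, weight in targets.items():
        price = data.current(asset, 'price')
        desired_shares = int(weight * port_value / price)
        held_shares = context.portfolio.positions[asset].amount
        # Order only the difference between desired and current state.
        diff = desired_shares - held_shares
        if diff != 0:
            order(asset, diff)
```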

In general the lower AUM you're trading, the higher Sharpe you can get. Some prop shops can push Sharpe up past 2, but there's debate about whether that's sustainable and whether you can do that at mid frequency now. I would say start with a Sharpe greater than 1; however, it is very important to remember that Sharpe is just a relative number for comparisons. It's more about what your best alternative is, not about the absolute value. In practice, remember that algo trading is highly AUM dependent. The types of strategies you would develop to run a smaller amount of personal capital are often very different from the type that, for instance, Quantopian is seeking to make allocations to.

As far as instruments, just keep in mind that the fewer instruments you use, the fewer bets you're making. As such your models have to be correspondingly better. Generally it's better to add a lot more instruments to a strategy. For a strategy that trades more instruments you might look at something like a pair trade.

Lastly, keep in mind that your goal with any kind of algorithmic trading is the same: maximize out-of-sample alpha*, subject to risk, universe, and slippage constraints. It is an optimization problem, which is why we developed our Optimize API (a minimal sketch follows the footnote below). Just swap in whichever constraints make sense for your problem and work away at developing predictive models that can produce alpha.

  * Alpha is defined as returns unexplained by your risk model, so if you set your risk model to have no factors, alpha is just forward return.
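A minimal sketch of that optimization framing with Quantopian's Optimize API; context.my_alphas (a pandas Series of per-asset alpha scores) is a hypothetical input you would compute yourself, e.g. in a pipeline:

```python
import quantopian.optimize as opt
from quantopian.algorithm import order_optimal_portfolio

def do_rebalance(context, data):
    # Objective: maximize expected alpha from your predictive model.
    objective = opt.MaximizeAlpha(context.my_alphas)
    # Constraints: swap in whichever make sense for your problem.
    constraints = [
        opt.MaxGrossExposure(1.0),  # cap leverage at 1x
        opt.PositionConcentration.with_equal_bounds(min=-0.05, max=0.05),
    ]
    order_optimal_portfolio(objective, constraints)
```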

For more I would check out the following resources:
https://www.quantopian.com/lectures/portfolio-analysis
https://www.quantopian.com/lectures/long-short-equity
https://www.quantopian.com/lectures/example-long-short-equity-algorithm
https://www.quantopian.com/lectures/factor-analysis
https://www.quantopian.com/lectures/the-capital-asset-pricing-model-and-arbitrage-pricing-theory


Often when I'm trying to figure out how to do something in Quantopian I'll search through the forum to see if there's some code that'll show me how to do this or that. Typically I'll come across some algorithm where the author has posted a backtest that displays great results -- everybody agrees it's a great algorithm. So I'll clone it, run a backtest starting from the day it was posted until today, and almost invariably the algorithm loses money, or at least severely underperforms SPY with insane volatility, from the day it was posted. Almost every single time! It's shocking how common it is, and it's a real eye-opener. I think the same thing happened for most of the handful of early Q Open winners as well.

I think the lesson here is that it's extremely hard -- even for very smart people who should know better -- not to introduce some sort of overfitting bias when coding algorithms against historical data. You need to apply rigorous skepticism to your own work. Always reserve some time period for an out-of-sample backtest, and then test in paper trading on live data. Always be conscious of the various biases you could be introducing into your work.

My best algo so far that I believe is mostly free from overfit bias has a 3.85 Sharpe (1.10 alpha!) in a 2006-to-today backtest, which includes an out-of-sample period. Currently paper-trading it to confirm... Unfortunately it's not the type of algorithm that would scale well. I posted another algo to the forum the other day that has a 1.94 Sharpe (2006-today) -- it also appears it wouldn't scale well. Considering that I was a humanities major and that the last math class I took was in high school (and I almost failed), I think your goal of a 2.0 Sharpe is perfectly reasonable.

You can basically sculpt any kinds of backtest stats you want, but if it's not free from bias it's worthless.

I'm also just getting my feet wet here, so I can't tell you anything super insightful, but if I were you I'd be more open to possibilities. I doubt you'll get over a 2.0 Sharpe working just with UPRO or SPXL and SPXU. You might possibly be able to get significant alpha, but you'll get huge drawdowns. I wouldn't be so afraid of 10% drawdowns; those aren't so bad. If you simply buy and hold SPY, you'll get worse than that! But if you want to minimize drawdowns, it'll require diversification. I think you can get a 2.0 Sharpe and single-digit drawdowns with a different kind of strategy -- one that is more diversified against any single risk factor. When you happen upon an algorithm generating significant alpha, high Sharpe, low beta, and market neutrality, you may find it matters more to be on a brokerage platform that supports shorting than it does to avoid commissions. So in short, my other main piece of advice is: don't be too set on specific criteria. Just work on different ideas and be open to whatever you discover along the way.

I'd be curious to hear people's success stories -- what kinds of stats people are getting on their live-traded algos.

@Delaney Granizo-Mackenzie
I've never seen AUM before, so I'm going by the Google definition of 'assets under management.' Referring to your statement:

"In general the lower AUM you're trading, the higher Sharpe you can get. Some prop shops can push Sharpe up past 2, but there's debate about whether that's sustainable and whether you can do that at mid frequency now."

Do you mean that algorithmic trading with less capital/assets allows a higher possible Sharpe, and that firms managing large amounts of capital typically don't manage to reach/maintain a Sharpe of 2? Also, mid-frequency, generally speaking, refers to strategies operating on the 5m-30m timescale, correct?

I appreciate all the helpful links.

@Viridian Hawk,
I do plan to keep an open mind in terms of trading strategies, and I appreciate the recommendations from both of you to include more instruments. After some success in research today, I will at some point use Pipeline and trade more diversified assets. However, I am fairly confident that my plan for a 'long-only' SPY 3x leveraged dual-direction algorithm can be done with enough hard work and enough sophistication. But who knows? After my findings today, I wouldn't be surprised if I scrap this idea after a pipeline-using algo born of curiosity blows me away.

Thanks for your help, guys; the community here never ceases to astound me with its hospitality.
Also, I too would enjoy hearing some stats on live-traded algos, just to get a clearer picture of what I should be aiming for. Hope to hear from others.

Lots of great tips in this thread!

To respond to your questions:

I was aware that historical bias and curve-fitting are sinister threats to the algo trader, but beyond swapping constants for dynamic values as parameters and backing changes with theory, is there anything else I can do?

It's not about not having constants; it's more about the sensitivity of your algorithm to the constants in question. My algorithm has a parameter that is set to look at a 17-day history to get its best results. That's a very specific look-back period, but it gets good returns with pretty much any medium-term look-back period, so I'm not worried. It's OK to have a constant parameter that "means" something, one that gives your algorithm additional information about the market condition.

I'm not too familiar with order execution involving automation, but now you've piqued my curiosity. I'll look into dealing with bad/rejected orders. Am I okay to presume that overnight holding strategies decrease the frequency of manual intervention?

Delaney said it better than I did. Basically, if your order isn't filled instantly, it opens you up to a few different problems: you lose the ability to control how much stock you get with order_target_percent() because it can't detect that you already have open orders placed (see the sketch below). That effect is compounded on Robinhood because you will usually want to use limit orders to get the best price. Limit orders also delay the execution of an order, which goes back to the low-liquidity problem. You should never NEED to manually intervene; you can code a robust enough algorithm to cover every edge case.
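A minimal sketch of guarding against that open-order problem with Quantopian's get_open_orders():

```python
def safe_order_target_percent(asset, pct):
    # order_target_percent() sizes against current holdings only, so an
    # order already in flight would be double-counted. Skip if one exists.
    if get_open_orders(asset):
        return None
    return order_target_percent(asset, pct)
```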

Could you specify what 'never having success using order_target_percent()' means? I was under the impression that using that method was fine as long as the algorithm was programmed to follow strict positioning rules (e.g., if a determined stop price for a position is reached, cancel all open orders and set target_percent to 0).

You might have special conditions or modifiers that you want to apply before you order a stock, like minimum thresholds for executing an order, tracking day trades, or some risk management. One example: if you use order_target_percent(), you can't easily know whether a position will be bought or sold, so it's hard to tell the algorithm to sell first. If you don't sell first, then the buy order is executed with money that isn't in your account yet (rejected order or high leverage). By the time you've calculated whether an order will buy or sell first and accounted for that, you've already done all the calculation needed to just use order(). For these reasons, I like to calculate exactly how many shares of each stock my algorithm needs to order and the exact sequence in which to execute those trades (a sketch below). It just makes everything a lot better when it comes to real-money trades. It's OK in a backtest to order with order_target_percent(), but it's not really ideal.
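A minimal sketch of that sell-first, exact-share-count approach, using Quantopian's built-ins and assuming a targets dict of {asset: weight} from your own logic:

```python
def compute_share_diffs(context, data, targets):
    """Exact share difference per asset: positive = buy, negative = sell."""
    diffs = {}
    for asset, weight in targets.items():
        price = data.current(asset, 'price')
        desired = int(weight * context.portfolio.portfolio_value / price)
        diffs[asset] = desired - context.portfolio.positions[asset].amount
    return diffs

def place_orders(context, data, diffs):
    # Sells go out first so their proceeds free up cash for the buys
    # (Robinhood rejects any order that exceeds your cash value).
    for asset, diff in diffs.items():
        if diff < 0:
            order(asset, diff)
    # Submit buys only once nothing is still working; schedule this
    # function to run again later and it will pick up the buy side.
    if not get_open_orders():
        for asset, diff in diffs.items():
            if diff > 0:
                order(asset, diff)
```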

I'd be curious to hear people's success stories -- what kinds of stats people are getting on their live-traded algos.

It's hard to say because I started small and added money every month (doing so meant I had to restart my algorithms, because added money is counted as "gains"). I also fought some really hard-to-pin-down bugs in my code involving fetch_csv(). (PROTIP: do not use fetch_csv() in live trading!!) Each time my algo errored out, it needed to be restarted, and every restart loses my record of gains/losses. To make it worse, I used two different Robinhood accounts (mine and my wife's) to trade two algorithms at once. But after all that hassle, I'm up ~30% since November 2016 with <5% drawdown (not counting screw-ups from me or Quantopian). My Sharpe is somewhere around 1.5, if my backtests are to be believed.

Timing the market is widely recognized as an extremely difficult problem. If you do manage to solve it for the S&P500, don't you think you could apply that knowledge either directly or indirectly to trading other securities? You'd get the added benefits of diversifying your risk and increasing capacity. Anyways, just be careful that your "enough sophistication" isn't overfitting! If you do manage it, post back and let us know what kinds of results you're getting. I'm curious what method(s) you're thinking of employing.

Also, keep in mind that while 3x leveraged ETFs can theoretically produce near-3x returns in strongly trending markets, in a sideways market they decay in value due to the compounding of daily losses and gains.

Look what happens at 3x if you have big jumps up and down:
at 1x: $100 + 100% = $200 - 50% = $100
at 3x: $100 + 300% = $400 - 150% = -$200
In this extreme example, at 1x you break even, while at 3x with the same market movement you end up 200% in the hole! SPY obviously doesn't jump around to such extremes on a daily basis, and a 3x ETF can't actually lose more than you paid for it, but nonetheless all those little moves up and down slowly contribute to the same compounding value-decay effect. All leveraged ETF prospectuses warn you to avoid holding them for extended periods -- ideally not even overnight.
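A minimal sketch of that decay compounding over a year, with an exaggerated market that alternates +5%/-5% daily (assumes a perfectly rebalanced 3x fund and no fees):

```python
unleveraged, leveraged = 100.0, 100.0
for day in range(250):  # roughly one trading year
    move = 0.05 if day % 2 == 0 else -0.05
    unleveraged *= 1 + move    # 1x: each up/down pair costs ~0.25%
    leveraged *= 1 + 3 * move  # 3x: each up/down pair costs ~2.25%
print(round(unleveraged, 2))   # ~73.13
print(round(leveraged, 2))     # ~5.82
```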

@Damon

Yes, the smaller your AUM, the less slippage you will face, and so the more nimble you can be. You can even trade small-cap stocks that a large position couldn't touch due to lack of liquidity, and because the big-time managers can't trade small caps, there are plenty more inefficiencies there to arbitrage away. Basically, the larger the dollar amount per stock you want to trade, the more costs and restrictions you'll face, and your realized Sharpe will then be lower.

I have been able to find algorithms posted more than a year ago that will still generate a profit in a more recent backtest, although I'm not sure how well they perform in live trading. A lot of them assumed zero commission, so they were meant for Robinhood, and I saw another post saying that Robinhood places market orders as limit orders at 1.05 * the price. It might be more realistic to use limit orders instead of market orders in the backtest, and from reading posts on other forums, it seems like experienced traders do that anyway.

@Eric, you can also shoot yourself in the foot with limit orders. Too conservative a limit buy order on an asset that's rapidly gaining value won't fill, and you can lose out on those tremendous gains. Likewise, too ambitious a limit sell order can leave you stuck with an asset that is tanking. I think it's smart to use limit orders, but you should really take the stock's current movement into consideration when determining the threshold (a sketch below).
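A minimal sketch of a movement-aware limit price using Quantopian's built-ins; the 10-day look-back and the 0.25 fraction of average daily range are arbitrary placeholders:

```python
def buy_with_adaptive_limit(context, data, asset, shares):
    # Rough volatility proxy: average high-low range over recent days.
    highs = data.history(asset, 'high', 10, '1d')
    lows = data.history(asset, 'low', 10, '1d')
    avg_range = (highs - lows).mean()
    last = data.current(asset, 'price')
    # Pay up by a fraction of the typical daily range so a fast-moving
    # stock can still fill, while capping how bad the fill can get.
    order(asset, shares, style=LimitOrder(last + 0.25 * avg_range))
```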

Hey Damon,

I have a few comments and a question. I'll start with the comments.

You wrote:

"My goal is to create an algorithm with a sharpe of at least 2.0 (I have managed to reach 1.6 with an algorithm before) that will trade UPRO or SPXL and SPXU so that it can be run 'long-only' through Robinhood to avoid commissions. I would like to put an initial $40,000 into this algorithm."

I applaud you for having goals. Without goals in life, where would we be? Haha. But I'd like to offer you a perspective you may have never considered before.

Have you ever considered that having goals can be a counter-productive way to approach your trading? The overarching goal of professional speculation is to create a strategy with a sustainable edge and to manage your risk along the way. When I say “a strategy with a sustainable edge,” you could also think of it as one that has positive expectancy -- i.e., the combined odds (winning %) and the expected payoff are sustainable into the future and in your favor. Think a 50% win rate and getting paid 2-to-1 every time you place a trade. The odds are clearly in your favor, so you’d want to play that game over time.
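A quick back-of-the-envelope of that example, just restating the 50% / 2-to-1 numbers above:

```python
win_rate = 0.50  # probability a trade wins
payoff = 2.0     # units won per unit risked when it wins
loss = 1.0       # units lost when it loses

expectancy = win_rate * payoff - (1 - win_rate) * loss
print(expectancy)  # 0.5 -> half a unit of risk earned per trade, on average
```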

Now why don’t we try to think about it in a similar, but slightly different, context from algo design. Take a professional poker player’s perspective. They’re counting cards and keeping track of what the other players are doing, all in an effort to discern what the other players might be holding. Then, based upon that information, they size up their expectancy (odds of winning and expected payoff) and make a bet accordingly. They might think, “Well, Joe over there checked on the flop and raised on fourth street. I think he’s holding either two pair or he’s working on an outside straight. I’ve got three of a kind. Right now, I need to risk $X to potentially get paid out $Y, and the odds of me winning are approximately Z%. So yeah, this bet has positive expectancy. I may not win the hand, but if I take this kind of bet over and over again, I’m going to come out ahead in the long run.”

That’s how you should be looking at your algo design. Setting a performance goal like “Sharpe ratio of at least 2” introduces unintended consequences and counter-productive thoughts. It’s the equivalent of a professional poker player saying, “OK… I need to make $1,000 every hour. I gotta do it. It’s my goal.” What might happen if he imposed that kind of demand on his decision making? He’d take shots that just weren’t there because of some arbitrary, self-imposed goal. “Well, I’ve only made $300 so far this hour and I’ve got 10 minutes left. I guess I REALLY have to win this hand if I’m going to meet my goal.” But what if he got dealt a mediocre hand? Should he try to meet his goal even when the opportunity isn’t there? That’s what would be going on in his head, and it’s the opposite of what he should be doing.

Is it possible your self-imposed goal of “Sharpe ratio of at least 2” could cloud your judgement as you’re designing your algorithm? Only you can decide for yourself, but in my experience it’s one of the silent killers of aspiring quants.

Here’s a piece of advice:

Take some time and look into the performance track records for professional money managers. The hedge fund space would be a great place to start. Go look at the firms with the longest track records (20+ years) and examine their sharpe ratios. Are they above 2? Are they even above 1? These are some of the best and the brightest in the world. REAL people who have managed and compounded REAL money over a career. What do their performance track records look like? And what does it make you think about the goals you currently have in place?

And lastly… a question.

You wrote:

“... but have not yet live traded any algorithm so far although the backtests showed potential.”

Why is that? Does the answer start with a “C” by chance? ;)

@Jason Klatt,

If the answer that starts with "C" you are suspecting is "college," then you are correct. After building my most promising algorithm on QuantConnect, I went through a wandering-mind phase where I wasn't sure what I wanted to do. My solution was to take generally required courses and drink on weekends while waiting for my subconscious to throw me a decision; this proved to be a prudent choice, as I suddenly realized that I missed the algorithmic trading scene toward the end of spring '17. By then I had forgotten how to read and write C#, so I taught myself Python on the fly and migrated here.

You called my attention to a pretty important point that reminds me of a chapter in "Reminiscences of a Stock Operator" telling the story of men bent on "making the market pay for their lunch," only to lose their money. I get where you're coming from, and I'm very thankful for the recommendation to scope out real returns and Sharpe ratios by example.

@Luke, Viridian, Eric,

The warnings regarding proper order management have been duly noted; I would not have seen such things coming on my own. Credit for the coming revisions to my algorithms and my future coding practices goes to all of you.

@Viridian

Prior to reading this, I gave it great thought while taking the train, and you are right: if I had a magic black box that could turn money into more money when fed a stock symbol, it should be fed many stock symbols to see which feeds it best. Also, the 3x ETF decay issue was lingering in my mind when I started this thread, but your quick example has solidified my rational concerns.

I actually had an algorithm giving me euphoric backtests (~53% a year, ~1.87 Sharpe) that, as you aptly put it, had the s*** curvefit out of it, as I later realized. It was heartbreaking, but I knew it needed to be scrapped, so I did, and I started over from scratch. Believe me when I say 'sophistication' will not involve curve-fitting if I can do anything about it.

I've still got a few things under construction in terms of market-timing logic, but I've hit something promising that could prove to be my 'secret sauce.' I'd love to share some of the 'sauce' once the recipe starts coming along, but for now I'll keep it under wraps and just say that, as a college student without much knowledge of mathematics/finance/computer science, I derive performance from non-obvious combinations of common tools.

For example, the scrapped curvefit algo consisted of basically 8 entry "clauses" (4 long, 4 short, mirrored), which involved price-action patterns, an 8-period EMA, multiple SMAs, and a 20-period EMA, plus a trade 'filter' that only allowed positions to be entered if the absolute value of the 18-period RoC of the average of a 14-period EMA of highs and a 14-period EMA of lows was greater than 0.0009 (a sketch of this filter is below). The purpose of the complex filter was to only allow trades if a stock's price was moving fast enough. Writing out all the clauses is tedious, so I'll give a relatively straightforward one as a short example: if the absolute value of the filter's RoC was indeed higher than 0.0009 but a standard 8/20 EMA crossover had not happened yet, the algorithm still entered if the distance between the 8 EMA and the 8 SMA had been increasing for the last 3 bars (given that the price-action criteria were met).
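A minimal sketch of that filter, assuming daily OHLC data in a pandas DataFrame with 'high' and 'low' columns; the spans and the 0.0009 threshold are the ones quoted above, and the rest is my guess at the intended implementation:

```python
import pandas as pd

def fast_enough(ohlc):
    # 14-period EMAs of the highs and of the lows, averaged into a midline.
    ema_high = ohlc['high'].ewm(span=14, adjust=False).mean()
    ema_low = ohlc['low'].ewm(span=14, adjust=False).mean()
    midline = (ema_high + ema_low) / 2
    # 18-period rate of change of that midline.
    roc_18 = midline.pct_change(periods=18)
    # Trade only if price is moving fast enough in either direction.
    return abs(roc_18.iloc[-1]) > 0.0009
```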

I rely a lot less on pre-existing technical indicators now, and I guess in a sense I "code my own" using price data and other statistical methods.

@Luke Izlar

Would you mind sharing how long it took you to reach the point in November '16 where you had algorithms competent enough for real-money trading?

@Everyone
Thank you for the help.

@Damon

Interesting. I thought you were going to say Confidence not College. ;)