@Dan,
Thanks so much for your feedback! I really appreciate it, and it's all spot on. None of my factors are very symmetrical, and in general I do tend to think of 'long' ideas and then short the opposite. I might try the reverse: think of which stocks I'd like to short (over-valued, highly leveraged, low return on capital, etc.) and go long the opposite. As Charlie Munger has been known to say: "Invert! Always invert!"
Having different weights for longs and shorts for each factor has also been on my list of things to try, and with the round_trips=True switch on in the full tear sheet, I think it's evident that it's worth exploring (mostly minor tweaks though, as I worry about overfitting if I do too much).
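In case it's useful to anyone following along, here's roughly how I picture doing it: a minimal sketch that scales the long and short sides of a combined alpha Series separately before handing it to MaximizeAlpha. The idea of a single combined Series and the scale values are just my assumptions:

def scale_alpha(alpha, long_scale=1.0, short_scale=0.7):
    # Scale positive (long) and negative (short) signals separately before
    # passing the Series to opt.MaximizeAlpha. The 1.0/0.7 split is purely
    # illustrative, not a tuned value.
    scaled = alpha.copy()
    scaled[scaled > 0] *= long_scale
    scaled[scaled < 0] *= short_scale
    return scaled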
Stop-loss orders I've tried, but I don't think the Optimize API supports them (I can understand they would be difficult to implement). Blacklisting, though, is a great idea which I hadn't thought of and which I'd like to try. I just need to figure out how best to do it first. :) It doesn't sound like it should be too difficult, but I'm still a bit of a Python novice.
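For my own notes, the simplest version I can think of (just a sketch, assuming the alpha vector is a pandas Series indexed by asset) is to drop blacklisted names before the optimizer ever sees them:

def apply_blacklist(alpha, blacklist):
    # Drop blacklisted assets (e.g. names that just produced a big losing
    # round trip) from the alpha vector so Optimize never opens new
    # positions in them; `blacklist` is any iterable of assets.
    return alpha[~alpha.index.isin(blacklist)]

If the Optimize API also has a constraint for forcing exits from existing holdings (something like CannotHold, if I remember the docs right), that would handle positions already on the books.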
@Jess & @Delaney,
Thank you so much for taking the time to provide feedback. I really appreciate it! I've only been on Quantopian actively for about 3 months now and it's been a steep learning curve (which I enjoy) so it's really encouraging to hear that I'm on the right track.
@Delaney,
Great suggestion on #2. As you said, I'll need to spend some more time with AL analysing each factor individually to look for seasonal trends (and differences in predictability between longs and shorts), and then try to adjust the weights accordingly without overfitting. Thanks also for your comment on #1; I'm glad you don't see it as a big issue. My aim is still to keep Sharpe above 1 consistently, even if it means the peaks will need to come down a bit. The challenge, as always, will be not to overfit (even unconsciously and unintentionally). I'm hoping this will be possible if I can reduce the impact of the short losses by adjusting weights.
Your comments and suggestions on #3 are spot on too. I think I have enough factors to work with, and I will indeed spend more time with AL to understand the strengths and weaknesses of each of them, and hopefully be able to make rational adjustments to reduce losing trades (and potentially trading costs). Would you recommend picking a sample period where the algo doesn't do too well in terms of Sharpe or drawdown? For example April 2013 to April 2014, when Sharpe is tanking? Or the period starting at the end of 2013, based on the Underwater plot?
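For reference (so you can sanity-check my workflow), this is roughly how I plan to look at each factor in isolation over such a window with Alphalens; `factor` and `prices` are placeholders for the usual run_pipeline and get_pricing outputs:

import alphalens as al

# `factor` would be a (date, asset) MultiIndexed Series from run_pipeline
# over e.g. April 2013 - April 2014, and `prices` a DataFrame from
# get_pricing for the same universe and window.
factor_data = al.utils.get_clean_factor_and_forward_returns(
    factor, prices, quantiles=5, periods=(1, 5, 10))
al.tears.create_full_tear_sheet(factor_data, long_short=True)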
Also, would you be able to expand on your comment below? I understand the concept, but not how to implement it when using the Optimize API and MaximizeAlpha. How would I go about doing this? Could you point me in the right direction please?
"You could even rotate your shorts far less frequently than your longs to reduce trading costs."
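The naive approach I'd picture (just a sketch of my guess, so please correct me) is to recompute the long side daily but cache the short side on context and only refresh it on a weekly schedule, then pass the combined vector to TargetWeights. The combined_alpha and refresh_shorts names are made up:

import pandas as pd
import quantopian.optimize as opt
from quantopian.algorithm import order_optimal_portfolio

def rebalance(context, data):
    alpha = context.pipeline_data.combined_alpha  # hypothetical factor column

    # Long side: recomputed every day, half the gross exposure.
    longs = alpha[alpha > 0]
    long_weights = 0.5 * longs / longs.sum()

    # Short side: only refreshed when a weekly schedule_function (not shown)
    # sets this flag; otherwise the cached negative weights are reused.
    if context.refresh_shorts:
        shorts = alpha[alpha < 0]
        context.short_weights = 0.5 * shorts / shorts.abs().sum()
        context.refresh_shorts = False

    weights = pd.concat([long_weights, context.short_weights])
    order_optimal_portfolio(
        objective=opt.TargetWeights(weights),
        constraints=[opt.MaxGrossExposure(1.0)],
    )

Is that roughly the idea, or is there a cleaner way, perhaps with Frozen constraints on the short book?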
A separate question: while I've specified rebalancing daily XX min after the open, does the Optimize API automatically VWAP large orders over the day to get the best fill prices (I imagine your PB would do this)? If not, would you recommend I implement this myself to be able to handle more capital and get better fills? Looking at the tear sheets, it seems to me it's already doing this, or am I wrong? If that's the case, would it not be better to start right from the open (essentially MOO, volume weighted), since volume tends to be highest at the open (and around the close)? Or is the potential slippage cost from the higher volatility right after the open not worth it?
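If it's not automatic, here's what I'd picture building myself (again just a sketch; the slice times are arbitrary): spread the rebalance over several scheduled calls and let the volume-capped slippage model split the fills:

import quantopian.optimize as opt
from quantopian.algorithm import order_optimal_portfolio

def initialize(context):
    # Re-target the same weights a few times during the morning; each call
    # orders toward the targets, and fills are capped by the slippage
    # model's 10%-of-minute-volume limit.
    for minutes in (5, 35, 65, 95):
        schedule_function(rebalance_slice,
                          date_rules.every_day(),
                          time_rules.market_open(minutes=minutes))

def rebalance_slice(context, data):
    # context.target_weights would be computed once in before_trading_start.
    order_optimal_portfolio(
        objective=opt.TargetWeights(context.target_weights),
        constraints=[],
    )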
@Jess,
Neither commission nor slippage was specified in this backtest, so it should be using the defaults, which I believe are indeed 5 bps cost at 10% of volume per minute bar. Just to be sure, though, I've now specified this under initialize, and I can't really see any difference in returns:
def initialize(context):
    # Make the defaults explicit: 5 bps per trade, fills capped at 10% of
    # each minute bar's volume.
    set_slippage(slippage.FixedBasisPointsSlippage(basis_points=5, volume_limit=0.1))
Awesome tip on the 'historical out-of-sample' test as well! My research was mainly on 2014-2016 and then 2016-2018, so the attached (2011-2014) is somewhat out-of-sample, and it shows... While not completely horrible, the returns are not nearly as good, and unfortunately Sharpe does indeed go quite a bit into negative territory for a while. This I'll need to address.
I would have liked to run backtests from further back (especially through the GFC), but unfortunately I get the error below when I try anything earlier. I think it's because one of the Morningstar fundamentals I'm using may not be available that far back. Does that sound correct?
TypeError: MaximizeAlpha() expected a value with dtype 'float64' or
'int64' for argument 'alphas', but got 'object' instead.
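For what it's worth, the workaround I'm planning to try (a sketch; it assumes the error is just missing values coming through as object dtype; if the field truly doesn't exist that far back, the vector would come back empty):

import pandas as pd

def clean_alpha(alpha):
    # Coerce the object-dtype factor column to float and drop assets with
    # missing fundamentals, so MaximizeAlpha gets the float64 it expects.
    return pd.to_numeric(alpha, errors='coerce').dropna()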
Though it may be of less interest now until I've addressed the negative Sharpe, I'll still send you and Delaney a direct mail explaining my high-level rationale behind the factors.
The attached tear sheet has slippage specified at 5 bps as above, and I'll explicitly specify this going forward. I'll attach another one for the full period as well, with slippage set to 5 bps. Also, I wanted to show you full tear sheets with $100MM and more of initial capital, but unfortunately I run out of memory whenever I try, even after killing all active notebooks and restarting the kernel. Do you think this is related to the amount of capital deployed, or just a coincidence? As far as I can see, there's not much impact on returns with 10x more capital.
Again, thanks so much for the feedback! I find it all extremely helpful. I'll have plenty to work on for a while now but if you or anyone else have any more feedback, I'm all ears. Cheers!
PS: Something funny is going on with the Common Returns from end of 2013. Is this a bug perhaps?