Examples of Parameter Optimization?

I'm new to Quantopian, and rather excited to see some of my ideas take shape. I was looking for ways to optimize certain parameters (such as the time periods on moving averages), and came across this post, which I believe is from a year or so ago; I'm not sure whether there have been any updates since:
Blog entry on Parameter Optimization

Has anybody heard of any of these techniques actually being implemented in an algorithm? I am not sure how to go about 'teaching' the algorithm to find the best parameter each frame (machine learning is new to me).

If you have an example, or have seen an example of actually implementing walk-forward optimization for, say, the three time periods involved in MACD calculation, please share! If I find a good way I will share as well.
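To make the question concrete, here is a rough sketch of the kind of brute-force walk-forward search I have in mind (plain pandas, untested; the parameter ranges and the scoring rule are deliberately naive placeholders):

```python
# Brute-force walk-forward search over the three MACD periods.
# `prices` is assumed to be a pandas Series of daily closes.
import itertools


def macd_position(prices, fast, slow, signal):
    """Return +1/-1 positions from a MACD/signal-line crossover."""
    macd = prices.ewm(span=fast).mean() - prices.ewm(span=slow).mean()
    trigger = macd.ewm(span=signal).mean()
    return (macd > trigger).astype(int) * 2 - 1


def score(prices, fast, slow, signal):
    """Total return of the crossover strategy over the given price window."""
    pos = macd_position(prices, fast, slow, signal).shift(1)
    return (pos * prices.pct_change()).sum()


def best_params(train_prices):
    """Grid-search the three MACD periods on the training window."""
    grid = itertools.product(range(5, 20, 3), range(20, 60, 5), range(5, 15, 3))
    return max(grid, key=lambda p: score(train_prices, *p))


def walk_forward(prices, train_len=252, test_len=21):
    """Re-optimize on each training window, then score the next test window."""
    results = []
    for start in range(0, len(prices) - train_len - test_len, test_len):
        train = prices.iloc[start:start + train_len]
        test = prices.iloc[start + train_len:start + train_len + test_len]
        params = best_params(train)
        results.append((params, score(test, *params)))
    return results
```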

28 responses

Hello Oskar,

You probably need to look at zipline which is the open source backtester that Quantopian utilises - see http://shelby.tv/video/vimeo/63273425/zipline-in-the-cloud-optimizing-financial-trading-algorithms

Thomas may be able to comment further.

P.

Peter,

Thanks for your response. I watched Thomas' talk and it is definitely along the lines of what I'm looking for. I would imagine that these techniques would not be reproducible in the Quantopian IDE, so I would have to do it on my local machine, no?

He only briefly mentioned walk-forward optimization, but it seems like something you could build into an algorithm that runs in the Quantopian environment: start with a guess for a parameter, use a learning period before executing any actual trades, and then, once you have an initial 'optimal' value, keep learning and adjusting as you move forward in time. In fact, now that I think about it, I don't see how that kind of optimization could be done 'outside' of the algorithm itself, the way Thomas calls the algorithm multiple times with different parameter values; it would have to happen as the algorithm works through the time-series data.
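Conceptually, I picture something like the sketch below (Quantopian-style code, untested; sid(24)/AAPL, the candidate list, the lookback, and the scoring rule are all placeholder choices, and I'm assuming the data.history / order_target_percent built-ins are available in the IDE):

```python
# Quantopian-style sketch (untested). sid(24) is AAPL; candidate lookbacks,
# warmup length, and the scoring rule are arbitrary placeholders.
def initialize(context):
    context.stock = sid(24)
    context.candidates = [10, 20, 30, 50]     # candidate MA lookbacks
    context.best_window = context.candidates[0]
    context.warmup_days = 60                  # learning period: no real trades
    context.day_count = 0


def handle_data(context, data):
    # Assumes daily mode, so handle_data fires once per day.
    context.day_count += 1
    prices = data.history(context.stock, 'price', 100, '1d')

    # Re-score every candidate on the trailing window and keep the best one.
    def trailing_return(window):
        in_market = (prices > prices.rolling(window).mean()).astype(int)
        return (in_market.shift(1) * prices.pct_change()).sum()

    context.best_window = max(context.candidates, key=trailing_return)

    # During the learning period, only update the estimate; don't trade.
    if context.day_count < context.warmup_days:
        return

    above_ma = prices.iloc[-1] > prices.rolling(context.best_window).mean().iloc[-1]
    order_target_percent(context.stock, 1.0 if above_ma else 0.0)
```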

Hi Oskar,

I have had some success optimizing zipline algorithms using hyperopt. This is not in a walk-forward way, however. The problem is that you need to be able to run an algorithm over a certain period, then rewind, change the parameter, and test it again over that period, and do that going forward. I think zipline algos are picklable, so that's a possibility.
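The basic pattern looks roughly like this (a simplified sketch rather than my actual notebook; a toy quadratic stands in for the backtest so the snippet runs on its own):

```python
# Re-run a full backtest for each candidate parameter and let hyperopt's
# TPE search pick the next point to try.
from hyperopt import fmin, tpe, hp


def run_backtest(ma_window):
    """Placeholder for running the zipline algo with this lookback and
    returning its Sharpe ratio; a toy quadratic stands in here."""
    return -(ma_window - 30) ** 2 / 1000.0


def objective(params):
    # hyperopt minimizes, so hand back the negative Sharpe ratio.
    return -run_backtest(int(params['ma_window']))


space = {'ma_window': hp.quniform('ma_window', 5, 100, 1)}
best = fmin(fn=objective, space=space, algo=tpe.suggest, max_evals=50)
print(best)
```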

Neither of these is possible on Quantopian currently, but I hope it will be some day...

Thomas

Thomas,

If I were to write this outside of Quantopian, how difficult would it be to have it live traded? I wouldn't be able to use the neat features of Quantopian anymore obviously...

Hi Oskar,

You could optimize it outside and use the parameters in your live-trading algo. That's probably the only way until we integrate it.

Thomas

Hello Oskar,

An option may be to use IBpy ( https://code.google.com/p/ibpy/ ) to trade a live Interactive Brokers account. This is very new to me (i.e. the last couple of hours), but I have installed a local version of Trader Workstation with API access enabled and installed IBpy. TWS can now (sometimes!) accept connections on 'localhost' from a Python application using IBpy.

Tom S. posts here occasionally so he may be able to advise. He has written about it here: https://www.leinenbock.com/market-data-feed-from-interactive-brokers/

P.

Thanks for the advice guys! I'm thinking I will optimize it manually (perhaps run a couple backtests with various settings or do it outside Quantopian).

Hello Oskar,

The topic of walk-forward optimization hasn't gained much traction on Quantopian, but it would be interesting to see an example. Basically, a fitness function needs to be written. Then, a set of parameters (e.g. the portfolio state) would be periodically adjusted to maximize the fitness function. I have it on my list of things to try, since Python has a bunch of canned optimization routines (e.g. http://docs.scipy.org/doc/scipy/reference/optimize.html).
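For example, something along these lines with scipy (a toy sketch on synthetic returns, treating the trailing Sharpe ratio as the fitness function and the weights as the adjustable portfolio state):

```python
# Toy sketch: scipy's SLSQP adjusts the portfolio weights to maximize a
# trailing Sharpe-like fitness. The returns matrix is synthetic.
import numpy as np
from scipy.optimize import minimize

np.random.seed(0)
returns = np.random.normal(0.0005, 0.01, size=(252, 5))   # fake daily returns


def neg_fitness(weights):
    port = returns.dot(weights)
    return -port.mean() / port.std()          # negative (un-annualized) Sharpe


n = returns.shape[1]
result = minimize(
    neg_fitness,
    x0=np.ones(n) / n,                        # start from equal weights
    bounds=[(0, 1)] * n,                      # long-only
    constraints=[{'type': 'eq', 'fun': lambda w: w.sum() - 1}],  # fully invested
    method='SLSQP',
)
print(result.x, -result.fun)
```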

Grant

Hi Grant,

Note that a lot of those optimization routines require evaluation of the gradient (fmin and fmin_powell are gradient-free), which is often difficult or impossible to compute. Here is a neat paper by Moody about using reinforcement learning to optimize a specific algorithm (would love to implement that on Q): http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.1.7210&rep=rep1&type=pdf

I have some shelved work where I use hyperopt to optimize an algorithm (not walk-forward though) which I should get back to soon.

Thomas

Thanks Thomas,

I was thinking of trying one of the global routines, such as simulated annealing.

Grant

Hi Grant,

Yep, that makes a lot of sense. Let me know if you have any interest in collaborating on using hyperopt (which also has simulated annealing and can run in parallel). I have the prototype code working already.
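For the annealing case, the swap is just the algo argument (a sketch only, with a toy objective standing in for the backtest; the parallel setup goes through hyperopt's MongoTrials and a running mongod, which I've left out here):

```python
# Same fmin loop as before, but with hyperopt's annealing search.
from hyperopt import fmin, hp, anneal


def objective(params):
    # Stand-in objective; in practice this would run a backtest and return
    # a negative Sharpe ratio for the given lookback.
    window = params['ma_window']
    return (window - 30) ** 2 / 1000.0


space = {'ma_window': hp.quniform('ma_window', 5, 100, 1)}
best = fmin(fn=objective, space=space, algo=anneal.suggest, max_evals=100)
print(best)
```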

Thomas

Thanks Thomas,

Is hyperopt available w/ Quantopian? Or would I need to run zipline?

Grant

Someday :). For now this is zipline specific. I went back to this today and recent changes in hyperopt actually make this much easier.

I'll upload the IPy NB soon for you to take a look; I really think this is the way to go for optimizing trading algos.

Does hyperopt have any documentation/examples? --Grant

Hello Thomas,

For me, initially, it would be more convenient to work on an example in Quantopian, rather than offline using zipline. In the back of my mind, I've been trying to conjure up an approach for a fitness function. It would need to capture the projected risk-reward for the next period in the backtest (i.e. project forward in time to the next minute/day), and re-allocate accordingly. Also, to be realistic for individual/retail trading, the max. portfolio size at any given time needs to be around 5 securities (although the universe could be much larger). I could be mistaken, but it seems that if the portfolio is much larger than 5 securities, then you are basically trying to set up your own actively managed mutual fund/etf and would be better off just putting the money into a few passively managed index funds/etfs.

Grant

Hi Grant,

Not sure why you need to project forward in time for this? Can you elaborate?

An alternative would be to try each new parameter on a different set of data as the backtest goes forward in time. That's a similar idea to stochastic gradient descent which also uses different samples for each training step.

Of course the issue here is that your objective function changes at each evaluation, which can make your optimization go haywire, so you need enough representative data in each step. It could work in minute mode, where you probably have enough data that you could e.g. try a different parameter each day. Then feed the Sharpe ratio (or alpha or whatever) of that day into the optimization algorithm as the objective function and evaluate the next point.
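The bookkeeping could be as simple as the sketch below (my rough sketch, not tested; an epsilon-greedy search over a small discrete grid stands in for a real optimizer):

```python
# One candidate parameter per day, scored by that day's Sharpe ratio.
import random
from collections import defaultdict

candidates = [10, 20, 30, 50, 80]      # candidate lookback windows
scores = defaultdict(list)             # window -> list of daily Sharpe ratios
epsilon = 0.2                          # exploration rate


def propose_parameter():
    """Pick today's lookback: usually the best so far, sometimes a random one."""
    if not scores or random.random() < epsilon:
        return random.choice(candidates)
    return max(scores, key=lambda w: sum(scores[w]) / len(scores[w]))


def record_result(window, daily_sharpe):
    """Feed the day's objective value back into the search."""
    scores[window].append(daily_sharpe)
```

Inside the algorithm, propose_parameter() would run at the start of each trading day and record_result() at the close, with daily_sharpe computed from that day's minute-level returns.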

Thomas

Thanks Thomas,

Perhaps "project forward in time" is the wrong phrase, but basically the way I think about it is that at t = 0 (with access to a trailing window), the objective function needs to be minimized, so that at t = 1 tic, a correction in the direction of "goodness" has been applied. So, the decision made at t = 0 is, in effect, projecting forward. It's like holding onto a taught rope in the dark...when you are hanging onto the rope with just your finger tips, your next step needs to be closer to the rope. And if you feel the rope rubbing against your side, it is time to step away from the rope. So, your interaction with the rope predicts the best next move. So, if the market had a hope we could hang onto...

Grant

Thomas,

Here's a reference on simulated annealing:

Simulated annealing for complex portfolio selection problems

On Google Scholar, it was cited 171 times, so there must be other relevant papers, too.

I'll have a look when I get the chance.

Grant

Hi Thomas,

I ain't no expert, but I gather that a class of problems in trading optimization falls under the heading of quadratic programming (http://en.wikipedia.org/wiki/Quadratic_programming). I believe the OLMAR algorithm is one, right? In the case of OLMAR, there happened to be a solution for the case of an inequality constraint, but I gather that this is not always the case. One approach we could take would be to solve the OLMAR problem as formulated, but use an iterative solver (e.g. simulated annealing, genetic algorithm, particle swarm, etc.). Then, we could work on adding in additional constraints that could be handled only by the iterative approach.
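For instance, the simplex-projection step that shows up in OLMAR-style weight updates can be posed as a small QP and handed to a general constrained solver (a toy sketch using scipy's SLSQP; OLMAR itself has a closed-form projection, and the point here is just that extra constraints could be bolted on):

```python
# Project a raw weight update onto the long-only, fully-invested simplex:
# a small quadratic program, solved with a general iterative method.
import numpy as np
from scipy.optimize import minimize

raw = np.array([0.5, -0.2, 0.4, 0.6])         # unconstrained weight update


def sq_distance(w):
    return np.sum((w - raw) ** 2)              # quadratic objective


n = len(raw)
result = minimize(
    sq_distance,
    x0=np.ones(n) / n,
    bounds=[(0, 1)] * n,                       # long-only
    constraints=[{'type': 'eq', 'fun': lambda w: w.sum() - 1}],  # sum to one
    method='SLSQP',
)
print(result.x)                                # projected portfolio weights
```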

Grant

Hello Thomas,

Another brief note...it'd be interesting to consider how to carry out computationally intensive work outside of Quantopian and then automatically (e.g. daily) feed the result via fetcher to an algorithm.
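On the algorithm side, I imagine the fetcher piece would look something like the sketch below (untested; the URL and column names are made up, and I'm assuming the fetch_csv / data.current access pattern):

```python
# Quantopian-only sketch: the heavy optimization runs offline and publishes a
# small CSV; the algorithm just reads today's value.
def initialize(context):
    fetch_csv('https://example.com/daily_params.csv',   # hypothetical URL
              date_column='date',
              symbol='params')                           # handle for the feed


def handle_data(context, data):
    # Today's externally optimized lookback, as published in the CSV.
    window = data.current('params', 'best_window')
    # ... use `window` in the trading logic ...
```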

Grant

Hi Thomas, can you share the IPy NB for using zipline with hyperopt? I saw your zipline optimization video and am really interested in trying it out. Thanks!

Hi guys,

This is still very experimental, but this is how far I got a while back: http://nbviewer.ipython.org/gist/anonymous/8519035
It seemed to basically work, but some other stuff took priority.

Grant: That's a pretty neat idea. With the new zipline API this should be even easier.

Hello Thomas,

What do you mean by the "new zipline API"?

Grant

Hi Grant,

Sorry for just dropping that in there; I somehow assumed I had mentioned this.

I'm in the process of changing the zipline API to be more compatible with what you find on Quantopian. That way, you will be able to write an algorithm in zipline, debug it, etc., copy and paste it to Quantopian, and run it with minimal changes.

Here is an early merge of some basics: https://github.com/quantopian/zipline/commit/b69590a2f709c70dd14d817d1a6bee0b1bb0e7b0 and an example: https://github.com/quantopian/zipline/blob/master/zipline/examples/quantopian_buy_apple.py
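For reference, the style is roughly the following (modeled from memory on the buy-apple example linked above, so details may differ between zipline versions):

```python
# Quantopian-compatible zipline style: initialize/handle_data plus api calls.
from zipline.api import order, record, symbol


def initialize(context):
    pass


def handle_data(context, data):
    order(symbol('AAPL'), 10)                   # buy 10 shares every bar
    record(AAPL=data[symbol('AAPL')].price)     # log the price for plotting
```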

Hi Everyone,

I'm curious if Hyperopt is integrated into the IDE or if there's an updated example.

Cheers,

Amir

Hi Amir,

We're currently doing some research with Whetlab.com on this but it's still early days and there's no ETA on when we'll integrate that into the IDE. It's a large (but exciting!) project.

Thomas

Hi Thomas,

This sounds great. I look forward to learning more.

Cheers,

Amir

Optimizing parameters is not the end of it. You can optimize anything if you know the gradient descent algorithm, which is pretty simple (a minimal sketch follows after these points). Here are the challenges, however.

1) You could get stuck in a local optimum. You need to run the optimizer multiple times, from different starting points, to increase the probability of arriving at the global optimum. It's a challenge if your cost function (whose shape you don't know) doesn't behave smoothly.

2) Even assuming you have found the global optimum, there is no guarantee that the future will behave like the past.
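Here is a minimal sketch of both points: plain gradient descent with a numerical gradient, restarted from several random starting points on a toy multi-modal cost function.

```python
# Gradient descent with a numerical gradient and random restarts.
import math
import random


def cost(x):
    return x ** 2 + 3 * math.sin(3 * x)         # smooth but multi-modal


def numeric_grad(f, x, eps=1e-6):
    return (f(x + eps) - f(x - eps)) / (2 * eps)


def gradient_descent(f, x0, lr=0.01, steps=2000):
    x = x0
    for _ in range(steps):
        x -= lr * numeric_grad(f, x)
    return x


random.seed(0)
restarts = [gradient_descent(cost, random.uniform(-5, 5)) for _ in range(10)]
best = min(restarts, key=cost)
print(best, cost(best))
```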