How to do global optimization

How do I optimize globally across the entire data set, save the optimal values/table, and use them in a backtest? Short of cutting and pasting those values in and out of the logs, that is.

Say, for example, I am calculating the Shiller PE and want to do cash/stock asset allocation using the Shiller PE level.
It doesn't make sense to "optimize" the relationship between the Shiller PE and the asset allocation percentage based on just the trailing data as the backtest progresses, because this sort of fundamental optimization requires a long data set.

I am wondering how to optimize asset allocation ratios across an entire data set, maximising total returns relative to a list of fundamental ratios/fields, then save that relationship/table (Shiller PE ratio vs. asset allocation ratio) and use it during an actual backtest in different algorithms.
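To make the idea concrete, here is a minimal sketch of one way this could work: grid-search, over the full history, the stock weight that maximises average blended return within each Shiller PE bucket, then save the resulting table to a file that a separate backtest could load. All data and names here are hypothetical placeholders, not Quantopian APIs.

```python
import json

# Hypothetical monthly history: (shiller_pe, next_month_stock_return, cash_return).
history = [
    (12.0, 0.020, 0.003), (15.0, 0.015, 0.003), (18.0, 0.010, 0.003),
    (22.0, 0.005, 0.003), (27.0, -0.002, 0.003), (33.0, -0.010, 0.003),
]

# PE buckets and the candidate stock weights to grid-search over.
pe_buckets = [(0, 15), (15, 25), (25, 100)]
candidate_weights = [i / 10 for i in range(11)]  # 0.0, 0.1, ..., 1.0

def bucket_of(pe):
    for lo, hi in pe_buckets:
        if lo <= pe < hi:
            return (lo, hi)
    raise ValueError(pe)

# For each PE bucket, pick the stock weight that maximises the average
# blended return of the observations falling in that bucket.
table = {}
for b in pe_buckets:
    obs = [(s, c) for pe, s, c in history if bucket_of(pe) == b]
    best = max(candidate_weights,
               key=lambda w: sum(w * s + (1 - w) * c for s, c in obs) / len(obs))
    table[str(b)] = best

# Save the optimized table so a separate backtest can load it,
# instead of cutting and pasting values out of the logs.
with open("pe_allocation_table.json", "w") as f:
    json.dump(table, f)
```

The backtest would then just `json.load` the table and look up the current PE bucket each rebalance, with no re-optimization during the run.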

Sarvi

3 responses

Disclaimer: this does not address the question "how to do global optimization" but rather "should I do global optimization".

Imho, doing the kind of "global optimization" you are proposing is very dangerous. It would basically mean that during your backtest you assume knowledge you don't actually have at that point in time (i.e. you assume some kind of predictability, or a filtration which is too fine).

Furthermore, as far as I know, optimizing asset allocation weights directly is not very common. You would rather build some kind of factor model (and fit it using Fama-MacBeth regression), then estimate future returns based on these factors (forecasts?) and plug them into some kind of portfolio optimizer (Markowitz, Black-Litterman, etc.).
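For reference, the second (cross-sectional) stage of Fama-MacBeth can be sketched in a few lines: run one cross-sectional OLS of returns on characteristics per period, then average the per-period slopes and take their standard error across time. The panel below is simulated, with assumed "true" premia, purely to illustrate the mechanics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy panel: T periods, N assets, K firm characteristics (e.g. value, size).
T, N, K = 120, 50, 2
exposures = rng.normal(size=(T, N, K))        # characteristic values per period
true_premia = np.array([0.5, -0.2])           # assumed "true" factor premia
returns = exposures @ true_premia + rng.normal(scale=0.1, size=(T, N))

# Fama-MacBeth second stage: one cross-sectional OLS per period,
# then average the per-period slope estimates over time.
gammas = np.empty((T, K))
for t in range(T):
    X = np.column_stack([np.ones(N), exposures[t]])  # intercept + characteristics
    coef, *_ = np.linalg.lstsq(X, returns[t], rcond=None)
    gammas[t] = coef[1:]                             # drop the intercept

premia_hat = gammas.mean(axis=0)                     # estimated factor premia
premia_se = gammas.std(axis=0, ddof=1) / np.sqrt(T)  # Fama-MacBeth standard errors
```

The estimated premia can then feed a forecast of expected returns per asset, which in turn goes into the portfolio optimizer of your choice.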

But again, this is just my opinion. I am very biased towards starting from an economic idea and then testing it, instead of doing data mining and trying to fudge a story that explains the results afterwards.

Very interested in what others have to say. Btw, this is a good read on the topic: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2308659

You make a very good point about global optimization creating a look-ahead bias, and it's not something I haven't considered.
But here is the dilemma: any trading algorithm that works is, in essence, something that comes out of a statistical analysis of historical patterns, or something we have observed and hence try to exploit.

When, for example, Robert Shiller claims that the Shiller PE ratio at its extremes increases/decreases the risk/reward ratio of stock investments, he has done a statistical analysis of historical data and draws his conclusion from there.

Now, if I take the extremes of the historical Shiller PE and, say, manually assign risk levels and include them as part of the algorithm, is that look-ahead bias? Yes. But then does it make for a bad investment strategy/algorithm?

What I am trying to do here is to avoid a manual risk assignment, and instead evaluate it statistically and use it as a feature (amongst others) that contributes to the overall risk of the investment.

Sarvi

I completely understand your point. Of course, most systematic strategies analyse the past and try to make their decisions based upon the information they can extract, and I don't see a problem there as long as you do out-of-sample testing. However, by using the entire dataset for your optimization, you don't have any data left for an out-of-sample test, unless you produce "new" data using some kind of resampling approach (which again is another topic ;-))
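One standard middle ground between full-sample optimization and no optimization at all is walk-forward testing: fit the parameter on a trailing in-sample window, then score it only on the window that follows. A minimal sketch, with a toy series and a placeholder objective (the function and parameter names are hypothetical, not from any library):

```python
# Walk-forward evaluation sketch: optimize on an in-sample window,
# then score the chosen parameter only on the next out-of-sample window.
data = list(range(100))  # stand-in for a return series

def score(segment, param):
    # Toy objective; a real one would compute strategy returns.
    return sum(segment) * param - param ** 2

def fit(train):
    # Hypothetical optimization: pick the parameter that scored best in-sample.
    return max((1, 2, 3), key=lambda p: score(train, p))

train_size, test_size = 30, 10
oos_scores = []
for start in range(0, len(data) - train_size - test_size + 1, test_size):
    train = data[start:start + train_size]
    test = data[start + train_size:start + train_size + test_size]
    param = fit(train)                      # uses only past data
    oos_scores.append(score(test, param))   # evaluated strictly out of sample

# oos_scores now holds one genuinely out-of-sample result per window.
```

The aggregate of `oos_scores` is what you would judge the strategy on; the in-sample fits are only a means of choosing parameters without peeking ahead.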