MinVar etc using scipy.optimize.minimize SLSQP leads to out of bounds solution

SciPy repository. So, if you are using this to optimise portfolio weights, check that negative weights are not slipping in when you set the bounds to (0, 1).
Looking for solutions; I may have to adjust the algo, redefine the fitness variable, and so on. A sketch of the kind of setup I mean is below.

Others MUST have come across this problem here.
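
Here is a minimal sketch of the sort of setup I mean, with placeholder random data (rets, cov, port_var and so on are stand-ins, not my actual algo), showing where the stray negatives turn up:

# A minimal sketch, not the actual algo: minimum-variance weights via SLSQP
# with bounds of (0, 1) and a sum-to-1 constraint. "rets" is placeholder
# random data standing in for the fund returns.
import numpy as np
from scipy.optimize import minimize

n = 30
rng = np.random.default_rng(0)
rets = rng.normal(0.0, 0.01, size=(500, n))      # stand-in daily returns
cov = np.cov(rets, rowvar=False)                 # sample covariance matrix

def port_var(w, cov):
    return w @ cov @ w                           # portfolio variance w' C w

w0 = np.repeat(1.0 / n, n)                       # equal-weight starting point
cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},)
bnds = tuple((0.0, 1.0) for _ in range(n))

res = minimize(port_var, w0, args=(cov,), method='SLSQP',
               bounds=bnds, constraints=cons)

# The check suggested above: tiny negatives can slip past the (0, 1) bounds.
print("smallest weight:", res.x.min())
print("sum of weights: ", res.x.sum())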


The negative amounts are tiny in a 30-stock optimisation over a wide range of bond and equity funds. Probably the best practical solution is to re-scale the weights to a sum of 1 (sketch below). It feels like a horrible fudge, but it is very unlikely to have much effect at all.
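
Something along these lines, assuming weights is the vector the optimiser hands back (res.x in the sketch above):

# Sketch of the re-scaling fudge: clip the tiny negatives to zero and
# renormalise so the weights sum to 1. "weights" is assumed to be the
# optimiser's output vector (res.x in the earlier sketch).
import numpy as np

weights = np.clip(res.x, 0.0, None)   # zero out the small negative weights
weights = weights / weights.sum()     # re-scale the vector to sum to 1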

Two obvious possibilities: first, you've set the accuracy (I think it's the acc parameter) to be insufficiently granular; for example, with an accuracy of one decimal place a value of -0.049 is technically close enough to be treated as zero. Second, the optimisation has run out of iterations at an infeasible solution. But to be honest, Stack Overflow or quant.stackexchange.com are better places for asking generic Python questions.

Thanks, I'll go through it line by line in my IDE. I mentioned it here since the SLSQP method of scipy.optimize.minimize is widely used in algos here on Q.
Happily I was able to download the weighting vectors into Excel, which makes it a great deal easier to spot this sort of thing. It is much more difficult in the Q development environment, since no downloads are allowed.

The errors are minimal; nonetheless, they may well account for the fact that people here have had to turn off the error-catching mechanisms in their algos (inability to reach convergence, etc.).

minimize has a default of 100 iterations for SLSQP, so I'll try increasing that:

options={'eps': 1.4901161193847656e-08, 'maxiter': 300, 'ftol': 1e-09})
-0.00008362023796091800000000
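
For reference, those options slot into the minimize call like this (same placeholder setup as the sketch near the top of the thread, not the actual algo):

# Same placeholder setup as the earlier sketch; only the options change.
res = minimize(port_var, w0, args=(cov,), method='SLSQP',
               bounds=bnds, constraints=cons,
               options={'eps': 1.4901161193847656e-08,
                        'maxiter': 300,
                        'ftol': 1e-09})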

No, neither tightening the tolerance nor increasing the max iterations improves the situation. Since the negatives seen in the weight vectors are of this sort of order, the common-sense approach seems to be to zero the negatives and rebalance the vector sum to 1.
Again, I only report the matter here since scipy.optimize.minimize has been widely used for portfolio optimisation here on Quantopian. Perhaps CVXOPT or another optimiser will solve the problem; if so, I will post again.
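
For anyone who wants to try it, here is an untested sketch of the same minimum-variance problem posed as a QP in CVXOPT (reusing the placeholder cov and n from the sketch near the top of the thread):

# Untested sketch: minimum-variance as a quadratic programme in CVXOPT.
# Interior-point QP solvers typically honour w >= 0 and sum(w) = 1 to a
# much tighter tolerance than SLSQP. Uses the placeholder cov and n above.
import numpy as np
from cvxopt import matrix, solvers

P = matrix(cov)                    # quadratic term: minimise (1/2) w' P w
q = matrix(np.zeros((n, 1)))       # no linear term for pure min-variance
G = matrix(-np.eye(n))             # -w <= 0, i.e. w >= 0
h = matrix(np.zeros((n, 1)))
A = matrix(np.ones((1, n)))        # equality constraint: weights sum to 1
b = matrix(1.0)

sol = solvers.qp(P, q, G, h, A, b)
w_cvx = np.array(sol['x']).ravel()
print("smallest weight:", w_cvx.min(), " sum:", w_cvx.sum())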

Personally I've never had any problems using that function. But then I tend not to use vanilla Markowitz because of its known problems. My preferred alternatives are Bayesian shrinkage (which makes the optimisation more stable) or non-parametric bootstrapping of the weights produced by multiple calls to .minimize (which means it doesn't matter if some of the solutions are extreme or slightly infeasible).
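
A rough sketch of the bootstrapping idea, reusing the placeholder names (rets, port_var, w0, bnds, cons) from the sketch further up the thread rather than my production code:

# Rough sketch of bootstrapping the weights: resample the return history,
# re-optimise on each resample, and average the resulting weight vectors.
import numpy as np
from scipy.optimize import minimize

def bootstrap_weights(rets, n_boot=100, seed=1):
    rng = np.random.default_rng(seed)
    t = rets.shape[0]
    samples = []
    for _ in range(n_boot):
        idx = rng.integers(0, t, size=t)             # resample rows with replacement
        cov_b = np.cov(rets[idx], rowvar=False)
        res_b = minimize(port_var, w0, args=(cov_b,), method='SLSQP',
                         bounds=bnds, constraints=cons)
        samples.append(res_b.x)
    w = np.clip(np.mean(samples, axis=0), 0.0, None) # extreme/infeasible runs wash out
    return w / w.sum()

print(bootstrap_weights(rets).round(4))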

If you have the money the NAG libraries are the gold standard industrial solution. https://www.nag.co.uk/nag-library-python

Yep.
And there is a decent alternative using MAD here on the forum, which is, however, subject to the same problems since it relies on an optimisation algo as well. I agree with the bootstrapping or brute-force / Monte Carlo approach. I demonstrated that in a spreadsheet on one of the threads here on Q, where the spreadsheet showed that scipy had clearly got the min-var portfolio "wrong".
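
Along the lines of that spreadsheet cross-check, a crude brute-force / Monte Carlo version in Python (placeholder names n, cov, port_var, res again from the sketch near the top of the thread):

# Crude Monte Carlo cross-check: draw lots of random long-only weight vectors
# and see whether any of them beats the optimiser's variance.
import numpy as np

rng = np.random.default_rng(2)
best_var = np.inf
for _ in range(100_000):
    w = rng.random(n)
    w /= w.sum()                       # random long-only portfolio summing to 1
    best_var = min(best_var, w @ cov @ w)

print("Monte Carlo best variance:", best_var)
print("SLSQP variance:           ", port_var(res.x, cov))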

I think the general Markowitz type of approach is excellent when used in conjunction with rebalancing on a periodic basis. And the "errors" (such as they are) are small enough that one can safely zero out the negatives and re-sum the resulting vector to 1. Equally, however, a more "forecast free" approach may arguably be preferable anyway: e.g., relying on the greater stability of variance / standard deviation to optimise a portfolio rather than the dual approach of returns AND variance.

I keep having to remind myself that finance is NOT a science, and that any such asset allocation, including straightforward 60/40 methods, will assuredly not produce the same metrics in forward investment as it has shown in back-testing.

But in an imperfect world we all have to make assumptions, and unless I am "trading" something with a built-in market bias/advantage (collecting insurance premiums by shorting the VIX, playing IPOs, capturing the B/O spread, etc.) I prefer at least an attempt to achieve a half-sensible weighting scheme on a widely diversified portfolio of instruments over any other approach as far as investment is concerned.

Out of interest, the problem with scipy is not restricted to the (0, 1) bounds; it also affects the sum-to-1 constraint, which on a few test runs on a large portfolio has resulted in occasional weight sums as high as 107%.
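
A quick sanity check of the kind I'm now running after each optimisation (w being the weight vector the optimiser hands back, res.x in the earlier sketch):

# Quick sanity check after each optimisation run.
import numpy as np

w = res.x
if not np.isclose(w.sum(), 1.0, atol=1e-6):
    print("sum-to-1 constraint violated: weights sum to %.4f" % w.sum())
if (w < 0).any():
    print("negative weights present, smallest is %.2e" % w.min())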

The answer is probably to use the Critical Line Algorithm for Markowitz, or alternatively to operate a slight fudge (re-summing).

Many thanks for your help and comments.

On another point, as to the advantages or disadvantages of the Markowitz or indeed any other approach to asset allocation, I am ever more acutely aware that there is no single truth or answer, or indeed any agreement in general. Diversification is good (?) (even essential in my view) unless you can predict the future - but even there, there are many people who would dispute that forcibly, particularly among some avid stock pickers and hedge fund managers who tend to load up the truck when they see a good opportunity.

I am putting out a product in conjunction with my computer-scientist partner to let people run, in the cloud, simple tests on a number of different asset allocation techniques. These days I am totally averse to pushing any one approach, or even stating any personal preference for any approach. Were I to write another book (god forbid - the last one was a total waste-of-time vanity project), I would emphasise until black in the face that, as far as we are currently aware, the future is unknowable, and that therefore the only logical way to invest for the long term is to spread the assets over as wide a field as possible so as to diversify over currencies, economies, politics, tax regimes and of course product providers (banks, custodians, fund managers).

I'm only rambling on since, like you, I tend to be asked to speak or write or "do" stuff surrounding investments. I mostly turn such opportunities down, since I believe most people's advice, including my own, is biased and unreliable, and that I would be better off keeping my mouth shut.

In terms of future writing and the cloud-based backtester I am offering, what this boils down to is admitting "I don't know". I am happy to present investors with some of the more apparently sensible and straightforward asset allocation techniques and leave them to make up their own minds whether such approaches have any value or not.

Markowitz may have its problems - well documented, as you rightly say. But investment, and indeed life, is one big problem, and it is a question of trying to steer between Scylla and Charybdis.

Probably about as much as we mere mortals can hope to achieve. Roll on super-intelligence and the singularity.

In the unlikely event that anyone has the slightest interest, this sort of code sorts out the problem:

# Assumes pandas imported as pd elsewhere; "returns" is the returns DataFrame,
# "optimized" is the result returned by minimize, and n is the number of assets.
new_weights = pd.Series({returns.columns[i]: optimized.x[i] for i in range(n)})
new_weights[new_weights < 0.01] = 0.0        # zero out negatives and sub-1% weights
b = new_weights.sum()
new_weights = new_weights / (b + 0.0051)     # re-scale so the sum lands just under 1
return new_weights