maximize sharpe

I wrote a small utility function to calculate the weights of a maximum Sharpe portfolio given the covariance matrix and expected returns. Thought I'd share it with the community. Source: Stack Overflow.

import numpy as np

def max_sharpe(cov, expected_returns):
    # Closed-form tangency portfolio: w is proportional to inv(cov) * mu
    n = len(expected_returns)
    onesT = np.ones((n, 1)).T
    covis = np.linalg.inv(cov)
    p1 = np.dot(onesT, covis)
    p1 = np.dot(p1, expected_returns)
    p2 = np.dot(covis, expected_returns)
    w = p2 / p1
    # Normalize so the absolute weights sum to 1 (gross leverage of 1)
    return np.ravel(w) / np.sum(np.abs(w))
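
For anyone who wants to try it standalone, a quick example; the covariance matrix and expected returns below are made-up numbers, purely for illustration:

import numpy as np

# Made-up inputs: a 3-asset covariance matrix and expected returns
cov = np.array([[0.040, 0.006, 0.012],
                [0.006, 0.090, 0.018],
                [0.012, 0.018, 0.160]])
expected_returns = np.array([0.08, 0.10, 0.12])

w = max_sharpe(cov, expected_returns)
print(w)                   # tangency weights, longs and shorts allowed
print(np.sum(np.abs(w)))   # absolute weights sum to 1.0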
11 responses

@Pravin, Your statistical expertise is always evident in your posts. And, as always, thanks for sharing.

So, maybe you could show us how to use your "max_sharpe" in a sentence...

And, although it might be beneath you to do so, I wonder if you might include a comment next to each line describing, briefly, what the line's intent is?

def max_sharpe(cov, expected_returns):  
    n = len(expected_returns)              # How many securities to process  
    onesT = np.ones((n,1)).T               # ?  
    covis = np.linalg.inv(cov)             # ?  
    p1 = np.dot(onesT, covis)              # ?  
    p1 = np.dot(p1, expected_returns)      # ?  
    p2 = np.dot(covis, expected_returns)   # ?  
    w = p2 / p1                            # ?  
    return np.ravel(w) / np.sum(np.abs(w)) # ?  

Your username is also misleading :P

I also want to know what the purpose of this code is, in what context it would be useful, and a simple backtest with this as an example. Not to demand, but I think it would be nice for those of us who are slower...

As someone whose Python is less than excellent, I'd like some short notes as well.

I could definitely use something like this! But I'd like to understand it a bit better first :)

And thanks for all your contributions!
Time to change your username.

Sorry to come in and add to the demand lol

Thanks. I will post an algorithm with comments shortly. My statistical skills are just as rusty, but I guess I am sometimes brave, foolishly so, to attempt advanced stuff. More often than not my algos are riddled with issues which I realize later as I progress. Hence the name beginner :)

Here is a sample usage with comments. However, leverage shoots up on 1st July and I have contacted Q support to find out why. Otherwise the strategy looks okay.
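
In outline, the wiring looks something like this; the universe, lookback window and rebalance schedule below are placeholder choices, not the exact settings from the backtest, and max_sharpe (with its numpy import) from the first post is assumed to be in the same file:

def initialize(context):
    # Placeholder universe
    context.stocks = symbols('SPY', 'TLT', 'GLD')
    # Rebalance once a month at the open
    schedule_function(rebalance, date_rules.month_start(), time_rules.market_open())

def rebalance(context, data):
    # Estimate inputs from a trailing window of daily prices (placeholder lookback)
    prices = data.history(context.stocks, 'price', 120, '1d')
    rets = prices.pct_change().dropna()
    weights = max_sharpe(rets.cov().values, rets.mean().values)
    for stock, weight in zip(context.stocks, weights):
        order_target_percent(stock, weight)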

Thanks for your efforts. Still too many years of statistical mastery for my tiny brain though.

When I look at code like yours above I think to myself "Here be Magic".

import numpy
import scipy.linalg

def max_sharpe(context, covarianceMatrix, expected_returns):
  securityCount = len(expected_returns)
  # .T transposes the ones column vector into a row
  onesAsColumns = numpy.ones((securityCount, 1)).T
  # Compute the Cholesky decomposition of the covariance matrix...
  choleskyThing = scipy.linalg.cholesky(covarianceMatrix, lower=False)
  # ...and invert via the Cholesky factor by solving against the identity
  inverseCovarianceMatrix = scipy.linalg.cho_solve((choleskyThing, False), numpy.eye(securityCount))
  dotProductOnesXInverseCovMatrix = numpy.dot(onesAsColumns, inverseCovarianceMatrix)
  dotProductOnesXInverseCovMatrixXExpectRets = numpy.dot(dotProductOnesXInverseCovMatrix, expected_returns)
  dotProductInverseCovMatrixXExpectRets = numpy.dot(inverseCovarianceMatrix, expected_returns)
  securityWeights = dotProductInverseCovMatrixXExpectRets / dotProductOnesXInverseCovMatrixXExpectRets
  daisyChainedSecurityWeights = numpy.ravel(securityWeights)
  normalizedSecurityWeights = daisyChainedSecurityWeights / numpy.sum(numpy.abs(securityWeights))
  return normalizedSecurityWeights

It's probably just that statisticians are capable of building these vast maps of referenced terminology, but I find the terseness of most high-level Python code to be like fumbling through a foreign-language pocket book.

a, b c d e f g h i.

Where:
a = Frankly
b = my
c = brain
d = can't
e = handle
f = all
g = of
h = these
i = references

@MarketTech. Now my eyes ache when I see that code. Using terse variables is so much cleaner and closer to the mathematical representation.

Yeah, that's a bit much, I agree. And I'd never go to that extreme. But your comment about it being math is spot on. You guys think of all of this as just mathematical extensions, and I think of it as an obfuscated wizards' language.

@Pravin

An adjustment to that code needs to be made to keep leverage in check. Dividing the weights through by the sum of the magnitudes of the individual weights effectively pins leverage near 1.

@Pravin. Thanks for your code. Still, there is something that looks strange to me: when you normalize your weights, you divide by the sum of the absolute values of the weights. That is fine if you don't allow short positions. However, it will not normalize properly if you have negative weights. I think it is better to just divide by the sum of the weights.
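
A quick numeric illustration of the difference between the two normalizations, using made-up weights:

import numpy as np

w = np.array([1.5, -0.5])       # raw long/short weights

by_abs = w / np.sum(np.abs(w))  # [0.75, -0.25]: gross = 1.0, net = 0.5
by_sum = w / np.sum(w)          # [1.5, -0.5]:   net = 1.0, gross = 2.0

Dividing by the absolute sum caps gross leverage at 1 but leaves net exposure below 1 when there are shorts; dividing by the plain sum makes the weights sum to 1 but lets gross leverage grow, and flips every sign if the sum happens to be negative.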