Min variance optimization - any experts please?

I tried minimum variance optimization using two methods, and they give different weights even though the formulation is the same. How is that possible? Could any optimization experts in the forum please help?

The first method uses cvxopt (I like this more):

import numpy as np
from cvxopt import matrix, solvers

def getWeights(covariance, signal):
    # cvxopt's qp solves: minimize 0.5 * x'Px + q'x  subject to  Ax = b
    P = matrix(covariance)
    (m, m) = np.shape(covariance)
    q = matrix(-signal, (m, 1))      # linear term from the signal
    A = matrix([1.] * m, (1, m))     # weights constrained to sum...
    b = matrix(0., (1, 1))           # ...to zero
    res = solvers.qp(P=P, q=q, A=A, b=b)
    return res['x'], res['status']
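
For reference, cvxopt's solvers.qp minimizes 0.5 * x'Px + q'x subject to Gx <= h and Ax = b, so the sum-to-zero budget goes in through A and b. A minimal usage sketch; the two-asset covariance and signal values below are made up just for illustration, not from my actual data:

import numpy as np

cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])
sig = np.array([0.02, 0.05])

# solvers.options['show_progress'] = False   # optional: silence the iteration log
w, status = getWeights(cov, sig)
print(status)                       # 'optimal' when the solver converges
print(np.array(w).flatten())        # weights, which sum to ~0 because of Ax = b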

The second method uses scipy.optimize.minimize with the COBYLA solver:

import numpy as np
from scipy.optimize import minimize

def getWeights(cov, signal):
    cons = []
    (m, m) = np.shape(cov)
    # COBYLA only takes 'ineq' constraints, so sum(x) == 0 is written as
    # the pair sum(x) >= 0 and -sum(x) >= 0
    cons.append({'type': 'ineq', 'fun': lambda x: np.sum(x)})
    cons.append({'type': 'ineq', 'fun': lambda x: -np.sum(x)})
    # cap each weight at 1
    for i in range(0, m):
        cons.append({'type': 'ineq', 'fun': lambda x, i=i: 1 - x[i]})
    x0 = [1.] * m
    res = minimize(lambda x: np.dot(np.dot(x.T, cov), x) + np.dot(x, -signal), x0,
                   constraints=cons, method='cobyla', options={'maxiter': 20000})
    print(res.message)
    return res.x, res.status
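
As far as I know, COBYLA in scipy only supports 'ineq' constraints, which is why the sum(x) == 0 budget is expressed as the two opposing inequalities above, and it only satisfies them to within a tolerance. A quick sanity check on the COBYLA version, again with made-up two-asset values:

import numpy as np

cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])
sig = np.array([0.02, 0.05])

w, status = getWeights(cov, sig)    # the COBYLA getWeights above
print(w)
print(np.sum(w))                    # close to 0, up to COBYLA's constraint tolerance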
1 response

Fixed it. cvxopt's solvers.qp minimizes 0.5 * x'Px + q'x, so my scipy objective was missing the 0.5 on the quadratic term (and the extra per-weight cap constraints weren't in the cvxopt version either). With those changes, this formulation of scipy minimize gives the same results:

import numpy as np
from scipy.optimize import minimize

def getW(cov, signal):
    cons = []
    (m, m) = np.shape(cov)
    # sum(x) == 0 expressed as two opposing inequalities for COBYLA
    cons.append({'type': 'ineq', 'fun': lambda x: np.sum(x)})
    cons.append({'type': 'ineq', 'fun': lambda x: -np.sum(x)})

    x0 = [1.] * m
    # the 0.5 factor on the quadratic term matches the objective cvxopt's qp minimizes
    res = minimize(lambda x: 0.5 * np.dot(np.dot(x.T, cov), x) + np.dot(x, -signal), x0,
                   constraints=cons, method='cobyla', options={'maxiter': 20000})
    print(res.message)
    return res.x, res.status
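
To double-check, here is a quick comparison on the same made-up inputs (this assumes the cvxopt getWeights from the first snippet is the one in scope, since I reused that name for the COBYLA attempt). The two solvers should now land on nearly the same weights, with the gap set by COBYLA's stopping tolerance rather than anything exact:

import numpy as np

cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])
sig = np.array([0.02, 0.05])

w_qp, _ = getWeights(cov, sig)    # cvxopt version
w_cob, _ = getW(cov, sig)         # fixed COBYLA version

diff = np.abs(np.array(w_qp).flatten() - np.asarray(w_cob))
print(diff.max())                 # should be small; agreement depends on COBYLA's tolerance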