# (Assumes numpy is imported as np; context.stocks, x_tilde, context.eps, and
# context.pct_index are defined elsewhere in the algorithm.)
bnds = []
limits = [0, 1]                      # each weight bounded to [0, 1]
for stock in context.stocks:
    bnds.append(limits)
bnds = tuple(tuple(x) for x in bnds)
cons = ({'type': 'eq',   'fun': lambda x: np.sum(x) - 1.0},                   # weights sum to 1
        {'type': 'ineq', 'fun': lambda x: np.dot(x, x_tilde) - context.eps},  # expected return >= eps
        {'type': 'eq',   'fun': lambda x: x[-1] - context.pct_index})         # fix the inverse-ETF weight
@Peter,
The first equality constraint is simply setting the sum of the portfolio weights to 1.0. The second inequality constraint comes right out of the paper referenced in the code, Section 4.2:
Li, Bin, and Steven C. H. Hoi. "On-Line Portfolio Selection with Moving Average Reversion." The 29th International Conference on Machine Learning (ICML 2012), 2012.
http://icml.cc/2012/papers/168.pdf
The final equality constraint sets the weight of the inverse ETF. (For clarity, it should really come right after the first constraint, since it simply adds another restriction on the weights.)
I'm no expert, but I think that the bounds limit the search space of the optimizer, so they may be treated differently than the constraints. You can try removing the final constraint and instead pinning the bound on the last weight, before bnds is converted to a tuple:
bnds[-1] = [context.pct_index, context.pct_index]
This just says that the weight of the inverse ETF must equal context.pct_index.
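Spelled out with placeholder values (the universe size and pct_index below are made up for illustration), note that the list has to be modified before the tuple conversion:

```python
pct_index = 0.2                      # hypothetical fixed weight for the inverse ETF
n_assets = 4                         # placeholder universe size

bnds = [[0.0, 1.0] for _ in range(n_assets)]
bnds[-1] = [pct_index, pct_index]    # pin the last weight while bnds is still a list
bnds = tuple(tuple(b) for b in bnds)
```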
It is a bit of a mind-bender: what's being minimized is the squared Euclidean distance between the old portfolio weight vector and the new one (the squared Euclidean distance is the objective function, see http://en.wikipedia.org/wiki/Euclidean_distance). So, without the constraints, the optimizer would simply return the old vector unchanged.
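To make that concrete, here's a self-contained sketch of the whole optimization using scipy.optimize.minimize with SLSQP; the old weights, predicted price relatives x_tilde, eps, and pct_index below are made-up placeholders for the algorithm's actual state:

```python
import numpy as np
from scipy.optimize import minimize

x_old = np.array([0.3, 0.3, 0.2, 0.2])        # current weights; last entry is the inverse ETF
x_tilde = np.array([1.02, 0.98, 1.05, 1.00])  # predicted price relatives (placeholder)
eps = 1.0                                     # reversion threshold from the paper
pct_index = 0.2                               # fixed weight for the inverse ETF

# Objective: squared Euclidean distance from the old weight vector
def objective(x):
    return np.sum((x - x_old) ** 2)

bnds = tuple((0.0, 1.0) for _ in x_old)
cons = ({'type': 'eq',   'fun': lambda x: np.sum(x) - 1.0},
        {'type': 'ineq', 'fun': lambda x: np.dot(x, x_tilde) - eps},
        {'type': 'eq',   'fun': lambda x: x[-1] - pct_index})

res = minimize(objective, x_old, method='SLSQP', bounds=bnds, constraints=cons)
```

With the placeholder numbers above, x_old already satisfies every constraint, so the solution stays (essentially) at x_old; the constraints only move the weights when the old vector violates them.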
Again, I claim no expertise, but I think that either CVXOPT or CVXPY would be more appropriate here, since they are formulated for strictly convex objective functions with linear equality and inequality constraints, which is exactly what this problem is.
Hope this helps.
Grant