A Non-Convex Optimization Work Around/Avoiding Local Minima

I made a simple class that runs many optimizations concurrently, each starting from a random point within a specified range. That way, only one or two of the optimizations need to stumble into the basin of the global minimum while the others get stuck in local minima. Example below.

import math
from scipy import optimize
import threading
from threading import Thread
import random


class threaded_opt:

    #initializations  
    def __init__(self, function, min_x0, max_x0, num_threads):

        if len(max_x0) != len(min_x0):  #Error checking
            raise ValueError("max_x0 and min_x0 are not the same length")
        self.function = function  
        self.min_x0 = min_x0  
        self.max_x0 = max_x0  
        self.num_threads = num_threads  
        self.lock = threading.Lock()  
        self.best_result = None  
        self.all_results_of_run = []

    #Run the optimization.  Make the threads here and wait for them all to finish.
    def run(self):
        self.best_result = optimize.minimize(self.function, self.min_x0)  #Get an initial value to compare in target_function
        self.all_results_of_run = []  #Clear the all results list for each run
        threads = []
        for x in range(self.num_threads):  #Make the threads and start them off
            t = Thread(target=self.target_function, args=(x,))
            t.start()
            threads.append(t)
        for t in threads:  #Join so all results are in before run() returns
            t.join()

    #Each thread goes through this.  Find a random point in the space given, and optimize from it  
    def target_function(self, thread_ID):

        random_x0 = []  
        for x in range(len(self.max_x0)):  #Generate a random point in the allowed space  
            random_x0.append(random.uniform(self.min_x0[x], self.max_x0[x]))  
        result = optimize.minimize(self.function, random_x0)  #Optimize from that point  
        with self.lock:  #So the threads don't interfere with each other
            self.all_results_of_run.append(result)  
            if (result.fun < self.best_result.fun):  
                self.best_result = result

    #Returns the proportion of optimizations that found the best result.
    #Note: this compares exact float values, so near-misses don't count.
    def score(self):
        correct = 0
        for x in self.all_results_of_run:
            if x.fun == self.best_result.fun:
                correct += 1
        return float(correct) / self.num_threads


Here, the function being minimized resembles an egg carton that has been bent into a bowl shape. It has an enormous number of local minima. scipy.optimize.minimize() had a lot of trouble with this one, but this class nails it. The global min is at -8.8...

#An example function to evaluate.
#It is a 3D surface that resembles a bowl-shaped egg carton.
def evaluate(box):  
    x = box[0]  
    y = box[1]  
    a = 5  
    b = 0.1

            #Egg Carton                     #Paraboloid  
    return (a*(math.sin(x)+math.cos(y))) + (b*((x**2)+(y**2)))


opt = threaded_opt(evaluate, [-1000, -1000], [1000, 1000], 1000)
opt.run()  
print ("The best result is", opt.best_result.fun)  
print ("Only", 100*opt.score(), "percent of the optimizations were correct")  

It outputs:

The best result is -8.813796845199237
Only 0.3 percent of the optimizations were correct
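As a side note, each result is a scipy OptimizeResult, so the coordinates of the minimum are available too:

print("Found at point", opt.best_result.x)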

Unfortunately, some of the modules needed aren't supported on Quantopian yet, but I hope someone can use it.

EDIT: Added a picture of the function being minimized, along with the code. I would've done this in a notebook, but the 3D graphing modules are also not supported.
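For anyone who wants to reproduce the picture locally, here's a minimal sketch of how the surface could be plotted with matplotlib (assuming a recent matplotlib where the 3D projection is registered automatically; the plot range is just illustrative):

import numpy as np
import matplotlib.pyplot as plt

#Same egg-carton-in-a-bowl surface as evaluate() above
a, b = 5, 0.1
x = np.linspace(-15, 15, 200)
y = np.linspace(-15, 15, 200)
X, Y = np.meshgrid(x, y)
Z = a * (np.sin(X) + np.cos(Y)) + b * (X**2 + Y**2)

fig = plt.figure()
ax = fig.add_subplot(projection='3d')  #Requires matplotlib >= 3.2
ax.plot_surface(X, Y, Z, cmap='viridis')
ax.set_xlabel('x')
ax.set_ylabel('y')
plt.show()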

Hmm, which optimizer is it picking? Couldn't one just use a particle swarm or something if it's not a friendly function?

Simon,
Sorry it took so long; being new, I admittedly had to learn what PSO was. But frankly, yes. I used PySwarm for the particle swarm optimization, and it works extremely well. Had I known about it beforehand, I wouldn't have gone through the trouble of making this.
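For anyone curious, a minimal sketch of what the PySwarm version might look like (assuming the pyswarm package is installed; the swarm size and iteration count here are illustrative, not necessarily what was actually used):

import math
from pyswarm import pso

#Same egg-carton-in-a-bowl function as above
def evaluate(box):
    x, y = box[0], box[1]
    return 5 * (math.sin(x) + math.cos(y)) + 0.1 * (x**2 + y**2)

lb = [-1000, -1000]  #Lower bounds of the search space
ub = [1000, 1000]    #Upper bounds

xopt, fopt = pso(evaluate, lb, ub, swarmsize=200, maxiter=200)
print("The best result is", fopt, "at", xopt)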