When optimizing a trading algorithm, I often run multiple backtests with the same code, changing only one parameter at a time. I then copy the results of each backtest into a spreadsheet and analyze them. I use the correlation function to determine how my parameters affect the results, with the goal of fine-tuning the parameters that produce the best results.
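For context, the spreadsheet step amounts to something like the following. This is a minimal sketch of the correlation analysis; the column names and numbers are made up for illustration:

```python
import pandas as pd

# Hypothetical sweep results: one row per backtest, made-up numbers
results = pd.DataFrame({
    'moving_average_days': [10, 20, 50, 100, 200],
    'total_return_pct':    [4.2, 7.9, 6.1, 3.0, 1.5],
})

# Same idea as a spreadsheet CORREL(): how does the parameter
# relate to the backtest's total return?
corr = results['moving_average_days'].corr(results['total_return_pct'])
print(f"correlation: {corr:.3f}")
```

A strong negative correlation here would suggest shorter moving averages performed better over this sample, which is exactly the signal I look for before narrowing the parameter range.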
After learning more about the Research functionality of Quantopian, I think it would be a great place to automate this process. Let's consider a very simple algorithm that goes long when the price is above a moving average and goes short when the price is below it. In this example, the parameter that needs to be tested is: which moving-average length provides the best results?
Algorithm code:
context.Moving_Average_Days = 20  # set once in initialize()

if stock.price > stock.mavg(context.Moving_Average_Days):
    # Price above the moving average: go long
    order_target_percent(stock, 1)
else:
    # Price below the moving average: go short
    order_target_percent(stock, -1)
Research code:
for Moving_Average_Days in range(2, 201):
    #
    # Run_Backtest(algorithm_ID, start_date, end_date, capital, user_variables)
    #
    Run_Backtest(142342, '2014-01-01', '2014-12-30', 100000, Moving_Average_Days)
The code above would run 199 backtests, each with a different moving average value. Once these backtests were complete, you could use the Research "get_backtest()" function to analyze the results and find the optimal moving average.
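Once the sweep finished, picking the winner would be trivial. A minimal sketch, assuming each job's final return has already been pulled out of its get_backtest() result into a plain dict (the numbers here are hypothetical):

```python
# Hypothetical: parameter value -> final cumulative return of that backtest
# (in practice each value would come from get_backtest(backtest_id))
sweep_results = {
    10: 0.042,
    20: 0.079,
    50: 0.061,
    100: 0.030,
}

# The optimal moving average is simply the parameter with the best return
best_days = max(sweep_results, key=sweep_results.get)
print(f"best moving average: {best_days} days "
      f"({sweep_results[best_days]:.1%} return)")
```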
I realize that allowing people to run hundreds of backtests will add a lot of load to the Quantopian servers. I suggest mitigating this by letting users submit "jobs" in Research and then notifying them by email when the jobs finish (which could be days later). This would allow Quantopian to run Research-initiated backtests only on excess server capacity, minimizing costs.
What do you guys think?