I'm not trying to do anything too complicated: I found what seemed to be a halfway decent alpha factor, plugged it into an algorithm, and backtested it. Unfortunately, while alphalens showed decent returns over the same timeframe, in the backtester the factor failed rather completely. This has happened to me before with other alphas I've tested, so I'm wondering what exactly alphalens does to generate that returns graph, compared to what goes on in the backtest environment. I recognize that the backtest adds constraints and MaximizeAlpha, but I don't think that alone would completely flip the returns, or maybe that is exactly what's happening.
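For reference, my rough understanding of what the alphalens returns graph represents: an idealized, frictionless factor-weighted portfolio, rebalanced every period, where each asset's weight is just its demeaned factor value scaled so the absolute weights sum to one. Something like this sketch (the data here is made up, and this is my approximation of the computation, not alphalens's actual code):

```python
import numpy as np
import pandas as pd

# Toy data: factor values and next-period forward returns for 4 assets over 3 days.
rng = np.random.default_rng(0)
dates = pd.date_range("2020-01-01", periods=3)
assets = ["A", "B", "C", "D"]
factor = pd.DataFrame(rng.normal(size=(3, 4)), index=dates, columns=assets)
fwd_returns = pd.DataFrame(rng.normal(scale=0.01, size=(3, 4)), index=dates, columns=assets)

# Demean each day's factor cross-section, then scale so absolute weights sum
# to 1: a dollar-neutral, fully invested long/short factor portfolio with no
# turnover costs, slippage, or position limits.
demeaned = factor.sub(factor.mean(axis=1), axis=0)
weights = demeaned.div(demeaned.abs().sum(axis=1), axis=0)

# Per-period return of that idealized factor portfolio; this is the kind of
# curve I believe alphalens is plotting.
port_returns = (weights * fwd_returns).sum(axis=1)
print(port_returns)
```

If that's right, the backtest differs in almost every assumption (rebalance timing, universe membership, constraints, costs), which is part of why I'm asking where the divergence usually comes from.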
Basically: are there settings in the backtesting environment that produce a graph similar to what alphalens spits out? Or are there some basic guidelines for spotting factors like this that look alright in research but fail miserably in the algo?
Any help is appreciated.