There is a common belief that very old data may not show patterns similar to today's. In the quant world it is not uncommon to hear that an algorithm which performed very well two years ago is useless today. That argues against backtesting on data that is too old.
On the other hand, there is another school of thought that says to backtest as far back as possible, so as to capture as many rare events as possible, like the May 6, 2010 flash crash. What do you guys think?
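To make the "old data misleads" argument concrete, here is a minimal sketch on synthetic returns (all numbers are made up for illustration): a strategy's edge exists in an early regime and decays in a later one, so the full-history Sharpe can look healthy while a recent-window Sharpe tells a different story. A rolling or walk-forward view like this is one common way people try to see whether an edge has decayed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily strategy returns with a regime change halfway:
# the edge (positive drift) exists only in the first half.
old = rng.normal(0.0008, 0.01, 500)   # regime where the strategy worked
new = rng.normal(-0.0002, 0.01, 500)  # regime where the edge has decayed
returns = np.concatenate([old, new])

def annualized_sharpe(r):
    """Annualized Sharpe ratio assuming ~252 trading days per year."""
    return np.sqrt(252) * r.mean() / r.std()

# Rolling one-year Sharpe: a walk-forward view of strategy health.
window = 250
rolling = np.array([annualized_sharpe(returns[i - window:i])
                    for i in range(window, len(returns) + 1)])

print(f"Sharpe over full history : {annualized_sharpe(returns):.2f}")
print(f"Sharpe over last {window} days: {annualized_sharpe(returns[-window:]):.2f}")
```

The counterargument is that the shorter your sample, the fewer tail events (like the flash crash) it contains, so neither window length is obviously right.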