@Mattias, interesting observations.
Over-fitting, in this case, might be a strong word. The program's code was not altered, nor its trading logic.
The intention was to appraise that aspect at a later time. Once I have modified the code, maybe I will encounter something that says it is a “misfit” for the task required. For now, I do not know whether it is good or bad.
I went through my usual initial exploration phase for someone else's program, where I apply functions to the strategy's payoff matrix to see if it is scalable and sustainable. Most often, if a strategy cannot scale up and last for 10 years or more, I rapidly lose interest.
What I did was relatively simple: see it as applying a guiding function \( k(t) \) to the inventory matrix, turning the payoff into \( \sum (k(t) \cdot H \cdot \Delta P) \) with a positive impact on the outcome. Such modifications are available to anyone. It is where you preset part of your trading strategy's behavior.
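As a rough illustration of that kind of modification (not the actual program; every array below is a made-up placeholder), here is a minimal Python sketch of what scaling the inventory matrix \( H \) by a preset control \( k(t) \) does to the payoff sum:

```python
# Illustrative sketch only: H holds the inventory (shares held per period),
# dP the period-to-period price changes, and k(t) a preset control that
# scales the inventory decisions over time. All values are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_periods = 2_520                       # roughly 10 years of daily bars
dP = rng.normal(0.02, 1.0, n_periods)   # hypothetical price changes
H = rng.integers(0, 100, n_periods)     # hypothetical inventory per period

k = np.linspace(1.0, 1.5, n_periods)    # example control: gradually scale up exposure

base_payoff = np.sum(H * dP)            # sum(H * dP): the strategy as-is
boosted_payoff = np.sum(k * H * dP)     # sum(k(t) * H * dP): with the preset control

print(base_payoff, boosted_payoff)
```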
I have not touched the code yet. Nonetheless, I do not expect the future to be like the past, and therefore, the strategy still has to prove itself going forward. At least, it showed that “under its set of conditions and trading procedures” it managed to outperform over those 10.25 years while executing more than 4,000 trades in the process. That is not negligible.
Will those “conditions” be the same going forward? As you know, probably not. However, without having reviewed the code, I would be hard-pressed to answer. Still, I do expect the strategy to follow its code and behave in about the same manner as it did in the past, that is, generate about the same number of trades per time interval, pushing the profit growth rate toward a constant: \(\Delta (n \cdot \bar{x}) / \Delta t \to \text{constant}\). And because of this linear growth on a compounding capital base, we will see a return degradation going forward. Measures will be needed to overcome this inherent alpha decay, but that is part of the code-modification phase.
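To put a number on that degradation, here is a small sketch with purely hypothetical figures, assuming the strategy keeps producing a near-constant number of trades per year at a near-constant average net profit per trade: dollar profits then grow linearly while the capital base grows, so the percentage return shrinks year after year.

```python
# Sketch of the return-degradation argument. The trade count roughly matches
# the ~4,000 trades over 10 years mentioned above; the average profit per
# trade and starting capital are hypothetical placeholders.
initial_capital = 100_000.0
trades_per_year = 400            # ~4,000 trades over 10 years
avg_profit_per_trade = 250.0     # hypothetical constant x-bar

equity = initial_capital
for year in range(1, 21):
    yearly_profit = trades_per_year * avg_profit_per_trade  # linear, not compounding
    growth_rate = yearly_profit / equity                     # return on a growing base
    equity += yearly_profit
    if year in (1, 5, 10, 20):
        print(f"year {year:2d}: return on equity = {growth_rate:6.2%}")
```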
This exercise makes you wonder about the value of what is being missed by not exploring the limits of a trading strategy: what might these controls \( k(t) \) really be worth?
This could be viewed as an opportunity cost for not having pursued those limits, barriers, or what have you. And it could be expressed, explicitly, as having value:
$$ \sum (k(t) \cdot H \cdot \Delta P) - \sum (H \cdot \Delta P) $$
You could take what you or I showed as performance improvements and compare that to the original script's payoff matrix to get an idea of these opportunity costs. This, just for a few numbers in a program that we did not even design ourselves.
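A minimal sketch of that comparison, again with placeholder arrays since we only have the formula above and not the actual backtest records:

```python
# Hedged sketch of the opportunity-cost expression: the difference between the
# controlled payoff sum(k(t)*H*dP) and the original sum(H*dP). In practice the
# arrays would come from the backtest's records; here they are made up.
import numpy as np

def opportunity_cost(k, H, dP):
    """Return sum(k*H*dP) - sum(H*dP) for equal-length 1-D arrays."""
    k, H, dP = map(np.asarray, (k, H, dP))
    return float(np.sum(k * H * dP) - np.sum(H * dP))

# toy usage with made-up numbers
rng = np.random.default_rng(1)
dP = rng.normal(0.02, 1.0, 2_520)
H = rng.integers(0, 100, 2_520)
k = np.full(2_520, 1.3)              # e.g. a flat 30% scale-up of the inventory
print(opportunity_cost(k, H, dP))
```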
Thanks to @Maxim for updating the strategy, @Michelle for removing the deprecated stuff, and @Naoki for originally putting it out for all to play with and explore its possibilities. There is more that can be done, but that will require reading and understanding the code, the core of the program, which could lead to something “over-fitted”, or maybe not.
I find that term totally over-rated. You know the future will be different and that, de facto, your trading strategy will process that new data differently. Yet people insist that their trading strategy, which will stay the same, will somehow behave differently, in spite of knowing that the data will change.
If I had to use such a term, I would say that the original trading strategy was mostly under-fitted since so much more could have been extracted with relative ease, and by modifying just a few numbers at that.