Deep Learning Price Action Lab (DLPAL) Software

Has anyone here reviewed, tested, or used Michael Harris's software "Deep Learning Price Action Lab" (DLPAL)?

Apparently it is intended to provide insight into any potentially tradable price-action patterns and then to generate the appropriate code for a number of platforms (WealthLab, MultiCharts, AmiBroker and Quantopian), allowing the user to build their own systems around whatever patterns DLPAL might discover in the input data.

So far I have only read some of the background / semi-promotional material by the author, Michael Harris, but it looks potentially interesting, especially as one of the output streams is written explicitly for use in Quantopian, or so the author claims. I hope that someone here at Q might be able to provide some further info about it, beyond what is generally available on the Internet.


Hi Tony,

Michael Harris and variations of his software have been around for a long time now. His focus has revolved and evolved around pure price-action pattern recognition using various techniques over time: genetic algorithms/processes, evolutionary algorithms, neural networks and now deep learning. Pretty much the gamut of whatever was hot or bleeding edge at the time, all attacking the prediction problem by exploiting nonlinear relationships of prices over time. As you and I are probably the senior citizens of this forum and veterans of the financial markets, I know you can relate to his evolution. While the majority of traders / analysts are still stuck in the linear world, it is refreshing to see nonlinear, non-stationary practitioners like him approach the prediction problem outside the box but in a pragmatic manner.

I have tested and previewed his software on numerous occasions as I'm on his mailing list. His basic algo is a search algo that finds interrelationships between prices over time, i.e. Close today vs. High two days ago, Low today vs. Close three days ago, etc., to see whether certain price patterns repeat themselves into the future under certain (defined) market conditions. Quite frankly, the results are quite impressive, but like most trading systems developed, including mine, the main challenge is still overcoming the curse of nonstationarity (regime changes). I know we've discussed this before: if we were able to identify the regime we're in (regime being mean-reverting, trending or random), it would be easy and profitable to trade a system correctly with significant accuracy. This is the reason why I've shifted my research efforts to studying regime changes. I have encountered some studies that use Hidden Markov Models to identify volatility regimes and others that employ Fuzzy Logic to differentiate regimes.
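To make the volatility-regime idea concrete, here is a minimal sketch (my own illustration, not DLPAL's method): fit a two-state Gaussian HMM to daily returns with the hmmlearn package and label each day with its most likely state. The `close` variable, the two-state choice and the iteration count are all assumptions for illustration only.

```python
import numpy as np
import pandas as pd
from hmmlearn.hmm import GaussianHMM

def label_volatility_regimes(close, n_states=2, seed=42):
    # Daily log returns as the single observed feature
    returns = np.log(close / close.shift(1)).dropna()
    X = returns.values.reshape(-1, 1)            # hmmlearn expects 2-D input

    model = GaussianHMM(n_components=n_states, covariance_type="full",
                        n_iter=200, random_state=seed)
    model.fit(X)
    states = model.predict(X)                    # most likely hidden state per day

    # Re-order states by fitted variance so 0 = calmest, n-1 = most volatile
    order = np.argsort(model.covars_.ravel())
    remap = {old: new for new, old in enumerate(order)}
    return pd.Series([remap[s] for s in states], index=returns.index)
```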

Cheers, Tony! Nice to see you're still active with Q!

Hi James, thanks for your post & friendly greeting. Yes, I'm still very much busy researching & trading, but I tend to wander away from time to time to do other things and then come back to Q again to touch base with fellow "veterans & senior citizens" here :)) Cheers!

I can certainly relate to the general evolution from linear to non-linear, NNs, GA, GP, ML, new-generation NNs & DL, but despite getting computationally more "clever" over time, the practical question of course remains whether anything useful that is found will actually be robust enough to persist into the future .... or not?

We are obviously still very much "on the same page" here, and I would like to share / update some of my ideas with you. Like yourself, I also remain convinced that understanding market regimes is an excellent way to proceed, although with the caveat that volatility is only one of the possible aspects of "market regime", and perhaps not the most critical one.

I certainly have the notion, which I think we share, that characteristics of market behavior do tend to have some form of persistence. Intuitively I think this is right, although I have not been able to prove it rigorously because it is not as simple as, for example, measuring serial correlation. But, as Al Brooks puts it, markets do tend to keep doing whatever they are already doing: trends have some tendency to continue as trends, trading ranges tend to continue as trading ranges, breakouts from trading ranges generally tend to fail, and sharp "V" (or inverted) reversals are rarer than gradual degradations of trends into trading ranges and vice-versa. So my underlying "thesis" is to categorize or identify "the regime", and then assume it is likely to continue, at least for a little while, and hopefully for long enough to get a trade based on the regime that has just been identified (by whatever means one chooses to use for that).

I have played with Markov Chains and I have thought about using Fuzzy Logic, but generally I have found that simple threshold logic and (regime) transition probabilities are adequate, within the range of uncertainty / variability / non-stationarity observed in practice in the markets. I can share more of my recent ideas on this with you if you like, but maybe taking it off-line from here. Please feel very welcome to follow up with me if you want to at [email protected]
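To show what I mean by "threshold logic plus transition probabilities", here is a toy sketch (not my actual rules): label each day's regime with a simple threshold on a rolling return, then estimate the empirical probability of moving from one regime to the next. The 20-day window and 2% band are arbitrary placeholders.

```python
import numpy as np
import pandas as pd

def regime_transition_matrix(close, window=20, band=0.02):
    # Threshold logic: classify each day by its trailing `window`-day return
    roll_ret = close.pct_change(window)
    regime = pd.Series(np.where(roll_ret > band, "trend_up",
                       np.where(roll_ret < -band, "trend_down", "range")),
                       index=close.index)
    regime = regime[roll_ret.notna()]            # drop the warm-up period

    # Empirical transition probabilities: yesterday's regime -> today's regime
    counts = pd.crosstab(regime.shift(1), regime)
    return counts.div(counts.sum(axis=1), axis=0)   # each row sums to 1
```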

Staying with the topic likely to be of general interest to most other readers, and coming back to the thread title, I guess the question I want to ask you (and anyone else) is: do you think DLPAL can add real value? Personally I am very open-minded to the idea, but I am also becoming skeptical that "specific pattern discovery" is necessarily the best way to go moving forward. Sure, there have certainly been some important efforts in this area, e.g. Tom DeMark's indicators, but I now tend to wonder if specific price-action patterns of any kind are inherently doomed to have only finite lives, whereas deeper underlying forms of market behavior (that are inherently less clearly defined) may be more robust. Any thoughts on that? Anyone?

Tony, regarding my thoughts on whether DLPAL can add real value, my honest answer would be that I really don't know because I haven't test-driven it yet. I have seen the architecture diagram of its deep learning process and it looks sound. I, too, am taking the deep learning path with my own custom-built hybrid architecture done in Keras and TensorFlow, which I hope will extract hidden relationships among price, fundamentals and alternative data into more accurate generalizations that translate into predictions.
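Not my actual architecture, but to give a flavour of what I mean by a hybrid multi-input model, here is a generic Keras functional-API sketch that merges separate price, fundamental and alternative-data feature streams into one prediction. The feature counts and layer sizes are placeholders.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_hybrid_model(n_price=30, n_fund=10, n_alt=5):
    # One input branch per data family
    price_in = layers.Input(shape=(n_price,), name="price_features")
    fund_in  = layers.Input(shape=(n_fund,),  name="fundamental_features")
    alt_in   = layers.Input(shape=(n_alt,),   name="alt_data_features")

    p = layers.Dense(32, activation="relu")(price_in)
    f = layers.Dense(16, activation="relu")(fund_in)
    a = layers.Dense(8,  activation="relu")(alt_in)

    # Merge the branches and map to a single probability-of-up output
    merged = layers.concatenate([p, f, a])
    merged = layers.Dense(32, activation="relu")(merged)
    out = layers.Dense(1, activation="sigmoid", name="p_up")(merged)

    model = Model([price_in, fund_in, alt_in], out)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```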

James, thanks for your comments. Using a combined approach with price, fundamentals & alternative data and a hybrid DL methodology as you mention is, I am sure, an excellent path to discovery.

Sometimes I reflect on the effort that people like Altman (Z-score) & Piotroski must have applied to arrive at their formulae, and how much easier (after the initial set-up effort) and potentially better any comparable results could be nowadays with modern software tools.

I have not worked with Keras / TensorFlow yet, partly because I keep spending time on technically simpler ideas that come out of struggling back & forth with some underlying "philosophical" issues, which I will share with you. I believe that the relatively complex sort of price patterns that people like Tom DeMark found in the past through their unusual personal intellectual abilities (think the old Dustin Hoffman movie Rain Man: "What are you doing?" "Counting cards") can certainly be found and exploited with DL, and some (or most?) such patterns will therefore get arbitraged away in due course.

Much simpler phenomena, such as trends that are fundamentally driven, will not get arbitraged away, because it doesn't matter how many people find them, even if the days of very long and very clean continuous trends (1970s & 80s) have gone. Many people say the markets tend to become more mean-reverting, and that notion might be true, but I am skeptical that it is necessarily a very useful observation in general. Personally I don't buy the "TF is dead" notion; I just think that because of MR behavior, the useful time-frame for TF is very, very much shorter now. You can probably read between my lines here, but I will now also put some of my ideas explicitly by coming at this from a different direction.

A lot of conventional thinking regarding ML for trading (Howard Bandy, for example) takes an approach something like this: define a direction and a minimum threshold size for trades, then use ML to identify when trades in the given direction will be "sufficiently" profitable to be worthwhile, based on some set of precursor conditions that one is trying to discover. As I see it, that's more or less what most people working with ML try to do. However, just as with the old-style NN work before it, the results are generally less spectacular than everyone hopes. People try to improve them using tools like confusion matrices & ROC charts to better discriminate between TP, FP, TN, FN, and use pre-processing for better feature engineering & extraction as inputs. OK, better, but it still doesn't work all that well.
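Just to pin down what I mean by the "conventional" formulation, here is a bare-bones sketch (purely illustrative; the feature set, horizon, return threshold and the random-forest choice are all placeholders, not anyone's actual system): label each bar 1 if the forward return clears a minimum threshold, fit a classifier, and look at the confusion matrix and ROC AUC.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix, roc_auc_score

def conventional_formulation(close, features, horizon=5, min_ret=0.01):
    # Target: 1 if the next `horizon`-day return exceeds the minimum threshold
    fwd_ret = (close.shift(-horizon) / close - 1.0).rename("fwd_ret")
    data = features.join(fwd_ret).dropna()
    X = data[features.columns]
    y = (data["fwd_ret"] > min_ret).astype(int)

    split = int(len(X) * 0.7)                    # crude in-sample / out-of-sample split
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X.iloc[:split], y.iloc[:split])

    proba = clf.predict_proba(X.iloc[split:])[:, 1]
    preds = (proba >= 0.5).astype(int)
    print(confusion_matrix(y.iloc[split:], preds))   # rows: actual, cols: predicted
    print("ROC AUC:", roc_auc_score(y.iloc[split:], proba))
```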

My observation is that the performance of any sort of ML is critically dependent on the exact way in which the problem is formulated, and I think the real issue here is that the problem is generally being formulated in the wrong way! For real practical trading, it is not just a pure ML problem of improving Accuracy or F1 or the Matthews Correlation Coefficient. In real-life trading there is a huge difference between FP and FN. Personally I really don't care if I miss good trades, because there will always be lots more opportunities, as long as I haven't blown my account. What I most want to do in practice is to avoid bad trades. So any sort of confusion matrix or similar ML conceptual tool definitely needs a cost-benefit weight function to account for this. Try looking that up on the internet and see how many (few) traders are considering that aspect!
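By way of example, one simple way to express that cost-benefit weighting with scikit-learn (my own illustration; the 5:1 cost ratio is an arbitrary placeholder) is a custom scorer that penalises false positives, i.e. bad trades actually taken, far more heavily than false negatives, i.e. good trades missed:

```python
from sklearn.metrics import confusion_matrix, make_scorer

def asymmetric_cost(y_true, y_pred, fp_cost=5.0, fn_cost=1.0):
    # A bad trade taken (FP) is assumed 5x as costly as a good trade missed (FN)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return -(fp_cost * fp + fn_cost * fn)        # less negative = better

# Usable directly in cross-validation or a grid search
cost_scorer = make_scorer(asymmetric_cost, greater_is_better=True)
```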

Now I make it simpler and take a step back from ML altogether. Instead of a conventional ML trading problem, if what I really want more than anything is to avoid bad trades, then I turn the whole thing back-to-front and make my objective to identify and avoid potentially bad trading situations. This includes any random & chaotic behavior, MR or any other behavior consisting of trends that are just too short or too small in amplitude to be exploitable, any rapid reversals and small-range bars, anything that produces whipsaws in real trading. Then I lump ALL of these things together and call them "non-tradable regimes". My contention is that identifying and avoiding these is actually an easier problem than the conventional ML approach to trading.

After all the non-tradable stuff (which turns out to be most of the time) is removed, it is then trivially easy to see what the remaining tradable direction is ;-)
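To make the "backwards" formulation concrete, here is a rough sketch of how one might label bars tradable vs. non-tradable (illustrative only, not my actual rules): a bar is tradable only if the forward window moved far enough, and cleanly enough, to be worth holding; everything else, i.e. chop, whipsaw and tiny trends, gets lumped into "non-tradable". The horizon and thresholds are arbitrary placeholders.

```python
import numpy as np
import pandas as pd

def label_tradable(close, horizon=10, min_move=0.03, max_noise=1.5):
    # Net forward move over the horizon
    fwd = close.shift(-horizon) / close - 1.0
    # Path "noise": total absolute daily movement over the window vs. net move
    abs_path = (close.pct_change().abs()
                     .rolling(horizon).sum().shift(-horizon))
    noise_ratio = abs_path / fwd.abs().replace(0, np.nan)

    # Tradable = a big enough move that was reasonably clean; all else is excluded
    tradable = (fwd.abs() >= min_move) & (noise_ratio <= max_noise)
    direction = np.sign(fwd).where(tradable, 0)   # +1 / -1, 0 = stand aside
    return pd.DataFrame({"tradable": tradable, "direction": direction})
```

Once the non-tradable bars are filtered out, the `direction` column is exactly the "trivially easy" part: the sign of what remains.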

The difference between this and a conventional ML approach is not just one of semantics. By treating the whole trading problem backwards in this way, it becomes very much easier .... to the point that maybe it doesn't really even need much sophisticated ML at all, just as long as you a) believe in market regimes, and b) believe that missing good trades is actually OK.

I hope these ideas resonate with you. Comments / feedback welcome. Cheers, best wishes, Tony :)

Hi Tony,

One of the biggest challenges in applying deep learning, or any AI-based prediction model, is how to pose the trading strategy in the form of input and output matrices that express the model's desired objective(s). The beauty of this type of approach is that it makes no a priori assumptions about the distribution of the input factors; they are just normalized and/or standardized for better processing within the maze of neurons (processing units), which act like on/off switches aggregated into a weighting scheme that minimizes prediction error.

Determining the target variable is equally challenging within the context of trading. You can do simple binary classification (1 for up next day, 0 for down next day), regression (e.g. 5-day returns), event-based multiclass classification (the triple-barrier method), or any other objective you want to measure. So, as you can imagine, this is not your typical plug-and-play approach and it takes a lot of trial and error. But this could change very soon with the big push towards automating the ML/AI process through hyperparameter optimization and model ensembling, courtesy of the present increase in computing resources and the availability of cloud computing, which has penetrated the mainstream. The congruence of computing power, data availability and AI solutions has produced real-life breakthroughs such as self-driving cars, facial/image recognition, computer vision, automation, natural language processing, etc., but the translation to financial trading is only just getting there and may currently be in the hands of a select few.
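To illustrate the three target formulations I mentioned, here is a small sketch (illustrative only; the horizon and barrier width are placeholders, and the triple-barrier version is deliberately simplified): next-day binary direction, a 5-day forward return for regression, and a triple-barrier label of +1 if the upper barrier is hit first, -1 if the lower barrier is hit first, 0 if neither is hit within the horizon.

```python
import pandas as pd

def make_targets(close, horizon=5, barrier=0.02):
    y_binary = (close.shift(-1) > close).astype(int)        # 1 = up next day
    y_regression = close.shift(-horizon) / close - 1.0      # 5-day forward return

    labels = []
    for i in range(len(close)):
        # Forward path over the next `horizon` bars, as returns from today's close
        window = close.iloc[i + 1:i + 1 + horizon] / close.iloc[i] - 1.0
        up = window[window >= barrier]
        dn = window[window <= -barrier]
        if len(up) and (not len(dn) or up.index[0] < dn.index[0]):
            labels.append(1)                                 # upper barrier hit first
        elif len(dn):
            labels.append(-1)                                # lower barrier hit first
        else:
            labels.append(0)                                 # neither barrier hit
    y_triple_barrier = pd.Series(labels, index=close.index)

    return pd.DataFrame({"binary": y_binary,
                         "regression": y_regression,
                         "triple_barrier": y_triple_barrier})
```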


About two years ago I attended a presentation at the NY Trading Show featuring an AI-based quant hedge fund. Two partners presented: one a financial trader, the other a game developer. Their prediction model is a layered reinforcement learner, post-processed by a deep learner that spits out buy/sell decisions on thousands of stocks. Their actual performance, as they claimed over their 2+ years in operation, was high double-digit net returns. During the shop talk after their presentation, I asked the game developer what the objective of his training was. His answer was that the learning model was designed to avoid bad trading situations. I remember his analogy: we train it the way you teach a person not to cross a street when the chances are that, if he did, he would get hit. I said to myself then: brilliant, we do this every day when we cross a street. Our brain calculates and decides when it is safe or not safe to cross. We can design the model to process things this way. So there is something in your direction of thinking! Cheers!

Thanks for your comments James. Very much appreciated.

Of course, in the limit, safety = not taking bad trades collapses into not trading at all, which forgoes any compounding under TWR = (1 + average trade return)^N.
So, more practically, "not taking bad trades" means trading only infrequently and waiting extremely patiently for the very BEST trading opportunities.
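A quick back-of-the-envelope illustration of that trade-off (all numbers invented): with TWR = (1 + r)^N, trading rarely only works if the average edge per trade grows enough to make up for the smaller N.

```python
# Two hypothetical traders ending the year with roughly the same terminal wealth
twr_frequent  = (1 + 0.002) ** 200   # 200 small-edge trades at +0.2% avg -> ~1.49
twr_selective = (1 + 0.069) ** 6     # 6 carefully chosen trades at +6.9% avg -> ~1.49
print(round(twr_frequent, 2), round(twr_selective, 2))
```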

Reminds me of a time many years ago when I was friendly with a guy who worked for a futures broker. While he was employed there he gave me the "usual party line" about how good it was for their clients. However, after he left, he was a lot more open about the reality. When I asked him what percentage of their trading clients ended up eventually being losers, he said: "Actually .... all of them ..... it's just that the better ones take longer to lose all their money". Then he thought for a while and said: "No, wait, there is ONE client who is a winner. A Chinese guy who watches very very carefully, listens to but then ignores all of the buy & sell recommendations, and only trades at most 3 or 4 times a year using futures options. Consistently wins and makes very good money".

Sometimes I think most people, including ML aficionados and Q (and me), are probably over-trading. Of course the brokers & "liquidity providers" love it.
Cheers, best regards. Tony :-)