This example is inspired by one from scikit-learn; I used it during my talk at the NYSE meetup earlier this week.
The idea is to predict hidden states in the daily price fluctuations and trading volume using a Hidden Markov Model (see the graphic). This model assumes the existence of discrete hidden states. Each hidden state has a certain probability of moving to another state at the next time point (thus, the current state depends only on the previous one -- that's the Markov property). In addition, each hidden state emits an observable event (in this case, price fluctuation and volume) with a certain probability. This example infers the hidden-state transition probabilities and the emission distributions of the observable events, and then tries to predict the hidden state of the current market.
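A minimal sketch of that modelling step, assuming the hmmlearn package (the successor of the old sklearn.hmm module) and a pandas DataFrame with illustrative `Close` and `Volume` columns -- not necessarily the exact code used in the notebook:

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def fit_market_hmm(quotes, n_states=5):
    """Fit a Gaussian HMM to daily price fluctuation and volume."""
    # Observable events: close-to-close price change and traded volume,
    # stacked into one observation matrix of shape (n_days - 1, 2).
    diff = np.diff(quotes["Close"].values)
    volume = quotes["Volume"].values[1:]
    X = np.column_stack([diff, volume])

    # Gaussian emissions: one mean/covariance pair per hidden state;
    # EM also learns the state transition matrix.
    model = GaussianHMM(n_components=n_states,
                        covariance_type="diag",
                        n_iter=1000)
    model.fit(X)

    # Most likely hidden-state sequence; the last entry is the inferred
    # state of the current market.
    hidden_states = model.predict(X)
    return model, hidden_states
```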
Since we are continuously recomputing the HMM, I use the previously learned means as a prior for the next model, so the state structure we have already learned carries over from one fit to the next.
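One way to express that warm start in hmmlearn (a sketch, with `prev_model` and `X_new` as placeholders) is to exclude the means from re-initialisation and seed them from the previous fit:

```python
from hmmlearn.hmm import GaussianHMM

def refit_with_prior_means(prev_model, X_new, n_states=5):
    """Refit on a new window, starting EM from the previously learned means."""
    model = GaussianHMM(n_components=n_states,
                        covariance_type="diag",
                        n_iter=1000,
                        init_params="stc")   # initialise start probs, transitions,
                                             # covariances -- but not the means
    model.means_ = prev_model.means_         # carry the old means over as the EM start
    model.fit(X_new)
    return model
```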
Finally, this is so far just an analysis. Turning it into a trading strategy would require inferring what the specific states mean and then placing orders in response to that. This might not be quite as easy (looking at the inferred means per state is a good starting point). Moreover, the Markov assumption that the current state depends only on the previous one will almost certainly be violated, at least for the price fluctuations -- it is a well-known fact that returns have almost zero autocorrelation. Volume, however, does have autocorrelation, so the model might work better there. In addition, it's not clear that there are discrete states underneath; the state space could be continuous. If that's the case, a Kalman filter might be interesting to explore.
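A quick way to sanity-check that caveat on your own data is to compare the lag-1 autocorrelation of returns and volume with pandas (column names again illustrative):

```python
import pandas as pd

def lag1_autocorr(quotes):
    """Lag-1 autocorrelation of daily returns vs. trading volume."""
    returns = quotes["Close"].pct_change().dropna()
    volume = quotes["Volume"].astype(float)
    return {
        "returns": returns.autocorr(lag=1),  # typically close to zero
        "volume": volume.autocorr(lag=1),    # typically clearly positive
    }
```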