In [1]:
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from pykalman import KalmanFilter
import statsmodels.api as sm

Using the Kalman Filter in Algorithmic Trading

Dr. Aidan O'Mahony

QuantCon Singapore, November 2016

Introduction

  • Markets are dynamic and ever changing
  • Traders and trading algorithms must adapt
    • Remain profitable
    • Reduce risk and market exposure
  • How do we build adaptive algorithms?
    • Update parameters monthly, weekly etc.
    • Rolling windows of data to calculate parameters
    • Machine learning techniques
  • Demonstrate an adaptive strategy
    • Simple pair trading arbitrage strategy
    • Apply Kalman filter

Pair Trading

  • Common technique involving two or more assets
  • Assets have a cointegrating and mean-reverting relationship
  • Exploit mispricing of assets
  • Pair trading involves the following steps (sketched in code below):
    1. Identify possible pairs of assets
    2. Construct spread from the asset relationship
    3. Test for cointegration
    4. Open a long-short position when mispricing occurs
    5. Profit from future correction of the mispricing
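A minimal sketch of steps 2 and 3 for one candidate pair, using statsmodels' Engle-Granger cointegration test (the notebook itself tests the OLS residuals with an ADF test later on); prices is an assumed DataFrame of daily close prices with one column per asset:

In [ ]:
import statsmodels.api as sm

def test_pair(prices, y_sym, x_sym):
    """Fit a hedge ratio by OLS and test the pair for cointegration."""
    x = sm.add_constant(prices[x_sym], prepend=False)
    ols = sm.OLS(prices[y_sym], x).fit()
    spread = prices[y_sym] - ols.params[x_sym] * prices[x_sym] - ols.params['const']

    # Engle-Granger cointegration test; a small p-value suggests a mean-reverting spread
    t_stat, p_value, _ = sm.tsa.stattools.coint(prices[y_sym], prices[x_sym])
    return spread, p_value

For the PEP/KO data loaded below this would be called as test_pair(data, 'KO', 'PEP').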

Pair Trading

  • Pros:
    • Negligible beta and therefore minimal exposure to the market
    • Returns are uncorrelated with market returns
  • Cons:
    • Implementation and execution is relatively complex
    • Identification of pairs is difficult and computationally expensive
    • The cointegrating relationship can change or break at any time

PEP - KO Pair Trade Example

In [2]:
secs = ['PEP', 'KO']
data = get_pricing(
    symbols(secs), start_date='2006-1-1', end_date='2008-8-1', 
    fields='close_price', frequency='daily')
data.columns = [sec.symbol for sec in data.columns]
data.index.name = 'Date'
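get_pricing and symbols are part of the Quantopian research API. Outside that environment, roughly equivalent daily close prices could be pulled from a third-party source; a minimal sketch, assuming the yfinance package is installed:

In [ ]:
import yfinance as yf

raw = yf.download(secs, start='2006-01-01', end='2008-08-01')
data = raw['Close'][secs]     # daily close prices, one column per ticker
data.index.name = 'Date'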
In [3]:
(1 + data.pct_change()).cumprod().plot();
plt.ylabel('Cumulative Return');
  • Plot data and use colormap to indicate the date each point corresponds to

PEP - KO Relationship

In [4]:
cm = plt.get_cmap('jet')
colors = np.linspace(0.1, 1, len(data))
sc = plt.scatter(data[secs[0]], data[secs[1]], s=30, c=colors, cmap=cm, edgecolor='k', alpha=0.7)
cb = plt.colorbar(sc)
cb.ax.set_yticklabels([str(p.date()) for p in data[::len(data)//9].index])
plt.xlabel(secs[0])
plt.ylabel(secs[1]);

Construction of Spread

  • Linear regression:
$$y({\bf x}) = \beta^T {\bf x} + \epsilon$$
$$\beta^T = (\beta_0, \beta_1, \ldots, \beta_p)$$
$$\epsilon \sim \mathcal{N}(\mu, \sigma^2)$$
  • For the one-dimensional case:
$$\beta^T = (\beta_0, \beta_1)$$
$${\bf x} = \begin{pmatrix} 1 \\ x \end{pmatrix}$$
  • In our example:
$$x = p^\text{PEP}$$
$$y({\bf x}) = p^\text{KO}$$
  • The spread is constructed as:
$$\epsilon = p^\text{KO} - (\beta_0, \beta_1) \begin{pmatrix} 1 \\ p^\text{PEP} \end{pmatrix}$$
$$\epsilon = p^\text{KO} - \beta_1 p^\text{PEP} - \beta_0$$
In [5]:
x = sm.add_constant(data[secs[0]], prepend=False)
ols = sm.OLS(data[secs[1]], x).fit()
beta = ols.params
y_fit = [x.min().dot(beta), x.max().dot(beta)]
In [6]:
print(ols.summary2())
                  Results: Ordinary least squares
==================================================================
Model:              OLS              Adj. R-squared:     0.885    
Dependent Variable: KO               AIC:                2043.5547
Date:               2016-11-04 06:16 BIC:                2052.5086
No. Observations:   650              Log-Likelihood:     -1019.8  
Df Model:           1                F-statistic:        5002.    
Df Residuals:       648              Prob (F-statistic): 6.54e-307
R-squared:          0.885            Scale:              1.3539   
-------------------------------------------------------------------
            Coef.    Std.Err.     t      P>|t|    [0.025    0.975] 
-------------------------------------------------------------------
PEP          0.6419    0.0091   70.7230  0.0000    0.6240    0.6597
const      -16.7652    0.5982  -28.0280  0.0000  -17.9397  -15.5906
------------------------------------------------------------------
Omnibus:              4.083         Durbin-Watson:           0.075
Prob(Omnibus):        0.130         Jarque-Bera (JB):        4.052
Skew:                 -0.193        Prob(JB):                0.132
Kurtosis:             2.995         Condition No.:           864  
==================================================================

Linear Regression

In [7]:
cm = plt.get_cmap('jet')
colors = np.linspace(0.1, 1, len(data))
sc = plt.scatter(data[secs[0]], data[secs[1]], s=50, c=colors, cmap=cm, 
                 edgecolor='k', alpha=0.7, label='Price Data')
plt.plot([x.min()[0], x.max()[0]], y_fit, '--b', linewidth=3, label='OLS Fit')
plt.legend()
cb = plt.colorbar(sc)
cb.ax.set_yticklabels([str(p.date()) for p in data[::len(data)//9].index])
plt.xlabel(secs[0])
plt.ylabel(secs[1]);

PEP - KO Spread

In [8]:
spread = pd.DataFrame(data[secs[1]] - np.dot(sm.add_constant(data[secs[0]], prepend=False), beta))
spread.columns = [secs[0] + '-' + secs[1] + ' Spread']
In [9]:
spread.plot(style=['g']);

Test for Cointegration

In [10]:
# check for cointegration
adf = sm.tsa.stattools.adfuller(spread['PEP-KO Spread'], maxlag=1)
print('ADF test statistic: %.02f' % adf[0])
print('p-value: %.03f' % adf[1])
ADF test statistic: -3.28
p-value: 0.016
  • Augmented Dickey-Fuller test on the spread (Engle-Granger test for cointegration):
    • ADF test statistic: -3.28
    • p-value: 0.016
In [11]:
spread['Middle'] = spread['PEP-KO Spread'].mean()
std = spread['PEP-KO Spread'].std()
spread['Upper'] = spread['Middle'] + std
spread['Lower'] = spread['Middle'] - std

Trading Rules

In [12]:
spread.plot(style=['g', '--b', '--y', '--y']);

Trading Rules

In [13]:
trades = pd.DataFrame(np.nan, index=spread.index, columns=['Buy', 'Sell'])
In [14]:
s = spread['PEP-KO Spread']

# Enter long when the spread crosses below the lower band; exit when it crosses back above the middle band
trades.loc[(s.shift(1) > spread['Lower']) & (s < spread['Lower']), 'Buy'] = 1
trades.loc[(s.shift(1) < spread['Middle']) & (s > spread['Middle']), 'Buy'] = 0

# Keep only the bars where the position changes, and place the markers on the band for plotting
trades['Buy'] = trades['Buy'].ffill()
trades['Buy'] = trades['Buy'].diff().shift(-1)
trades.loc[trades['Buy'] == 0, 'Buy'] = np.nan
trades.loc[trades['Buy'] == -1, 'Buy'] = 0
trades['Buy'] *= spread['Lower']

# Enter short when the spread crosses above the upper band; exit when it crosses back below the middle band
trades.loc[(s.shift(1) < spread['Upper']) & (s > spread['Upper']), 'Sell'] = 1
trades.loc[(s.shift(1) > spread['Middle']) & (s < spread['Middle']), 'Sell'] = 0

trades['Sell'] = trades['Sell'].ffill()
trades['Sell'] = trades['Sell'].diff().shift(-1)
trades.loc[trades['Sell'] == 0, 'Sell'] = np.nan
trades.loc[trades['Sell'] == -1, 'Sell'] = 0
trades['Sell'] *= spread['Upper']
In [15]:
spread.plot(style=['g', '--b', '--y', '--y'])
plt.plot(trades['Buy'], 'm^', markersize=12, label='Buy')
plt.plot(trades['Sell'], 'cv', markersize=12, label='Sell')
plt.legend(loc=0);

Out of Sample

In [16]:
secs = ['PEP', 'KO']
data_oos = get_pricing(
    symbols(secs), start_date='2008-8-1', end_date='2010-1-1', 
    fields='close_price', frequency='daily')
data_oos.columns = [sec.symbol for sec in data_oos.columns]
data_oos.index.name = 'Date'
In [17]:
spread_oos = spread.reindex(spread.index.union(data_oos.index))
In [18]:
spread_oos['PEP-KO Spread OOS'] = data_oos[secs[1]] - np.dot(
        sm.add_constant(data_oos[secs[0]], prepend=False), beta)
In [19]:
spread_oos[['Middle', 'Upper', 'Lower']] = spread_oos[['Middle', 'Upper', 'Lower']].ffill()
In [20]:
spread_oos.plot(style=['g', '--b', '--y', '--y', 'r']);

Why?

In [21]:
data_all = pd.concat([data, data_oos])
cm = plt.get_cmap('jet')
colors = np.linspace(0.1, 1, len(data_all))
sc = plt.scatter(data_all[secs[0]], data_all[secs[1]], s=50, c=colors, cmap=cm, 
                 edgecolor='k', alpha=0.7, label='Price Data')
plt.plot([x.min()[0], x.max()[0]], y_fit, '--b', linewidth=3, label='OLS Fit')
plt.legend()
cb = plt.colorbar(sc)
cb.ax.set_yticklabels([str(p.date()) for p in data_all[::len(data_all)//9].index])
plt.xlabel(secs[0])
plt.ylabel(secs[1]);

Solution

  • Dynamically update the beta coefficients
  • How?
    • Recalculate the OLS regression coefficients every n days
    • Use a moving window of data to perform the OLS regression (sketched below)
    • Use a state space model of the OLS regression (e.g. the Kalman filter)
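A minimal sketch of the moving-window alternative (not part of the original notebook), assuming the combined data_all price DataFrame defined above; it refits the OLS hedge ratio on a trailing window each day:

In [ ]:
window = 90  # trailing window length in trading days (an arbitrary choice)

rows = []
for end in range(window, len(data_all)):
    chunk = data_all.iloc[end - window:end]
    x_win = sm.add_constant(chunk[secs[0]], prepend=False)
    params = sm.OLS(chunk[secs[1]], x_win).fit().params
    rows.append((data_all.index[end], params[secs[0]], params['const']))

beta_roll = pd.DataFrame(rows, columns=['Date', 'Slope', 'Intercept']).set_index('Date')

Newer statsmodels versions also provide statsmodels.regression.rolling.RollingOLS, which computes the same rolling coefficients more efficiently.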

Kalman Filter

  • Named after Rudolf E. Kálmán and is over 50 years old
  • Still one of the most important and common data fusion algorithms in use today
    • Small computational requirements
    • Elegant recursive properties
    • Optimal estimator for one-dimensional linear systems with Gaussian error statistics
  • Typical uses and applications:
    • Smoothing noisy data
    • Providing estimates of parameters of interest
    • Global positioning system receivers
    • Smoothing the output from laptop trackpads
    • Algorithmic trading
  • Everyone in the room has probably inadvertently used a Kalman filter today
  • The most famous early use of the Kalman filter was in the Apollo navigation computer that took Neil Armstrong to the moon, and (most importantly) brought him back

Kalman Filter

  • State space model
  • Describes a set of hidden state variables, $\theta_t$
  • Transition equation
$$\theta_t = G_t \theta_{t-1} + w_t$$
  • Observation equation
$$y_t = F_t \theta_t + v_t$$
  • $w_t$ and $v_t$ are Gaussian white noise
  • Recall the linear regression relationship
$$y({\bf x}) = \beta^T {\bf x} + \epsilon$$
$$p^\text{KO} = (\beta_0, \beta_1) \begin{pmatrix} 1 \\ p^\text{PEP} \end{pmatrix} + \epsilon$$
  • Apply this relationship to the Kalman filter by setting
$$\text{Hidden state: } \theta_t = \beta_t$$
$$\text{Transition matrix: } G_t = {\bf I}$$
$$\text{Observations: } y_t = p^\text{KO}$$
$$\text{Observation matrix: } F_t = \begin{pmatrix} 1 & p^\text{PEP} \end{pmatrix}$$
  • Transition equation
$$\beta_{t+1} = {\bf I} \beta_t + w_t$$
  • Observation equation (one predict/update step of this filter is sketched below)
$$p^\text{KO} = (\beta_0, \beta_1) \begin{pmatrix} 1 \\ p^\text{PEP} \end{pmatrix} + v_t$$
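pykalman performs the recursion below, but for intuition here is a minimal NumPy sketch (not from the original talk) of a single predict/update step under this model, with identity transition $G_t = {\bf I}$; Q and R stand for the transition and observation noise covariances:

In [ ]:
import numpy as np

def kalman_step(beta, P, y, x, Q, R):
    """One Kalman filter iteration for y_t = F_t beta_t + v_t with G_t = I.

    beta : (2,) current state estimate (intercept, slope)
    P    : (2, 2) current state covariance
    y, x : scalar observation and regressor (e.g. KO and PEP prices)
    Q, R : transition and observation noise covariances
    """
    F = np.array([[1.0, x]])                      # observation matrix F_t

    # Predict: the state follows a random walk, so the mean is unchanged
    beta_pred = beta
    P_pred = P + Q

    # Update: correct the prediction using the new observation
    innovation = y - F.dot(beta_pred)             # prediction error
    S = F.dot(P_pred).dot(F.T) + R                # innovation variance
    K = P_pred.dot(F.T) / S                       # Kalman gain, shape (2, 1)
    beta_new = beta_pred + (K * innovation).ravel()
    P_new = P_pred - K.dot(F).dot(P_pred)
    return beta_new, P_new

This sketch uses the slide ordering $\theta_t = (\beta_0, \beta_1)$; the pykalman cell below puts the constant column last in its observation matrix, so its state order is (slope, intercept).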
In [22]:
obs_mat = sm.add_constant(data_all[secs[0]].values, prepend=False)[:, np.newaxis]

kf = KalmanFilter(n_dim_obs=1, n_dim_state=2, # y is 1-dimensional, the state (slope, intercept) is 2-dimensional
                  initial_state_mean=np.ones(2),
                  initial_state_covariance=np.ones((2, 2)),
                  transition_matrices=np.eye(2),
                  observation_matrices=obs_mat,
                  observation_covariance=10**2,
                  transition_covariance=0.01**2 * np.eye(2))
In [23]:
state_means, state_covs = kf.filter(data_all[secs[1]])
In [33]:
beta_kf = pd.DataFrame({'Slope': state_means[:, 0], 'Intercept': state_means[:, 1]},
                   index=data_all.index)

Dynamic Beta from Kalman Filter

In [35]:
beta_kf.plot(subplots=True);
In [44]:
# visualize the relationship between asset prices over time
cm = plt.get_cmap('jet')
colors = np.linspace(0.1, 1, len(data_all))
sc = plt.scatter(data_all[secs[0]], data_all[secs[1]], 
                 s=50, c=colors, cmap=cm, edgecolor='k', alpha=0.7)
cb = plt.colorbar(sc)
cb.ax.set_yticklabels([str(p.date()) for p in data_all[::len(data_all)//9].index]);
plt.xlabel(secs[0])
plt.ylabel(secs[1])

# add regression lines
step = 25
xi = np.linspace(data_all[secs[0]].min(), data_all[secs[0]].max(), 2)
colors_l = np.linspace(0.1, 1, len(state_means[::step]))
for i, b in enumerate(state_means[::step]):
    plt.plot(xi, b[0] * xi + b[1], alpha=.5, lw=2, c=cm(colors_l[i]))

PEP - KO Spread with Dynamic Beta

In [27]:
spread_kf = data_all[secs[1]] - data_all[secs[0]] * beta_kf['Slope'] - beta_kf['Intercept']
In [51]:
spread_kf.plot();
In [50]:
spread_oos.plot(style=['g', '--b', '--y', '--y', 'r']);
spread_kf.plot(label='Dynamic PEP-KO Spread')
plt.legend(loc=0)
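The static rules above used fixed bands one standard deviation either side of the spread mean. A hedged sketch (not in the original talk) of making the bands adaptive as well is to derive them from the filter's one-step-ahead prediction variance, reusing obs_mat, state_covs, spread_kf and data_all from the cells above and the same noise values passed to KalmanFilter:

In [ ]:
R = 10.0 ** 2                 # observation noise variance, as passed to KalmanFilter above
Q = 0.01 ** 2 * np.eye(2)     # transition noise covariance, as passed to KalmanFilter above

# One-step-ahead variance of the observation: F_t (P_{t-1|t-1} + Q) F_t^T + R
pred_var = np.array([
    (obs_mat[t].dot(state_covs[t - 1] + Q).dot(obs_mat[t].T)).item() + R
    for t in range(1, len(obs_mat))
])
band = pd.Series(np.sqrt(pred_var), index=data_all.index[1:])

# Approximate dynamic bands around the Kalman spread (which hovers near zero by construction)
bands_kf = pd.DataFrame({'Spread': spread_kf.iloc[1:], 'Upper': band, 'Lower': -band})
bands_kf.plot(style=['g', '--y', '--y']);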

Further Reading

  1. O'Mahony, A. (2014). Online Linear Regression using a Kalman Filter
  2. Faragher, R. (2012). Understanding the Basis of the Kalman Filter Via a Simple and Intuitive Derivation [Lecture Notes]. IEEE Signal Processing Magazine, 29(5), 128-132. doi:10.1109/msp.2012.2203621
  3. Halls-Moore, M. (2016). Dynamic Hedge Ratio Between ETF Pairs Using the Kalman Filter
  4. Halls-Moore, M. (2014). Backtesting An Intraday Mean Reversion Pairs Strategy Between SPY And IWM

Thank You
