One way to get a leg up when researching a trading strategy is to look for alpha in datasets that are used less often than pricing data. The datasets that receive the most attention are the least likely to have much signal left in them. We believe that incorporating non-pricing datasets into your models is one of the biggest improvements you can make toward finding trading signals. Rather than feeding the raw data into a model, the best approach is to develop a hypothesis of how the data might be used to forecast returns. Toward that end, let's walk through an example workflow that uses Blaze on a partner dataset.
To start out, let's investigate a partner dataset using Blaze. Blaze allows you to define expressions for selecting and transforming data without loading all of the data into memory. This makes it a nice tool for interacting with large amounts of data in research.
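As a quick illustration of how Blaze defers work, here is a minimal sketch against a local pandas DataFrame (the toy data and column names are made up for illustration):
import blaze as bz
import pandas as pd
# Hypothetical toy data to illustrate deferred Blaze expressions.
df = pd.DataFrame({'sid': [24, 24, 5061], 'ret': [0.01, -0.02, 0.03]})
data = bz.data(df)
# Building the expression does not touch the data yet.
expr = data[data.sid == 24].ret
# Work only happens when we compute the expression.
print bz.compute(expr.mean())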
import matplotlib.pyplot as plt
import pandas as pd
# http://blaze.readthedocs.io/en/latest/index.html
import blaze as bz
from zipline.utils.tradingcalendar import get_trading_days
from quantopian.interactive.data.alpha_vertex import precog_top_500 as dataset
Interactive datasets are Blaze expressions. Blaze expressions have a similar API to pandas, with some differences.
type(dataset)
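We can also inspect the expression's datashape, which lists the dataset's fields and their types:
dataset.dshape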
Let's start by looking at a sample of data from the Alpha Vertex PreCog dataset for AAPL. PreCog is a machine learning model that incorporates hundreds of data points covering the economy, company financial performance, market data, and investor sentiment to generate its outlooks.
aapl_sid = symbols('AAPL').sid
# Look at a sample of AAPL PreCog data starting from 2016-01-01.
dataset[(dataset.sid == aapl_sid) & (dataset.asof_date >= '2016-01-01')].peek()
Let's see how many securities are covered by this dataset.
num_sids = bz.compute(dataset.sid.distinct().count())
print 'Number of sids in the data: %d' % num_sids
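We can also check how current the data is by computing the most recent asof_date in the dataset:
latest_date = bz.compute(dataset.asof_date.max())
print 'Most recent asof_date: %s' % latest_date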
Let's go back to AAPL and look at the signal each day. To do this, we can create a Blaze expression that masks for the AAPL sid (24) and selects dates from 2016-01-01 onward.
# Mask for AAPL.
stock_mask = (dataset.sid == aapl_sid)
# Blaze expression for the AAPL PreCog signal since 2016-01-01, sorted by date.
av_expr = dataset[stock_mask & (dataset.asof_date >= '2016-01-01')].sort('asof_date')
Compute the expression. This returns the result in a pandas DataFrame.
av_df = bz.compute(av_expr)
Plot the PreCog signal for AAPL.
av_df.plot(x='asof_date', y='predicted_five_day_log_return')
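Since we imported matplotlib at the top, we can optionally label the chart:
# Optional labels for the plot above.
plt.title('PreCog predicted 5-day log return for AAPL')
plt.xlabel('Date')
plt.ylabel('Predicted 5-day log return')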
Great! Now that we have a dataset we want to use, let's use it in a pipeline. In addition to the PreCog dataset, we will also use the EventVestor Earnings Calendar dataset to avoid trading around earnings announcements, and the EventVestor Mergers & Acquisitions dataset to avoid trading acquisition targets. We will work with the free versions of these datasets.
Specifically, let's build a pipeline that ranks stocks by the prediction they received from the PreCog model. Let's also add a filter where we only consider stocks in the Q1500US that have had the daily direction of their prediction (positive or negative) correct at least 8 out of the last 15 trading days (above 50%).
This should leave us with a large basket of stocks to trade, which is good for both risk management and capacity considerations.
from quantopian.pipeline import Pipeline, CustomFactor
from quantopian.research import run_pipeline
from quantopian.pipeline.factors import RollingLinearRegressionOfReturns
from quantopian.pipeline.filters.morningstar import Q1500US
from quantopian.pipeline.classifiers.morningstar import Sector
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.data.alpha_vertex import precog_top_500 as precog
# EventVestor Earnings Calendar free from 01 Feb 2007 to 1 year ago.
from quantopian.pipeline.factors.eventvestor import (
BusinessDaysUntilNextEarnings,
BusinessDaysSincePreviousEarnings,
)
# EventVestor Mergers & Acquisitions free from 01 Feb 2007 to 1 year ago.
from quantopian.pipeline.filters.eventvestor import IsAnnouncedAcqTarget
import numpy as np
import pandas as pd
class PredictionQuality(CustomFactor):
"""
Create a customized factor to calculate the prediction quality
for each stock in the universe.
Compares the percentage of predictions with the correct sign
over a rolling window (3 weeks) for each stock.
"""
# data used to create custom factor
inputs = [precog.predicted_five_day_log_return, USEquityPricing.close]
# change this to what you want
window_length = 15
    def compute(self, today, assets, out, pred_ret, px_close):
        # Realized 5-day log returns; np.roll wraps around, so drop the
        # first 5 rows, which are meaningless.
        log_ret5 = np.log(px_close) - np.log(np.roll(px_close, 5, axis=0))
        log_ret5 = log_ret5[5:]

        n = len(log_ret5)

        # Align predictions with the realized returns they forecast: the
        # prediction made on day i is compared to the realized return
        # over the following 5 days.
        pred_ret = pred_ret[:n]

        # 0 where the signs agree, 1 where they disagree.
        err = np.absolute((np.sign(log_ret5) - np.sign(pred_ret))) / 2.0

        # Quality measure: 1 minus an exponentially weighted mean error,
        # weighting recent predictions more heavily.
        pred_quality = (1 - pd.DataFrame(err).ewm(min_periods=n, com=n).mean()).iloc[-1].values

        out[:] = pred_quality
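To build intuition for what this factor measures, here is a small standalone sketch of the sign-agreement logic on toy arrays (the numbers are made up for illustration):
import numpy as np
import pandas as pd
# Hypothetical realized and predicted 5-day log returns for one stock.
realized = np.array([0.01, -0.02, 0.03, -0.01, 0.02, -0.03, 0.01, 0.02, -0.01, 0.01])
predicted = np.array([0.02, -0.01, 0.01, 0.02, 0.01, -0.02, -0.01, 0.01, -0.02, 0.02])
n = len(realized)
# 0 where the signs agree, 1 where they disagree.
err = np.absolute(np.sign(realized) - np.sign(predicted)) / 2.0
# The same exponentially weighted average used in PredictionQuality.
quality = (1 - pd.Series(err).ewm(min_periods=n, com=n).mean()).iloc[-1]
print quality  # 8 of 10 signs agree, so quality comes out near 0.8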
def make_pipeline():
"""
    Dynamically apply the custom factor defined above to
    select candidate stocks from the PreCog universe.
"""
pred_quality_thresh = 0.5
# Filter for stocks that are not within 2 days of an earnings announcement.
not_near_earnings_announcement = ~((BusinessDaysUntilNextEarnings() <= 2)
| (BusinessDaysSincePreviousEarnings() <= 2))
    # Filter out stocks that are announced acquisition targets.
not_announced_acq_target = ~IsAnnouncedAcqTarget()
    # Our universe is made up of stocks that have a non-null PreCog signal that was
    # updated in the last day, are not within 2 days of an earnings announcement,
    # are not announced acquisition targets, and are in the Q1500US.
universe = (
Q1500US()
& precog.predicted_five_day_log_return.latest.notnull()
& not_near_earnings_announcement
& not_announced_acq_target
)
# Prediction quality factor.
prediction_quality = PredictionQuality(mask=universe)
# Filter for stocks above the threshold quality.
    quality = prediction_quality > pred_quality_thresh

    latest_prediction = precog.predicted_five_day_log_return.latest

    # Drop outlier predictions, z-score the rest, and rank them.
    non_outliers = latest_prediction.percentile_between(1, 99, mask=quality)
    normalized_return = latest_prediction.zscore(mask=non_outliers)
    normalized_prediction_rank = normalized_return.rank()
## create pipeline
columns = {
'av_rank': normalized_prediction_rank,
}
pipe = Pipeline(columns=columns, screen=universe)
return pipe
result = run_pipeline(make_pipeline(), start_date='2015-02-01', end_date='2017-03-01')
result.head()
Now we can analyze our av_rank factor with Alphalens. To do this, we need to get pricing data using get_pricing.
# All assets that were returned in the pipeline result.
assets = result.index.levels[1].unique()
# We need to get a little more pricing data than the length of our factor so we
# can compare forward returns. We'll tack on another month in this example.
pricing = get_pricing(assets, start_date='2015-02-01', end_date='2017-04-01', fields='open_price')
Then we run a factor tearsheet on our factor. We will analyze 5 quantiles, looking at 1, 5, and 10-day lookahead periods.
If you are interested in learning more about factor tearsheets and how to analyze them, check out the Factor Analysis lecture in the Quantopian lecture series.
import alphalens
factor_data = alphalens.utils.get_clean_factor_and_forward_returns(
factor=result.av_rank,
prices=pricing,
quantiles=5,
)
alphalens.tears.create_full_tear_sheet(
factor_data,
)
From this, it looks like there's a relationship between the top quantile of our factor and positive returns, as well as the bottom quantile and negative returns, over a 1-day horizon.
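To put numbers on that relationship, we can pull mean returns by quantile directly from Alphalens:
# Mean period-wise forward returns by factor quantile.
mean_ret_by_q, std_err_by_q = alphalens.performance.mean_return_by_quantile(factor_data)
mean_ret_by_q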
Let's try to capitalize on this by implementing a strategy that opens long positions in the top quantile of stocks and short positions in the bottom quantile of stocks. Let's invest half of our portfolio long and half short, and equally weight our positions in each direction.
Before moving to the IDE, let's make some small changes to the pipeline we defined earlier. This will make it easier to order stocks based on quantile.
def make_pipeline():
"""
    Dynamically apply the custom factor defined above to
    select candidate stocks from the PreCog universe.
"""
pred_quality_thresh = 0.5
# Filter for stocks that are not within 2 days of an earnings announcement.
not_near_earnings_announcement = ~((BusinessDaysUntilNextEarnings() <= 2)
| (BusinessDaysSincePreviousEarnings() <= 2))
    # Filter out stocks that are announced acquisition targets.
not_announced_acq_target = ~IsAnnouncedAcqTarget()
    # Our universe is made up of stocks that have a non-null PreCog signal that was
    # updated in the last day, are not within 2 days of an earnings announcement,
    # are not announced acquisition targets, and are in the Q1500US.
universe = (
Q1500US()
& precog.predicted_five_day_log_return.latest.notnull()
& not_near_earnings_announcement
& not_announced_acq_target
)
# Prediction quality factor.
prediction_quality = PredictionQuality(mask=universe)
# Filter for stocks above the threshold quality.
    quality = prediction_quality > pred_quality_thresh

    latest_prediction = precog.predicted_five_day_log_return.latest

    # Drop outlier predictions, z-score the rest, and rank them.
    non_outliers = latest_prediction.percentile_between(1, 99, mask=quality)
    normalized_return = latest_prediction.zscore(mask=non_outliers)
    normalized_prediction_rank = normalized_return.rank()

    # Split the ranked stocks into quintiles; we will go long the top
    # quintile and short the bottom quintile.
    prediction_rank_quantiles = normalized_prediction_rank.quantiles(5)
    longs = prediction_rank_quantiles.eq(4)
    shorts = prediction_rank_quantiles.eq(0)
# We will take market beta into consideration when placing orders in our algorithm.
beta = RollingLinearRegressionOfReturns(
target=symbols('SPY'),
returns_length=5,
regression_length=260,
mask=(longs | shorts)
).beta
    # We will use an adjusted beta following Bloomberg's computation,
    # which shrinks the raw regression beta toward 1.0.
    # Ref: https://www.lib.uwo.ca/business/betasbydatabasebloombergdefinitionofbeta.html
bb_beta = (0.66 * beta) + (0.33 * 1.0)
## create pipeline
columns = {
'longs': longs,
'shorts': shorts,
'market_beta': bb_beta,
'sector': Sector(),
}
pipe = Pipeline(columns=columns, screen=(longs | shorts))
return pipe
df = run_pipeline(make_pipeline(), '2015-05-05', '2015-05-05')
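In the IDE, the ordering logic for this scheme could look something like the sketch below; compute_target_weights is a hypothetical helper, not part of the Quantopian API:
def compute_target_weights(pipeline_output):
    # Hypothetical helper: half the portfolio long, half short,
    # equally weighted within each side.
    longs = pipeline_output[pipeline_output['longs']].index
    shorts = pipeline_output[pipeline_output['shorts']].index
    weights = {}
    for asset in longs:
        weights[asset] = 0.5 / len(longs)
    for asset in shorts:
        weights[asset] = -0.5 / len(shorts)
    return weights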
Now that we have a good-looking model, we can backtest it. Backtesting is a good check of how the model survives real-world conditions like slippage and commissions.
The backtest for this strategy is linked below in the forum post.
Let's load our backtest result and run it through a tearsheet using pyfolio.
bt = get_backtest('5947e4e609c7d969f9c2a62a')
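As an aside, if you want to work with the raw returns series, pyfolio can be used on the backtest's daily performance; a sketch assuming the daily_performance attribute on the backtest result:
import pyfolio as pf
# Daily returns from the backtest result (assumed column name).
returns = bt.daily_performance['returns']
pf.create_returns_tear_sheet(returns)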
bt.create_full_tear_sheet()
Some key takeaways from the tearsheet: