In this notebook, we'll take a look at PsychSignal's StockTwits Trader Mood (All Fields) dataset, available on the Quantopian Store. This dataset spans 2009 through the current day, and documents the mood of traders based on their messages.
There are two ways to access the data and you'll find both of them listed below. Just click on the section you'd like to read through.
One key caveat: we limit the number of results returned from any given expression to 10,000 to protect against runaway memory usage. To be clear, you have access to all the data server side. We are limiting the size of the responses back from Blaze.
There is a free version of this dataset as well as a paid one. The free sample includes data until 2 months prior to the current date.
To access the most up-to-date values for this dataset when trading a live algorithm (as with other partner sets), you need to purchase access to the full set.
With preamble in place, let's get started:
Partner datasets are available on Quantopian Research through an API service known as Blaze. Blaze provides the Quantopian user with a convenient interface to access very large datasets, in an interactive, generic manner.
Blaze provides an important function for accessing these datasets. Some of these sets contain many millions of records; bringing all of that data directly into Quantopian Research is simply not viable. Blaze lets us provide a simple querying interface while shifting the computational burden to the server side.
It is common to use Blaze to reduce the size of your dataset, convert it to Pandas, and then use Pandas for further computation, manipulation, and visualization.
Once you've limited the size of your Blaze object, you can convert it to a Pandas DataFrame using:
> from odo import odo
> odo(expr, pandas.DataFrame)
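For example, a minimal sketch of that workflow (using the stocktwits_free import from the cell below, with sid 24 for AAPL) might reduce the expression server side before converting:

> from odo import odo
> import pandas as pd
> from quantopian.interactive.data.psychsignal import stocktwits_free as dataset
>
> # Reduce server side: keep only the rows and columns we need
> reduced = dataset[dataset.sid == 24][['asof_date', 'bull_scored_messages', 'bear_scored_messages']]
>
> # Only the reduced result is materialized as a Pandas DataFrame
> df = odo(reduced, pd.DataFrame)
> df.head()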
You can work through this interactive section, or head straight to the Pipeline Overview section of this notebook.
# import the free sample of the dataset
from quantopian.interactive.data.psychsignal import stocktwits_free as dataset
# or if you want to import the full dataset, use:
# from quantopian.interactive.data.psychsignal import stocktwits
# import data operations
from odo import odo
# import other libraries we will use
import pandas as pd
import matplotlib.pyplot as plt
# Let's use Blaze's dshape() to understand the structure of the data
dataset.dshape
# And how many rows are there?
# N.B. we're using a Blaze function to do this, not len()
dataset.count()
# Let's see what the data looks like. We'll grab the first three rows.
dataset[:3]
There are two versions of each dataset from PsychSignal: a simple version with fewer fields and a full version with more fields. This is the basic dataset with fewer fields.
Let's go over the columns:
We've done much of the data processing for you. Fields like timestamp and sid are standardized across all our Store Datasets, so the datasets are easy to combine; the sid is the same identifier used across all our equity databases.
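As a quick illustration (a sketch that assumes the Research environment's built-in symbols() helper), the same sid can be looked up from a ticker and then used to filter this dataset or any other:

> # Look up the Equity object for a ticker; its .sid is the shared identifier
> aapl_sid = symbols('AAPL').sid   # 24 on Quantopian
>
> # The same sid filters this dataset (and any other Store Dataset)
> aapl_rows = dataset[dataset.sid == aapl_sid]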
We can select columns and rows with ease. Below, we'll fetch all rows for Apple (sid 24) and explore the scores a bit with a chart.
# Filtering for AAPL
aapl = dataset[dataset.sid == 24]
aapl_df = odo(aapl.sort('asof_date'), pd.DataFrame)
plt.plot(aapl_df.asof_date, aapl_df.bull_scored_messages, marker='.', linestyle='None', color='r')
plt.plot(aapl_df.asof_date, aapl_df.bull_scored_messages.rolling(30).mean())
plt.xlabel("As Of Date (asof_date)")
plt.ylabel("Count of Bull Messages")
plt.title("Count of Bullish Messages for AAPL")
plt.legend(["Bull Messages - Single Day", "30 Day Rolling Average"], loc=2)
The only method for accessing partner data within algorithms running on Quantopian is via the Pipeline API. Different datasets work differently, but in the case of this one, you can add it to your pipeline as follows:
Import the data set here
> from quantopian.pipeline.data.psychsignal import (
> stocktwits_free
> )
Then in initialize() you could do something simple like adding the raw value of one of the fields to your pipeline:
> pipe.add(stocktwits_free.total_scanned_messages.latest, 'total_scanned_messages')
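Putting those two pieces together, a minimal initialize() might look like the sketch below (the full backtester template appears at the end of this notebook; the pipeline name 'psychsignal_pipeline' is just an example):

> from quantopian.algorithm import attach_pipeline
> from quantopian.pipeline import Pipeline
> from quantopian.pipeline.data.psychsignal import stocktwits_free
>
> def initialize(context):
>     pipe = Pipeline()
>     pipe.add(stocktwits_free.total_scanned_messages.latest, 'total_scanned_messages')
>     attach_pipeline(pipe, 'psychsignal_pipeline')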
# Import necessary Pipeline modules
from quantopian.pipeline import Pipeline
from quantopian.research import run_pipeline
from quantopian.pipeline.factors import AverageDollarVolume
# For use in your algorithms
# Using the full paid dataset in your pipeline algo
# from quantopian.pipeline.data.psychsignal import stocktwits
# Using the free sample in your pipeline algo
from quantopian.pipeline.data.psychsignal import stocktwits_free
Now that we've imported the data, let's take a look at which fields are available for each dataset.
You'll find the dataset, the available fields, and the datatypes for each of those fields.
print "Here are the list of available fields per dataset:"
print "---------------------------------------------------\n"
def _print_fields(dataset):
    print "Dataset: %s\n" % dataset.__name__
    print "Fields:"
    for field in list(dataset.columns):
        print "%s - %s" % (field.name, field.dtype)
    print "\n"

for data in (stocktwits_free,):
    _print_fields(data)
    print "---------------------------------------------------\n"
Now that we know what fields we have access to, let's see what this data looks like when we run it through Pipeline.
This is constructed the same way as you would in the backtester. For more information on using Pipeline in Research view this thread: https://www.quantopian.com/posts/pipeline-in-research-build-test-and-visualize-your-factors-and-filters
# Let's see what this data looks like when we run it through Pipeline
# This is constructed the same way as you would in the backtester. For more information
# on using Pipeline in Research view this thread:
# https://www.quantopian.com/posts/pipeline-in-research-build-test-and-visualize-your-factors-and-filters
pipe = Pipeline()
pipe.add(stocktwits_free.total_scanned_messages.latest,
         'total_scanned_messages')
pipe.add(stocktwits_free.bear_scored_messages.latest,
         'bear_scored_messages')
pipe.add(stocktwits_free.bull_scored_messages.latest,
         'bull_scored_messages')
pipe.add(stocktwits_free.bull_bear_msg_ratio.latest,
         'bull_bear_msg_ratio')
# Setting some basic liquidity screens (just for good habit)
dollar_volume = AverageDollarVolume(window_length=20)
top_1000_most_liquid = dollar_volume.rank(ascending=False) < 1000
pipe.set_screen(top_1000_most_liquid &
                (stocktwits_free.total_scanned_messages.latest > 20))
# The show_graph() method of pipeline objects produces a graph to show how it is being calculated.
pipe.show_graph(format='png')
# run_pipeline will show the output of your pipeline
pipe_output = run_pipeline(pipe, start_date='2013-11-01', end_date='2013-11-25')
pipe_output
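run_pipeline returns a DataFrame indexed by date and security, so ordinary Pandas operations apply from here. A sketch of two common next steps (the exact aggregations are just examples):

> # Average message counts per pipeline date across the screened universe
> daily_means = pipe_output.groupby(level=0).mean()
>
> # The most talked-about names on the final date in the output
> last_date = pipe_output.index.get_level_values(0)[-1]
> pipe_output.loc[last_date].sort_values('total_scanned_messages', ascending=False).head()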
Taking what we've seen from above, let's see how we'd move that into the backtester.
# This section is only importable in the backtester
from quantopian.algorithm import attach_pipeline, pipeline_output
# General pipeline imports
from quantopian.pipeline import Pipeline
from quantopian.pipeline.factors import AverageDollarVolume
# Import the datasets available
# For use in your algorithms
# Using the full paid dataset in your pipeline algo
# from quantopian.pipeline.data.psychsignal import stocktwits
# Using the free sample in your pipeline algo
from quantopian.pipeline.data.psychsignal import stocktwits_free
def make_pipeline():
    # Create our pipeline
    pipe = Pipeline()

    # Screen out penny stocks and low liquidity securities.
    dollar_volume = AverageDollarVolume(window_length=20)
    is_liquid = dollar_volume.rank(ascending=False) < 1000

    # Create the mask that we will use for our percentile methods.
    base_universe = (is_liquid)

    # Add pipeline factors
    pipe.add(stocktwits_free.total_scanned_messages.latest,
             'total_scanned_messages')
    pipe.add(stocktwits_free.bear_scored_messages.latest,
             'bear_scored_messages')
    pipe.add(stocktwits_free.bull_scored_messages.latest,
             'bull_scored_messages')
    pipe.add(stocktwits_free.bull_bear_msg_ratio.latest,
             'bull_bear_msg_ratio')

    # Set our pipeline screens
    pipe.set_screen(is_liquid)
    return pipe

def initialize(context):
    attach_pipeline(make_pipeline(), "pipeline")

def before_trading_start(context, data):
    results = pipeline_output('pipeline')
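From there you might, for example, keep the most bullish names each morning for use later in the session. This is only a sketch of one way to consume the output; the ranking rule and the context.most_bullish attribute are illustrative, not part of the dataset API:

> def before_trading_start(context, data):
>     results = pipeline_output('pipeline')
>
>     # Illustrative: rank the screened universe by bullish message count
>     # and remember the top names for use in scheduled functions.
>     context.most_bullish = results.sort_values(
>         'bull_scored_messages', ascending=False).head(50).index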
Now you can take that and begin to use it as a building block for your algorithms. For more examples of how to do that, you can visit our data pipeline factor library.