Post-Earnings Drift Trading Strategy with Estimize (PEAD)

7/15/2016 Update:

Access to the Estimize dataset will temporarily be shut down starting July 18th, 2016.

We've identified an issue with the way we were processing the Estimize dataset that prevented updates to the data starting in June 2016. All subscribers have been notified and we are taking steps to implement a solution.

For an alternative version using Wall Street Consensus Estimates, please view this thread: https://www.quantopian.com/posts/updated-long-slash-short-earnings-sentiment-trading-strategy-with-the-streets-consesus

The backtest here has been replaced with a version using the Wall Street consensus until an appropriate solution has been implemented.  

This is a simple post-earnings announcement drift (PEAD) trading strategy that attempts to profit off the difference between reported earnings and earnings estimates. Earnings estimates (earnings per share or EPS) are heavily used in both quant and fundamental stock analysis as forward-looking indicators of stock performance, and when a discrepancy occurs between estimates and actually reported earnings, also known as an earnings surprise, stocks tend to drift in either a positive or negative direction (post-earnings announcement drift).

In this strategy, I simply follow the direction of that surprise and hold long/short positions for the following three business days. However, unlike traditional PEAD strategies, I use crowdsourced earnings estimates rather than the Wall Street analyst average. This is because crowdsourced earnings estimates can be more accurate than the Street's average about 65% of the time, as discussed in this whitepaper.
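For concreteness, here's a back-of-the-envelope sketch of the surprise measure the strategy keys on (plain Python for illustration only, not the actual Quantopian factor): the signed percent difference between reported EPS and the consensus estimate.

def percent_surprise(actual_eps, estimated_eps):
    # Signed percent difference between reported and estimated EPS.
    # Positive -> long signal in this strategy, negative -> short signal.
    return (actual_eps - estimated_eps) / estimated_eps

# e.g. reporting $0.52 against a $0.50 consensus is a +4% surprise
print(percent_surprise(0.52, 0.50))  # ~0.04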

While we aren’t yet able to test the Street’s version, check out the strategy below using Estimize’s Crowdsourced Earnings Estimates and let me know what you think!

Strategy Notes

  • Data sets: The strategy uses Estimize's Consensus Estimates dataset and EventVestor's Earnings Calendar dataset.
  • Weights: The weight for each security is determined by the number of longs and shorts held on the current day. For example, with 2 longs and 2 shorts, each long is weighted 50% (1.0 divided by the number of securities on that side) and each short -50%. Positions are rebalanced on a rolling basis at the beginning of each day according to the securities currently held plus those to be ordered (see the sketch after this list).
  • Hedging: [OPTIONAL] You can turn on net dollar exposure hedging with SPY.
  • Days held: Positions are currently held for 3 days; this is easily changed by modifying 'context.days_to_hold'.
  • Percent threshold: Only surprises between 0% and 4% in absolute magnitude are considered a trading signal. These bounds are adjustable via the minimum and maximum threshold variables in context.
  • Earnings dates: All trades are made 1 business day AFTER an earnings announcement, regardless of whether it was a Before Market Open or After Market Close announcement.
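Below is a minimal sketch of the rolling rebalance described in the Weights note (illustrative only, assuming equal weighting within each side rather than the exact production code): every name gets an equal slice of its side, longs positive and shorts negative, recomputed each morning from today's signals plus positions still inside the holding window.

def target_weights(longs, shorts):
    # Return {security: target_percent} for an equal-weighted long/short book.
    weights = {}
    if longs:
        for sec in longs:
            weights[sec] = 1.0 / len(longs)
    if shorts:
        for sec in shorts:
            weights[sec] = -1.0 / len(shorts)
    return weights

# Example: 2 longs and 2 shorts -> each long +50%, each short -50%.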

Webinar: Learn the advantages of crowdsourced estimates and how they can help your trading strategies with Vinesh Jha, CEO of ExtractAlpha and former executive director at PDT Partners, through this recording.

Disclaimer

The material on this website is provided for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation or endorsement for any security or strategy, nor does it constitute an offer to provide investment advisory services by Quantopian. In addition, the material offers no opinion with respect to the suitability of any security or specific investment. No information contained herein should be regarded as a suggestion to engage in or refrain from any investment-related course of action as none of Quantopian nor any of its affiliates is undertaking to provide investment advice, act as an adviser to any plan or entity subject to the Employee Retirement Income Security Act of 1974, as amended, individual retirement account or individual retirement annuity, or give advice in a fiduciary capacity with respect to the materials presented herein. If you are an individual retirement or other investor, contact your financial advisor or other fiduciary unrelated to Quantopian about whether any given investment idea, strategy, product or service described herein may be appropriate for your circumstances. All investments involve risk, including loss of principal. Quantopian makes no guarantees as to the accuracy or completeness of the views expressed in the website. The views are subject to change, and may have become unreliable for various reasons, including changes in market conditions or economic circumstances.

34 responses

long only version

Here is the tear sheet of Seong's backtest. The rolling beta is pretty good. I'm not thrilled with the monthly returns though - they are very uneven.

For those interested in learning more about crowdsourced estimates and Estimize, join us for our webinar on March 1st, 2016 @ 6PM EST.

We'll be joined by Vinesh Jha, CEO of ExtractAlpha, and Leigh Drogen, CEO of Estimize.

Register through this link

Thanks to all who joined the webinar! We had some great questions asked.

You can watch a recording of the webinar here. (https://www.youtube.com/watch?v=lvM5xs4uKIg&feature=youtu.be)

Seong,

Good presentation the other night on the webinar. I'm new to Quantopian and algo trading in general but have been working my way through the site and learning what I can. I've been playing around with the Estimize data and your sample algo for a couple of weeks and had a couple of questions.

After doing some backtesting and looking at the logs, it doesn't appear the 'num_estimates' factor in the screen of your sample is working. It looks like any stock with even a single Estimize estimate is being accepted (assuming it meets the other screen criteria). Am I overlooking something or is that not functioning properly?

I also want to work with the EventVestor Earnings Calendar data to pull in the 'calendar_time' info so I can tell whether an earnings report was before market open or after close, but I'm confused as to how to access that info. It appears that typically with Pipeline you call the dataset such as (for Estimize):

from quantopian.pipeline.data.estimize import consensus_estimize_eps_free as estimize  

and from there you can set your factors according to the table headers found in the Notebook info, like:

estimize_eps = estimize.estimize_eps_final  

But with the EventVestor data, it appears you have to call the factors at import:

from quantopian.pipeline.data.eventvestor import EarningsCalendar  
from quantopian.pipeline.data.eventvestor.factors import (  
    BusinessDaysUntilNextEarnings,  
    BusinessDaysSincePreviousEarnings  
)

If so, how do you know what the factors are called in the dataset (such as "BusinessDaysUntilNextEarnings")? If I wanted to call in 'calendar_time' what would I use and how would I find other information, such as the 'asof_date'?

Apologies if any of this is staring me in the face and I'm missing it.

Scott

Hi Scott,

These are really great questions!

For the first one about the number of estimates, you're right, it looks like it's not being taken into account. I've filed an internal issue about why that filtering isn't working and attached a backtest with that filter removed.

As for the Earnings Calendar, you can currently access the day that the earnings announcement happens with something like:

    # EarningsCalendar.X is the actual date of the announcement  
    # E.g. 9/12/2015  
    pipe.add(EarningsCalendar.next_announcement.latest, 'next')  
    pipe.add(EarningsCalendar.previous_announcement.latest, 'prev')  

The algorithm currently trades 1 full business day after the announcement (regardless of whether it was a before- or after-market announcement). We're working on adding a way to indicate Before/After market, but I don't have a specific timeline for you.

Let me know if any of that is confusing or if you have other questions,

Seong

Thanks, Seong. I didn't realize the Before/After data hadn't been implemented yet. I'll keep an eye out for that and an update on the number of estimates filter.

One other question: how much backtesting data is taken into account for the contest? If we find something interesting using a dataset that doesn't have a lot of history (such as Estimize or Sentdex Sentiment Analysis), how would that affect the backtesting score?

Scott

Hi Scott, as long as the dataset can run a 2-year backtest that ends on the date of submission, you should be OK for the contest. Each of those datasets has enough data for that.

To all those following this thread, we've made a change to the way you access the BusinessDays Factor for earnings announcements:

Before it was:

from quantopian.pipeline.data.eventvestor.factors import (BusinessDaysUntilNextEarnings, BusinessDaysSincePreviousEarnings)

This has changed to:

from quantopian.pipeline.factors.eventvestor import (BusinessDaysUntilNextEarnings, BusinessDaysSincePreviousEarnings)

Please see the original post for the full explanation.
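For anyone updating older code, here is a brief usage sketch under the new namespace (the pipeline wiring is a minimal illustration, not the full algorithm): screen for securities whose most recent earnings announcement was exactly one business day ago, which is the window this strategy trades in.

from quantopian.pipeline import Pipeline
from quantopian.pipeline.factors.eventvestor import BusinessDaysSincePreviousEarnings

def make_earnings_window_pipeline():
    days_since = BusinessDaysSincePreviousEarnings()
    return Pipeline(
        columns={'pe': days_since},
        screen=days_since.eq(1),  # announced exactly 1 business day ago
    )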

Leverage capped at one and run on a shorter timeframe.

On the webinar, Leigh mentioned the idea of using the direction of earnings as a momentum signal for medium- to long-term trades.

Just throwing out ideas - As a pipeline factor, you could compute the average percent change in the revisions leading up to an earnings announcement and use that as an alpha factor.
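Here's a rough sketch of that revisions idea as a CustomFactor (illustrative only; the input column previous_mean is just a placeholder, so swap in whichever field in your dataset tracks the evolving pre-announcement consensus):

import numpy as np
from quantopian.pipeline.factors import CustomFactor
from quantopian.pipeline.data.estimize import ConsensusEstimizeEPS

class AverageRevision(CustomFactor):
    # Average day-over-day percent change in the consensus estimate over the
    # trailing window -- a crude revision-momentum signal.
    inputs = [ConsensusEstimizeEPS.previous_mean]  # placeholder column
    window_length = 20

    def compute(self, today, assets, out, consensus):
        # Daily percent changes down the window, then a NaN-safe average per asset.
        pct_change = np.diff(consensus, axis=0) / consensus[:-1]
        out[:] = np.nanmean(pct_change, axis=0)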

IB has a commission of $1 per trade, right? I added set_commission(commission.PerTrade(cost=1.00)) and ran a few backtests.
The algo works great with $50k+, but below $50k it starts to lose returns, to the point where it's negative with $10k.
Any suggestions on how to leverage this algo with 10k?

Bruno,

One idea is to keep it long-only and use Robinhood. Peter Bakker posted a long-only version of the algorithm towards the beginning of this thread. That may work with Robinhood's zero commission model.

The hard part about that would be waiting T+3 days for cash settlement unless you had Robinhood Instant.

Recently, we realized that the default pipeline dataset did not consistently provide the current quarter's reported earnings. This meant that for certain securities, you weren't guaranteed the proper quarter's data when making the pipeline call. While we mainly saw this problem with the wall_street-based namespaces, the issue has occurred for the estimize namespaces as well. To fix this problem, we've created four new pipeline datasets that will ensure you get next or last quarter's data.

You can find these namespaces through the following import statement:

from quantopian.pipeline.data.estimize import (  
    ConsensusEstimizeEPS,  
    ConsensusWallstreetEPS,  
    ConsensusEstimizeRevenue,  
    ConsensusWallstreetRevenue  
)

The original algorithm has been updated to use these new datasets. It has also been updated to better handle slippage & commissions and to incorporate Quantopian 2 improvements.

When I try to run Peter's long-only version, I get the following error:

runtime exception; import error, no module name factors

for the following line of code:
from quantopian.pipeline.data.eventvestor.factors import (
BusinessDaysUntilNextEarnings,
BusinessDaysSincePreviousEarnings
)

Hi Adam,
Subsequent to Peter's post, we did some refactoring of the namespace for the factors. Instead, it should be:

from quantopian.pipeline.factors.eventvestor import (  
    BusinessDaysUntilNextEarnings,  
    BusinessDaysSincePreviousEarnings  
)

What's different between your algorithm and the main one here? Would you mind posting it here so others could benefit from the troubleshooting?

Mine also uses Accern data. What I want to do is specify in handle_data that if I have not held the stock overnight/1 day, then don't try to sell it. So I tried adding can_trade = context.stocks_held.get(security) >= 1 and then del context.stocks_held[security], but that seems to cause problems.

Is there any way I can make sure not to sell a security that hits a limit or stop unless I've held it for at least a day, without confusing the schedule that sells the stock after 5 days?

I seem to get better results using the Accern WeightedSentimentByVolatility over the last 2 days. I'll have to do some tests on impact score as well, since the Earnings Calendar data has been corrected.

Update
So my logic was wrong on this before. This was set to can_trade = context.stocks_held.get(security) <= 1; it was a copy/paste issue.
I'm running more tests now, but is there any way to speed up the backtesting? It's so slow once I add in Accern. Am I doing something wrong that is causing this?

So I set can_trade to how it shows in the update above and got the issue again. I also have del context.stocks_held[security] uncommented in the handle_data section, so it seems to be related to that.

There was a runtime error.
KeyError: Equity(7254, symbol=u'SWY', asset_name=u'SAFEWAY INC', exchange=u'NEW YORK STOCK EXCHANGE', start_date=Timestamp('1993-01-04 00:00:00+0000', tz='UTC'), end_date=Timestamp('2015-01-29 00:00:00+0000', tz='UTC'), first_traded=None, auto_close_date=Timestamp('2015-02-03 00:00:00+0000', tz='UTC'))
... USER ALGORITHM:173, in order_positionsGo to IDE
context.stocks_held[security] += 1

Here is my latest version, with better returns and PvR added. However, the PvR code is showing shorts happening, negative cash, and leverage above 1. I'm not sure why this is happening. Any help would be greatly appreciated.

Steven,

Thanks for sharing. It's really interesting how you combined Accern's new sentiment data with Estimize's earnings estimates. I want to help you troubleshoot, but from cloning your algorithm, it looks like you aren't experiencing KeyErrors. Is this still a problem for you? Or is the main concern now the short positions that the PvR code is showing?

As for performance, I think there are a few ways to improve this but would like to focus on the first problem at hand.

I'd first like to make sure it doesn't short, doesn't go to negative cash, and stays below 0.95 leverage, but I'd also love to see how to make this better, and I look forward to seeing what PvR you and others are able to get.

Also, I think I'm not getting the error anymore because I commented out the del call after I sell based on a limit or stop in the handle_data area.

Steven,

Here are a few improvements that I've found. It seems that in your pipeline construction, you could combine all the screens in the mask for top_sentiment like so:

def make_pipeline(context):  
    # Create our pipeline  
    pipe = Pipeline()  
    # Instantiating our factors  
    factor = PercentSurprise()  
    weighted_sentiment = WeightedSentimentByVolatility()  
    # Screen out penny stocks and low liquidity securities.  
    dollar_volume = AverageDollarVolume(window_length=20)  
    is_liquid = dollar_volume > 10**7

    # Filter down to stocks in the top/bottom  
    longs = (factor >= context.min_surprise) & (factor <= context.max_surprise)

    # Add long/shorts to the pipeline  
    pipe.add(longs, "longs")  
    pipe.add(BusinessDaysSincePreviousEarnings(), 'pe')  
    # Set our pipeline screens  
    # Filter down stocks using sentiment  
    base_universe = is_liquid & longs & (weighted_sentiment != 0)  
    top_sentiment = weighted_sentiment.percentile_between(85, 100, mask=(base_universe))  
    pipe.set_screen(top_sentiment)  
    return pipe  

Also, in order_positions, it looks like the exit logic I had used before wasn't performing as intended. This seems closer to what you'd want to use:

    # Check if we've exited our positions and if we haven't, exit the remaining securities  
    # that we have left  
    for security in port:  
        if data.can_trade(security):  
            if context.stocks_held.get(security) is not None:  
                context.stocks_held[security] += 1  
                if context.stocks_held[security] >= context.days_to_hold:  
                    order_target_percent(security, 0)  
                    del context.stocks_held[security]  
            # If we've deleted it but it still hasn't been exited. Try exiting again  
            else:  
                log.info("Haven't yet exited %s, ordering again" % security.symbol)  
                order_target_percent(security, 0)  

It also seems like running handle_data every minute is going to slow down your algorithm a lot and will lead to the short positions that you're seeing: right now there isn't a check for open orders, so you could place two subsequent orders for the same security when it meets your stop conditions. You can fix that by adding a quick if not get_open_orders(security) check. For speed improvements, I'd suggest moving this logic to a schedule_function method.

Here's the full example:

"""
This is a PEAD strategy based off Estimize's earnings estimates. Estimize  
is a service that aggregates financial estimates from independent, buy-side,  
and sell-side analysts as well as students and professors. You can run this  
algorithm yourself by getting the free sample version of Estimize's consensus  
dataset and EventVestor's Earnings Calendar Dataset at:

- https://www.quantopian.com/data/eventvestor/earnings_calendar  
- https://www.quantopian.com/data/estimize/revisions

Most of the variables are meant for you to play around with:  
1. context.days_to_hold: defines the number of days you want to hold before exiting a position  
2. context.min/max_surprise: defines the min/max % surprise you want before trading on a signal  
"""

import numpy as np

from quantopian.algorithm import attach_pipeline, pipeline_output  
from quantopian.pipeline import Pipeline  
from quantopian.pipeline.data.builtin import USEquityPricing  
from quantopian.pipeline.factors import CustomFactor, AverageDollarVolume  
from quantopian.pipeline.classifiers.morningstar import Sector  
from quantopian.pipeline.data.accern import alphaone as alphaone

from quantopian.pipeline.data.estimize import (  
    ConsensusEstimizeEPS,  
    ConsensusWallstreetEPS,  
    ConsensusEstimizeRevenue,  
    ConsensusWallstreetRevenue  
)

# The sample and full version is found through the same namespace  
# https://www.quantopian.com/data/eventvestor/earnings_calendar  
# Sample date ranges: 01 Jan 2007 - 10 Feb 2014  
from quantopian.pipeline.data.eventvestor import EarningsCalendar  
from quantopian.pipeline.factors.eventvestor import (  
    BusinessDaysUntilNextEarnings,  
    BusinessDaysSincePreviousEarnings  
)

# Custom factor that computes the percent earnings surprise: the difference  
# between the reported EPS and the Estimize consensus, as a fraction of the consensus  
class PercentSurprise(CustomFactor):  
    window_length = 1  
    inputs = [ConsensusEstimizeEPS.previous_actual_value,  
              ConsensusEstimizeEPS.previous_mean]

    # Compute percent surprise: (actual - estimate) / estimate  
    def compute(self, today, assets, out, actual_eps, estimize_eps):  
        out[:] = (actual_eps[-1] - estimize_eps[-1]) / estimize_eps[-1]  
"""       
class DailySentimentByImpactScore(CustomFactor):  
    # Economic Hypothesis: Accern reports both an `impact score`  
    # and `article sentiment`. The `impact score` is used to measure  
    # the likelihood that a security's price changes by more than 1%  
    # in the following day. The `article sentiment` is a quantified daily  
    # measure of news & blog sentiment about a given security. This combined  
    # measure of `impact score` and `article sentiment` may hold information  
    # about price changes in the following day.  
    inputs = [alphaone.article_sentiment, alphaone.impact_score]  
    window_length = 1

    def compute(self, today, assets, out, sentiment, impact_score):  
        out[:] = sentiment * impact_score  
"""       
class WeightedSentimentByVolatility(CustomFactor):  
    # Economic Hypothesis: Sentiment volatility can be an indicator that  
    # public news is changing rapidly about a given security. So securities  
    # with a high level of sentiment volatility may indicate a change in  
    # momentum for that stock's price.  
    inputs = [alphaone.article_sentiment]  
    window_length = 2

    def compute(self, today, assets, out, sentiment):  
        out[:] = np.nanstd(sentiment, axis=0) * np.nanmean(sentiment, axis=0)  

def make_pipeline(context):  
    # Create our pipeline  
    pipe = Pipeline()  
    # Instantiating our factors  
    factor = PercentSurprise()  
    weighted_sentiment = WeightedSentimentByVolatility()  
    # Screen out penny stocks and low liquidity securities.  
    dollar_volume = AverageDollarVolume(window_length=20)  
    is_liquid = dollar_volume > 10**7

    # Filter down to stocks in the top/bottom  
    longs = (factor >= context.min_surprise) & (factor <= context.max_surprise)

    # Add long/shorts to the pipeline  
    pipe.add(longs, "longs")  
    pipe.add(BusinessDaysSincePreviousEarnings(), 'pe')  
    # Set our pipeline screens  
    # Filter down stocks using sentiment  
    base_universe = is_liquid & longs & (weighted_sentiment != 0)  
    top_sentiment = weighted_sentiment.percentile_between(85, 100, mask=(base_universe))  
    pipe.set_screen(top_sentiment)  
    return pipe  
def initialize(context):  
    #: Set commissions and slippage to 0 to determine pure alpha  
    set_commission(commission.PerShare(cost=0, min_trade_cost=0))  
    set_slippage(slippage.FixedSlippage(spread=0))  
    set_long_only()

    #: Declaring the number of days to hold; change this to what you want  
    context.days_to_hold = 5  
    #: Declares which stocks we currently hold and how many days we've held them: dict[stock: days_held]  
    context.stocks_held = {}  
    context.stocks_exited = {}

    #: Declares the minimum magnitude of percent surprise  
    context.min_surprise = .00  
    context.max_surprise = .04  
    # Make our pipeline  
    attach_pipeline(make_pipeline(context), 'estimize')

    # Log our positions 30 minutes before market close  
    schedule_function(func=log_positions,  
                      date_rule=date_rules.every_day(),  
                      time_rule=time_rules.market_close(minutes=30))  
    # Order our positions  
    schedule_function(func=order_positions,  
                      date_rule=date_rules.every_day(),  
                      time_rule=time_rules.market_open())  
    # Check our stop/limit exits an hour after the open  
    schedule_function(func=extra_orders,  
                      date_rule=date_rules.every_day(),  
                      time_rule=time_rules.market_open(minutes=60))

def before_trading_start(context, data):  
    # Screen for securities that only have an earnings release  
    # 1 business day previous and separate out the earnings surprises into  
    # positive and negative  
    results = pipeline_output('estimize')  
    results = results[results['pe'] == 1]  
    assets_in_universe = results.index  
    context.positive_surprise = assets_in_universe

def log_positions(context, data):  
    #: Get all positions  
    if len(context.portfolio.positions) > 0:  
        all_positions = "Current positions for %s : " % (str(get_datetime()))  
        for pos in context.portfolio.positions:  
            if context.portfolio.positions[pos].amount != 0:  
                all_positions += "%s at %s shares, " % (pos.symbol, context.portfolio.positions[pos].amount)  
        log.info(all_positions)  
def order_positions(context, data):  
    """  
    Main ordering conditions to always order an equal percentage in each position  
    so it does a rolling rebalance by looking at the stocks to order today and the stocks  
    we currently hold in our portfolio.  
    """  
    port = context.portfolio.positions

    # Check if we've exited our positions and if we haven't, exit the remaining securities  
    # that we have left  
    for security in port:  
        if data.can_trade(security):  
            if context.stocks_held.get(security) is not None:  
                context.stocks_held[security] += 1  
                if context.stocks_held[security] >= context.days_to_hold:  
                    order_target_percent(security, 0)  
                    del context.stocks_held[security]  
            # If we've deleted it but it still hasn't been exited. Try exiting again  
            else:  
                log.info("Haven't yet exited %s, ordering again" % security.symbol)  
                order_target_percent(security, 0)  
    # Check our current positions  
    current_positive_pos = [pos for pos in port if (port[pos].amount > 0 and pos in context.stocks_held)]  
    positive_stocks = context.positive_surprise.tolist() + current_positive_pos  
    # Rebalance our positive surprise securities (existing + new)  
    for security in positive_stocks:  
        can_trade = context.stocks_held.get(security) <= context.days_to_hold or \  
                    context.stocks_held.get(security) is None  
        if data.can_trade(security) and can_trade:  
            order_target_percent(security, 0.95 / len(positive_stocks))  
            if context.stocks_held.get(security) is None:  
                context.stocks_held[security] = 0  
def extra_orders(context, data):  
    for security in context.portfolio.positions:  
        can_trade = context.stocks_held.get(security) >= 1  
        if data.can_trade(security) and can_trade and not get_open_orders(security):  
            current_position = context.portfolio.positions[security].amount  
            cost_basis = context.portfolio.positions[security].cost_basis  
            price = data.current(security, 'price')  
            limit = cost_basis*1.06  
            stop = cost_basis*0.92  
            if price >= limit and current_position > 0:  
                order_target_percent(security, 0)  
                log.info( str(security) + ' Sold for Profit')  
                del context.stocks_held[security]  
            if price <= stop and current_position > 0:  
                order_target_percent(security, 0)  
                log.info( str(security) + ' Sold for Loss')  
                del context.stocks_held[security]  

I'm not sure how it will affect results, but I think it'll provide a good starting point for you to improve on.

Steven,

Inspired by your idea, I've taken a stab at creating my own version using Accern's new sentiment to trade on earnings announcements. This simply uses the previous day's article sentiment to hold positions. Check it out here: https://www.quantopian.com/posts/long-slash-short-earnings-sentiment-trading-strategy

Very nice!! I'm still playing around with the code you provided. I'm going to clone your other strategy and play with that as well. Thanks for all the help on this and great work on the new strategy. I'll let you know when I get the one above working perfectly or if I come across additional issues.

There seem to be issues (in about half the cases) where the data is not correctly synced to the appropriate date window. This happens when the Estimize data is missing an entry; the algorithm then uses the prior quarter's data. Is it possible to check that the "release_date" field is within ~2 days or so of the current day to avoid this issue?

Patrick,

This is a different version of this algorithm that uses news sentiment, and it also checks that the release_date field falls within 7 days of the earnings announcement. The factor I'm using is this DaysSinceRelease custom factor:

class DaysSinceRelease(CustomFactor):  
    # Number of days between the Estimize release date and the previous earnings announcement  
    window_length = 1  
    inputs = [EarningsCalendar.previous_announcement,  
              ConsensusEstimizeEPS.previous_release_date]  
    def compute(self, today, assets, out,  
                earnings_announcement, estimize_release):  
        days = estimize_release - earnings_announcement  
        out[:] = abs(days.astype('timedelta64[D]').astype(int))  
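
And a hedged sketch of how that factor could be wired into the pipeline (assuming the PercentSurprise factor and pipeline imports from the full example above; the 7-day cutoff simply drops rows whose Estimize release is stale relative to the announcement):

def make_pipeline(context):  
    surprise = PercentSurprise()  
    days_since_release = DaysSinceRelease()  
    # Keep names whose surprise is inside the configured band...  
    longs = (surprise >= context.min_surprise) & (surprise <= context.max_surprise)  
    # ...and whose Estimize release date is within 7 days of the announcement,  
    # so stale prior-quarter rows don't generate signals.  
    fresh_data = days_since_release <= 7  
    return Pipeline(  
        columns={'longs': longs,  
                 'pe': BusinessDaysSincePreviousEarnings()},  
        screen=longs & fresh_data,  
    )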

Hi guys,

Any update on the "Before/After market" indicator? Currently there is no way to get Before/After market for earnings reports in pipelines.

Regards,
-- Arash

Access to the Estimize dataset will temporarily be shut down starting today.

We've identified an issue with the Estimize dataset that prevented updates to the data starting in June 2016. All subscribers have been notified and we are taking steps to implement a solution.

For an alternative version using Wall Street Consensus Estimates, please view this thread: https://www.quantopian.com/posts/updated-long-slash-short-earnings-sentiment-trading-strategy-with-the-streets-consesus

Any updates on this dataset? My only profitable strategy relies on it.

Hey, I am trying to run the samples here and get an error for the Estimize imports. Are the libraries still supported, or were they discontinued in 2016? Thanks for the help.

Hi Shiva and William,

We are actively working on fixing this dataset. It's up on our staging environment and we are doing some testing now before pushing it to our production environment. I'll post here when we ship the fix.

Hi,

Wondering whether this has now been resolved?