It has been widely reported that companies with women in senior management and on the board of directors perform better than companies without. Credit Suisse’s Gender 3000 report looks at gender diversity in 3000 companies across 40 countries. According to this report, at the end of 2013, women accounted for 12.9% of top management (CEOs and directors reporting to the CEO) and 12.7% of boards had gender diversity. Additionally, “Companies with more than one woman on the board have returned a compound 3.7% a year over those that have none since 2005.”
These kinds of reports quickly lead to the question, “What would happen if you invested in companies with female CEOs?”
The data backing this research was provided by Catalyst’s (http://www.catalyst.org/) Bottom Line Research Project (http://www.catalyst.org/knowledge/bottom-line-0).
#Import the libraries needed for the analysis.
import pandas as pd
import numpy as np
from datetime import datetime, timedelta
import matplotlib.pyplot as pyplot
import pytz
from pytz import timezone
from zipline import TradingAlgorithm
from zipline.api import (order_target_percent, record, symbol, history, add_history, get_datetime,
get_open_orders, get_order, order_target_value, order, order_target, sid)
from zipline.finance.slippage import FixedSlippage
#Import my csv and rename some of the columns
CEOs = local_csv('FemaleCEOs_v6.csv')
CEOs.rename(columns={'SID':'Ticker', 'Start Date':'start_date', 'End Date':'end_date'}, inplace=True)
#Below you see some basic information and the first 20 rows of this dataframe.
print "Number of CEOs = %s" % len(CEOs)
print "Number of Companies = %s" % CEOs['Ticker'].nunique()
CEOs[0:20]
CEOs['year_started'] = pd.DatetimeIndex(CEOs['start_date']).year
CEOs['year_ended'] = pd.DatetimeIndex(CEOs['end_date']).year
CEOs['year_started'].value_counts(sort=False).plot(kind='bar')
pyplot.grid(b=None, which='major', axis='both')
pyplot.box(on=None)
# First I need to convert the date values in the csv to datetime objects in UTC timezone.
CEOs['start_date'] = CEOs['start_date'].apply(lambda row: pd.to_datetime(str(row), utc=True))
CEOs['end_date'] = CEOs['end_date'].apply(lambda row: pd.to_datetime(str(row), utc=True))
# Then I want to check if any of the dates are weekends.
# If they are a weekend, I move them to the following Monday.
def check_date(row):
    week_day = row.isoweekday()
    if week_day == 6:
        row = row + timedelta(days=2)
    elif week_day == 7:
        row = row + timedelta(days=1)
    return row
CEOs['start_date'] = CEOs['start_date'].apply(check_date)
CEOs['end_date'] = CEOs['end_date'].apply(check_date)
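As a quick standalone sanity check of this weekend-shift rule (the function is re-declared here so the snippet runs on its own, with hand-picked dates): January 4, 2014 was a Saturday and January 5 a Sunday, so both should land on Monday, January 6.

```python
import pandas as pd
from datetime import timedelta

def check_date(row):
    # Shift Saturdays forward 2 days and Sundays forward 1 day,
    # so both land on the following Monday.
    week_day = row.isoweekday()
    if week_day == 6:
        row = row + timedelta(days=2)
    elif week_day == 7:
        row = row + timedelta(days=1)
    return row

saturday = pd.Timestamp("2014-01-04", tz="UTC")  # isoweekday() == 6
sunday = pd.Timestamp("2014-01-05", tz="UTC")    # isoweekday() == 7
monday = pd.Timestamp("2014-01-06", tz="UTC")    # already a weekday

assert check_date(saturday) == monday
assert check_date(sunday) == monday
assert check_date(monday) == monday  # weekdays pass through unchanged
```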
# We need to deal with the dates that are outside of our pricing data range
# For CEOs who started prior to 01/02/2002, I have changed their start date to 01/02/2002.
# I also changed any future-dated end dates to 12/1/2014, just to be safe.
def change_date(row):
    start_date = row['start_date']
    end_date = row['end_date']
    # These checks are independent: a CEO can both start before 2002
    # and still be in place after 2014, so don't chain them with elif.
    if start_date < pd.to_datetime("2002-01-02", utc=True):
        row['start_date'] = pd.to_datetime("2002-01-02", utc=True)
    if end_date > pd.to_datetime("2015-01-01", utc=True):
        row['end_date'] = pd.to_datetime("2014-12-01", utc=True)
    return row
CEOs = CEOs.apply(change_date, axis=1)
# I then add a new column called SID, which is the Security Identifier.
# Since ticker symbols are not unique across all time, the SID ensures we have the right company.
# I use the ticker and the start date to look up the security object.
def get_SID(row):
    temp_ticker = row['Ticker']
    start_date = row['start_date'].tz_localize('UTC')
    row['SID'] = symbols(temp_ticker, start_date)
    return row
CEOs = CEOs.apply(get_SID, axis=1)
CEOs.sort(columns='start_date')
# I set the start and end date I want my algo to run for
start_algo = '2002-01-01'
end_algo = '2014-12-31'
# I make a series out of just the SIDs.
SIDs = CEOs.SID
# Then call get_pricing on the series of SIDs and store the results in a new dataframe called prices.
data = get_pricing(
SIDs,
start_date= start_algo,
end_date= end_algo,
fields ='close_price',
handle_missing='ignore'
)
from pandas.tseries.offsets import YearBegin
CEOs['year_ended'] = pd.DatetimeIndex(CEOs['end_date']).year
CEOs['year_started'] = pd.DatetimeIndex(CEOs['start_date']).year
counts = pd.Series(index=pd.date_range('2002-01-01', '2015-01-01', freq=YearBegin(1)))
for year in counts.index:
counts[year] = len(CEOs[(CEOs.start_date <= year) & (CEOs.end_date >= year)])
counts.plot(kind = 'bar')
pyplot.grid(b=None, which='major', axis='both')
pyplot.box(on=None)
The algo as written below buys when the CEO comes into the position and sells when she leaves. It rebalances based on the number of stocks in my portfolio: when I own one stock, it is 100% of my portfolio; when I own two stocks, each is 50%. As the number of stocks in my portfolio changes, the target weight of each stock changes too.
A future change would be to rebalance monthly, so that dividends are taken into consideration when they are applied, and not only the next time I buy or sell a stock. In that sense, the algo as written is actually underperforming where it should be.
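The equal-weight rule described above can be sketched in isolation (a hypothetical helper with made-up numbers, outside of zipline):

```python
# Equal-weight rebalance: each held stock targets portfolio_value / num_stocks.
def target_values(portfolio_value, stocks):
    # Returns the dollar value each position should be rebalanced to.
    if not stocks:
        return {}
    per_stock = portfolio_value / float(len(stocks))
    return {stock: per_stock for stock in stocks}

# With one stock it takes 100% of the portfolio...
one = target_values(100000.0, ['A'])        # {'A': 100000.0}
# ...with two stocks, 50% each.
two = target_values(100000.0, ['A', 'B'])   # {'A': 50000.0, 'B': 50000.0}
```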
"""
This is where I initialize my algorithm
"""
from zipline.api import order
from zipline.finance.slippage import FixedSlippage
def initialize(context):
    # Load the CEO data and the lists tracking held stocks as global variables.
    context.CEOs = CEOs
    context.current_stocks = []
    context.stocks_to_order_today = []
    context.stocks_to_sell_today = []
    context.set_slippage(FixedSlippage(spread=0))
"""
handle_data is the function that runs every minute (or day), looking to make trades
"""
from zipline.api import order
def handle_data(context, data):
    #: Set my order and sell lists to empty at the start of any day.
    context.stocks_to_order_today = []
    context.stocks_to_sell_today = []
    # Get today's date.
    today = get_datetime()
    # Get just the companies whose start_date (or end_date) is today.
    context.stocks_to_order_today = context.CEOs.SID[context.CEOs.start_date == today].tolist()
    context.stocks_to_sell_today = context.CEOs.SID[context.CEOs.end_date == today].tolist()
    context.stocks_to_sell_today = [s for s in context.stocks_to_sell_today if s is not None]
    context.stocks_to_order_today = [s for s in context.stocks_to_order_today if s is not None]
    # If there are stocks that need to be bought or sold today
    if len(context.stocks_to_order_today) > 0 or len(context.stocks_to_sell_today) > 0:
        # First sell any that need to be sold, and remove them from current_stocks.
        for stock in context.stocks_to_sell_today:
            if stock in data:
                if stock in context.current_stocks:
                    order_target(stock, 0)
                    context.current_stocks.remove(stock)
                    #print "Selling %s" % stock
        # Then add any I am buying to current_stocks.
        for stock in context.stocks_to_order_today:
            context.current_stocks.append(stock)
        # Then rebalance the portfolio so I have an equal amount of each stock in current_stocks.
        for stock in context.current_stocks:
            if stock in data:
                # Calculate the value to buy.
                portfolio_value = context.portfolio.portfolio_value
                num_stocks = len(context.current_stocks)
                value_to_buy = portfolio_value / num_stocks
                #print "Buying and/or rebalancing %s at value = %s" % (stock, value_to_buy)
                order_target_value(stock, value_to_buy)
"""
This cell gets the historical pricing data for all the SIDs in my universe,
then runs the algo defined above and plots the result.
"""
# I set the start and end date I want my algo to run for
start_algo = '2002-01-01'
end_algo = '2014-12-31'
# I make a series out of just the SIDs.
SIDs = CEOs.SID
# Then call get_pricing on the series of SIDs and store the results in a new dataframe called prices.
data = get_pricing(
SIDs,
start_date= start_algo,
end_date= end_algo,
fields ='close_price',
handle_missing='ignore'
)
#: Here I'm defining the algo that I have above so I can run with a new graphing method
my_algo = TradingAlgorithm(
initialize=initialize,
handle_data=handle_data
)
#: Create a figure to plot on the same graph
fig = pyplot.figure()
ax1 = fig.add_subplot(211)
#: Create our plotting algorithm
def my_algo_analyze(context, perf):
    perf.portfolio_value.plot(ax=ax1, label="Fortune 1000 Women-Led Companies")
#: Insert our analyze methods
my_algo._analyze = my_algo_analyze
# Run algorithms
returns = my_algo.run(data)
#: Plot the graph
ax1.set_ylabel('portfolio value in $', fontsize=20)
ax1.set_title("Cumulative Return", fontsize=20)
ax1.legend(loc='best')
fig.tight_layout()
pyplot.show()
To get a benchmark, I use get_backtest, which pulls the full results of a backtest from the Quantopian IDE. In this case, the backtested algorithm does nothing other than set a benchmark, so all the work of computing the benchmark returns has already been done for me.
benchmark_bt = get_backtest('54ef94a65457f30f0b4db137')
I plot the cumulative returns of this benchmark against those of my algo to compare their relative performance.
#: Create a figure to plot on the same graph
fig = pyplot.figure(figsize=(20,22))
ax1 = fig.add_subplot(211)
#: Plot the graph
cum_returns = pd.Series(my_algo.perf_tracker.cumulative_risk_metrics.algorithm_cumulative_returns[:len(benchmark_bt.risk.index)], index=benchmark_bt.risk.index)
benchmark_bt.risk.benchmark_period_return.plot(ax=ax1)
cum_returns.plot(ax=ax1)
ax1.set_ylabel('% Cumulative Return', fontsize=20)
ax1.set_title("Cumulative Return", fontsize=20)
ax1.legend(["SPY", "Fortune 1000 Women-Led Companies"], loc='best')
fig.tight_layout()
pyplot.show()
benchmark_bt.risk.benchmark_period_return.iloc[-1]
bench_tot_return = benchmark_bt.risk.benchmark_period_return.iloc[-1]
algo_tot_return = my_algo.perf_tracker.cumulative_risk_metrics.algorithm_cumulative_returns[-1]
bench_pct_ret = bench_tot_return * 100
algo_pct_ret = algo_tot_return * 100
bench_algo_diff = (algo_tot_return - bench_tot_return) * 100
print "Algo Percent Returns %s" % algo_pct_ret
print "Benchmark Percent Returns %s" % bench_pct_ret
print "Difference %s" % bench_algo_diff
# I verify that my leverage is still in line.
returns.gross_leverage.plot()
# I also take a look at the Sharpe ratio, which measures return per unit of volatility.
# Higher is better; because my strategy looks more volatile than the S&P, this is worth checking.
# This is the sharpe of my algo
pct_change = returns['portfolio_value'].pct_change()
sharpe = (pct_change.mean()*252)/(pct_change.std() * np.sqrt(252))
sharpe
# This is the sharpe of the benchmark
bench_pct_change = benchmark_bt.risk.benchmark_period_return.pct_change()
bench_sharpe = (bench_pct_change.mean()*252)/(bench_pct_change.std() * np.sqrt(252))
bench_sharpe
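The annualized Sharpe calculation used in the two cells above can be illustrated on a standalone series (synthetic portfolio values, assuming 252 trading days per year and a zero risk-free rate, as above):

```python
import numpy as np
import pandas as pd

# Synthetic daily portfolio values -> daily returns -> annualized Sharpe.
values = pd.Series([100.0, 101.0, 100.5, 102.0, 103.0, 102.5])
pct_change = values.pct_change()

# Annualize the mean return by 252 and the standard deviation by sqrt(252).
sharpe = (pct_change.mean() * 252) / (pct_change.std() * np.sqrt(252))
```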
This Sharpe ratio isn't exceptional, but it's good enough to be considered for the Quantopian Managers program (https://www.quantopian.com/managers), and it also beats the S&P, so I am happy.
A couple of people have asked, "What if you remove Yahoo and Alibaba? Is this all due to the incredible performance there?"
It's pretty easy to test that out.
security = 14848 #found this by hand 3647, 660, 14848, 8354,
adm_df = CEOs[(CEOs['SID'] == security)]
fig = pyplot.figure()
ax2 = fig.add_subplot(212)
start_date = adm_df['start_date']
end_date = adm_df['end_date']
data[security].plot(ax=ax2, figsize=(15, 18), color='g')
ax2.plot(start_date, data.ix[start_date][security], '^', markersize=20, color='b', linestyle='')
ax2.plot(end_date, data.ix[end_date][security], 'v', markersize=20, color='b', linestyle='')
pyplot.ylabel('% Cumulative Return', fontsize=20)
pyplot.title("Cumulative Return", fontsize=20)
pyplot.grid(b=None, which='major', axis='both')
pyplot.box(on=None)
pyplot.legend(['Yahoo'], frameon=False, loc='best')
print adm_df['CEO']
#Remove Yahoo
CEOs_yhoo = CEOs[(CEOs['Ticker'] != ('YHOO'))]
"""
This cell gets the historical pricing data with Yahoo removed,
then runs the algo defined above on that universe.
"""
# I set the start and end date I want my algo to run for
start_algo = '2002-01-01'
end_algo = '2014-12-31'
# I make a series out of just the SIDs.
SIDs = CEOs_yhoo.SID
# Then call get_pricing on the series of SIDs and store the results in a new dataframe called prices.
data = get_pricing(
SIDs,
start_date= start_algo,
end_date= end_algo,
fields ='close_price',
handle_missing='ignore'
)
#: Here I'm defining the algo that I have above so I can run with a new graphing method
my_algo_yhoo = TradingAlgorithm(
initialize=initialize,
handle_data=handle_data
)
#: Insert our analyze methods
my_algo_yhoo._analyze = my_algo_analyze
# Run algorithms
returns_yhoo = my_algo_yhoo.run(data)
pyplot.figure(figsize=[16,10])
benchmark_bt.risk.benchmark_period_return.plot()
returns_yhoo.algorithm_period_return.plot()
pyplot.ylabel('% Cumulative Return', fontsize=20)
pyplot.title("Cumulative Return", fontsize=20)
pyplot.grid(b=None, which='major', axis='both')
pyplot.box(on=None)
pyplot.legend(['SPY', 'Fortune 1000 Female CEOs'], frameon=False, loc='best')
bench_tot_return = benchmark_bt.risk.benchmark_period_return[-1]
algo_tot_return = my_algo_yhoo.perf_tracker.cumulative_risk_metrics.algorithm_cumulative_returns[-1]
bench_pct_ret = bench_tot_return * 100
algo_pct_ret = algo_tot_return * 100
bench_algo_diff = (algo_tot_return - bench_tot_return) * 100
print "Algo Percent Returns %s" % algo_pct_ret
print "Benchmark Percent Returns %s" % bench_pct_ret
print "Difference %s" % bench_algo_diff
Someone else asked me to remove the top and bottom outliers. Here I remove the top 3 and the bottom 3.
# Remove the top 3
CEOs_outliers = CEOs[(CEOs['Ticker'] != ('HSNI'))]
CEOs_outliers = CEOs_outliers[(CEOs_outliers['Ticker'] != ('VTR'))]
CEOs_outliers = CEOs_outliers[(CEOs_outliers['Ticker'] != ('TJX'))]
# Remove the bottom 3
CEOs_outliers = CEOs_outliers[(CEOs_outliers['Ticker'] != ('NYT'))]
CEOs_outliers = CEOs_outliers[(CEOs_outliers['Ticker'] != ('RAD'))]
CEOs_outliers = CEOs_outliers[(CEOs_outliers['Ticker'] != ('Q'))]
"""
This cell gets the historical pricing data with the outliers removed,
then runs the algo defined above on that universe.
"""
# I set the start and end date I want my algo to run for
start_algo = '2002-01-01'
end_algo = '2014-12-31'
# I make a series out of just the SIDs.
SIDs = CEOs_outliers.SID
# Then call get_pricing on the series of SIDs and store the results in a new dataframe called prices.
data = get_pricing(
SIDs,
start_date= start_algo,
end_date= end_algo,
fields ='close_price',
handle_missing='ignore'
)
#: Here I'm defining the algo that I have above so I can run with a new graphing method
my_algo_outliers = TradingAlgorithm(
initialize=initialize,
handle_data=handle_data
)
#: Insert our analyze methods
my_algo_outliers._analyze = my_algo_analyze
# Run algorithms
returns_outliers = my_algo_outliers.run(data)
pyplot.figure(figsize=[16,10])
benchmark_bt.risk.benchmark_period_return.plot()
returns_outliers.algorithm_period_return.plot()
pyplot.ylabel('% Cumulative Return', fontsize=20)
pyplot.title("Cumulative Return", fontsize=20)
pyplot.grid(b=None, which='major', axis='both')
pyplot.box(on=None)
pyplot.legend(['SPY', 'Fortune 1000 Female CEOs'], frameon=False, loc='best')
bench_tot_return = benchmark_bt.risk.benchmark_period_return[-1]
algo_tot_return = my_algo_outliers.perf_tracker.cumulative_risk_metrics.algorithm_cumulative_returns[-1]
bench_pct_ret = bench_tot_return * 100
algo_pct_ret = algo_tot_return * 100
bench_algo_diff = (algo_tot_return - bench_tot_return) * 100
print "Algo Percent Returns %s" % algo_pct_ret
print "Benchmark Percent Returns %s" % bench_pct_ret
print "Difference %s" % bench_algo_diff
sectors = local_csv('CEOs_sector_output_v2.csv')
sector_count = sectors['sector'].value_counts(sort=False)
sector_count.plot(kind='bar')
pyplot.grid(b=None, which='major', axis='both')
pyplot.box(on=None)
It does look like I have a slight bias towards consumer cyclical companies. These include companies such as GM, eBay, The New York Times, and Ann Taylor Stores.
The next question might be to ask, "Is my sector weighting responsible for the performance?" Using XLY, a consumer discretionary ETF, we can compare how consumer companies did against the S&P 500 over the same time period.
consumer = get_pricing(['XLY','SPY'],
start_date = '2002-01-02',
end_date = '2015-02-01',
fields = 'close_price')
def cum_returns(df):
    return (1 + df).cumprod() - 1
cum_returns(consumer.pct_change()).plot()
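As a standalone check of what cum_returns computes (toy prices, not notebook data): a price series going 100 → 110 → 121 compounds to a 21% cumulative return.

```python
import pandas as pd

def cum_returns(df):
    # Compound simple returns into a cumulative-return series.
    return (1 + df).cumprod() - 1

prices = pd.Series([100.0, 110.0, 121.0])
# Two +10% daily returns compound to +21% (up to float rounding).
final = cum_returns(prices.pct_change()).iloc[-1]
```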
This sector-neutral version of the algo attempts to remove the bias towards consumer companies that the original algo has. It does this by first determining the number of sectors the portfolio holds each time it is rebalanced, and dividing the portfolio value by the number of sectors. It then determines the number of companies per sector, and divides each sector's share of the portfolio by the number of companies in that sector.
This ensures that all sectors are invested in equally.
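The two-level split described above can be sketched with hypothetical holdings outside of zipline (tickers, sector IDs, and the portfolio value below are all invented):

```python
import pandas as pd

# Hypothetical holdings: three stocks across two sectors.
holdings = pd.DataFrame({
    'SID': ['AAA', 'BBB', 'CCC'],
    'sector_id': [101, 101, 205],
})
portfolio_value = 120000.0

# Split the portfolio equally across sectors...
num_sectors = holdings['sector_id'].nunique()
value_per_sector = portfolio_value / num_sectors          # 60000 per sector
# ...then equally across the companies within each sector.
sector_count = holdings['SID'].groupby(holdings['sector_id']).count()

targets = {}
for stock, sector in zip(holdings['SID'], holdings['sector_id']):
    targets[stock] = value_per_sector / sector_count.loc[sector]
# targets -> {'AAA': 30000.0, 'BBB': 30000.0, 'CCC': 60000.0}
```

So a sector with one company gets as much capital as a sector with two, which is the point of the sector-neutral weighting.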
sectors_data = local_csv('CEOs_sector_output_v2.csv')
def get_sec_SID(row):
    temp_sid = row['SID']
    row['SID'] = symbols(temp_sid)
    return row
sectors_data = sectors_data.apply(get_sec_SID, axis=1)
CEOs = pd.merge(CEOs, sectors_data, how='left')
"""
This is where I initialize my algorithm
"""
from zipline.api import order
from zipline.finance.slippage import FixedSlippage
def initialize(context):
    # Load the CEO data and the lists tracking held stocks as global variables.
    context.CEOs = CEOs
    context.current_stocks = []
    context.stocks_to_order_today = []
    context.stocks_to_sell_today = []
    context.set_slippage(FixedSlippage(spread=0))
    context.num_sectors = 0
"""
handle_data is the function that runs every minute (or day), looking to make trades
"""
from zipline.api import order
def handle_data(context, data):
    #: Set my order and sell lists to empty at the start of any day.
    context.stocks_to_order_today = []
    context.stocks_to_sell_today = []
    # Get today's date.
    today = get_datetime()
    # Get just the companies whose start_date (or end_date) is today.
    context.stocks_to_order_today = context.CEOs.SID[context.CEOs.start_date == today].tolist()
    context.stocks_to_sell_today = context.CEOs.SID[context.CEOs.end_date == today].tolist()
    context.stocks_to_sell_today = [s for s in context.stocks_to_sell_today if s is not None]
    context.stocks_to_order_today = [s for s in context.stocks_to_order_today if s is not None]
    # If there are stocks that need to be bought or sold today
    if (len(context.stocks_to_order_today) > 0) or (len(context.stocks_to_sell_today) > 0):
        # First sell any that need to be sold, and remove them from current_stocks.
        for stock in context.stocks_to_sell_today:
            if stock in data:
                if stock in context.current_stocks:
                    order_target(stock, 0)
                    context.current_stocks.remove(stock)
                    #print "Selling %s" % stock
        # Then add any I am buying to current_stocks.
        for stock in context.stocks_to_order_today:
            context.current_stocks.append(stock)
        # Get the rows for the currently held stocks so we can find their sector information.
        current_CEOs = context.CEOs[context.CEOs.SID.isin(context.current_stocks)]
        # Count the number of sectors.
        context.num_sectors = current_CEOs.sector_id.nunique()
        # Get the current portfolio value.
        portfolio_value = context.portfolio.portfolio_value
        # Get the value to be invested in each sector.
        value_per_sector = portfolio_value / context.num_sectors
        # Series of sectors and the number of companies in each sector.
        sector_count = current_CEOs['SID'].groupby(current_CEOs['sector_id']).count()
        # Then rebalance the portfolio so each sector gets an equal share,
        # split equally among the companies in that sector.
        for stock in context.current_stocks:
            if stock in data:
                # Get the sector of the current company.
                current_company_sector = context.CEOs.sector_id[(context.CEOs.SID == stock)].iloc[0]
                # Get the number of companies in that sector.
                num_companies_in_sector = sector_count.loc[current_company_sector]
                # Calculate the amount to invest in this company.
                value_to_buy = value_per_sector / num_companies_in_sector
                # Place an order targeting that value of this stock.
                order_target_value(stock, value_to_buy)
"""
This cell gets the historical pricing data for all the SIDs in my universe.
Then kicks off my algo using that data.
"""
# I set the start and end date I want my algo to run for
start_algo = '2002-01-01'
end_algo = '2014-12-30'
# I make a series out of just the SIDs.
SIDs = CEOs.SID
# Then call get_pricing on the series of SIDs and store the results in a new dataframe called prices.
data = get_pricing(
SIDs,
start_date= start_algo,
end_date= end_algo,
fields ='close_price',
handle_missing='ignore'
)
#: Here I'm defining the algo that I have above so I can run with a new graphing method
my_algo_sectors = TradingAlgorithm(
initialize=initialize,
handle_data=handle_data
)
# Run algorithms
returns_sectors = my_algo_sectors.run(data)
pyplot.figure(figsize=[16,10])
benchmark_bt.risk.benchmark_period_return.plot()
returns_sectors.algorithm_period_return.plot()
pyplot.ylabel('% Cumulative Return', fontsize=20)
pyplot.title("Cumulative Return", fontsize=20)
pyplot.grid(b=None, which='major', axis='both')
pyplot.box(on=None)
pyplot.legend(['SPY', 'Fortune 1000 Female CEOs'], frameon=False, loc='best')
bench_tot_return = benchmark_bt.risk.benchmark_period_return.iloc[-1]
algo_tot_return = my_algo_sectors.perf_tracker.cumulative_risk_metrics.algorithm_cumulative_returns[-1]
bench_pct_ret = bench_tot_return * 100
algo_pct_ret = algo_tot_return * 100
bench_algo_diff = (algo_tot_return - bench_tot_return) * 100
print "Algo Percent Returns %s" % algo_pct_ret
print "Benchmark Percent Returns %s" % bench_pct_ret
print "Difference %s" % bench_algo_diff
There are at least two existing funds with a gender focus, and I've been told there are as many as 17 gender-focused investment products.
The Pax Global Women’s Leadership Index (PXWIX) is the first broad-market index of the highest-rated companies in the world in advancing women’s leadership.
The Women In Leadership index (WIL) tracks a weighted index of 85 U.S.-based companies that are listed on the NYSE or NASDAQ, have market capitalizations of at least $250 million, and have a woman CEO or a board of directors that’s at least 25% female.
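The WIL criteria above could be expressed as a pandas screen. This is purely an illustrative sketch: the column names and data below are invented, and WIL's actual methodology is a weighted index, not a simple filter.

```python
import pandas as pd

# Hypothetical universe; all columns and values are invented for illustration.
universe = pd.DataFrame({
    'ticker': ['AAA', 'BBB', 'CCC'],
    'exchange': ['NYSE', 'NASDAQ', 'NYSE'],
    'market_cap': [1.2e9, 180e6, 6.0e8],       # in dollars
    'female_ceo': [True, True, False],
    'pct_female_board': [0.10, 0.30, 0.40],
})

# WIL-style screen: listed on NYSE or NASDAQ, market cap >= $250M,
# and either a woman CEO or a board that's at least 25% female.
wil_like = universe[
    universe['exchange'].isin(['NYSE', 'NASDAQ'])
    & (universe['market_cap'] >= 250e6)
    & (universe['female_ceo'] | (universe['pct_female_board'] >= 0.25))
]
# BBB drops out on market cap; CCC qualifies via its board composition.
```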
Here is a look at them, plotted against the SPY. We can use these as a decent reference.
funds = local_csv("Womens_Funds.csv", date_column='Date')
funds = funds.sort_index(ascending=True)
funds['SPY'] = get_pricing('SPY', start_date='2002-01-02', end_date='2015-02-19', fields='close_price')
def cum_returns(df):
    return (1 + df).cumprod() - 1
cum_returns(funds.pct_change()).plot()