Do films that pass the Bechdel Test make more money for their producers? I've replicated Walt Hickey's recent article in FiveThirtyEight to find out. My results confirm his own in part, but also find notable differences that point to the need for clarification at a minimum. While I am far from the first to make this argument, this case is illustrative of a larger need for journalism and other data-driven enterprises to borrow from hard-won scientific practices of sharing data and code as well as supporting the review and revision of findings. This admittedly lengthy post is not only a critique of this particular case but also an attempt to work through what open data journalism could look like.
New data-driven journalism ventures such as FiveThirtyEight have faced criticism from many quarters, particularly around the naïveté of assuming credentialed experts can be bowled over by quantitative analysis as easily as the terrifyingly innumerate pundits who infest our political media [1,2,3,4]. While I find these critiques persuasive, I depart from them here to instead argue that I have found this "new" brand of data journalism disappointing foremost because it wants to perform science without abiding by scientific norms.
The questions of demarcating what is or is not science are fraught, so let's instead label my gripe a "failure to be open." By openness, I don't mean users commenting on articles or publishing whistleblowers' documents. I mean "openness" more in the sense of "open source software," where the code is made freely available to everyone to inspect, copy, modify, and redistribute. But the principles of open-source software trace their roots more directly back to norms in the scientific community that Robert Merton identified and that came to be known as the "CUDOS" norms. It's worth reviewing two of these norms, because Punk-ass Data Journalism is very much on the lawn of Old Man Science and therein lie possibilities for exciting adventures.
The first and last elements of Merton's "CUDOS" norms merit special attention for our discussion of openness. Communalism is the norm that scientific results are shared and become part of a commons that others can build upon --- this is the bit about "standing upon the shoulders of giants." Skepticism is the norm that claims must be subject to organized scrutiny by the community --- which typically manifests as peer review. Both of these norms strongly motivated the philosophy of the open source movement, and while they are practiced imperfectly in my experience within the social and information sciences (see my colleagues' recent work on the "Parable of Google Flu"), I nevertheless think data journalists should strive to make them their own practice as well.
Data journalists should be open in making their data and analysis available to all comers. This flies in the face of traditions and professional anxieties surrounding autonomy, scooping, and the protection of sources. But how can claims be evaluated as true unless they can be inspected? If I ask a data journalist for her data or code, is she bound by the same norms as a scientist to share it? Where and how should journalists share and document this code and data?
Data journalists should be open in soliciting and publishing feedback. Sure, journalists are used to clearing their story with an editor, but have they solicited an expert's evaluation of their claims? How willing are they to publish critiques of, commentary on, or revisions to their findings? If not, what are the venues for these discussions? How should a reporter or editor manage such a system?
The Guardian's DataBlog and ProPublica have each been doing exemplary work in posting their datasets, code, and other tools for several years. Other organizations like the Sunlight Foundation develop outstanding tools to aid reporters and activists, the Knight Foundation has been funding exciting projects around journalism innovation for years, and the Data Journalism Handbook reviews other excellent cases as well. My former colleague, Professor Richard Gordon at Medill, reminded me that ideas around "computer assisted reporting" have been in circulation in the outer orbits of journalism for decades. For example, Philip Meyer has been (what we would now call) evangelizing since the 1970s for "precision journalism," in which journalists adopt the tools and methods of the social and behavioral sciences as well as their norms of sharing data and replicating research. Actually, if you stopped reading now and promised to read his 2011 Hedy Lamarr Lecture, I won't even be mad.
The remainder of this post is an attempt to demonstrate some ideas of what an "open collaboration" model for data journalism might look like. To that end, this article tries to do many things for many audiences, which admittedly makes it hard for any single person to read. Let me try to sketch some of these out now and send you off on the right path.
In this outro to a very unusual introduction, I want to thank Professor Gordon from above, Professor Deen Freelon, Nathan Matias, and Alex Leavitt for their invaluable feedback on earlier drafts of this... post? article? piece? notebook?
Walt Hickey published an article on April 1 on FiveThirtyEight, titled The Dollar-And-Cents Case Against Hollywood’s Exclusion of Women. The article examines the relationship between movies' finances and their portrayals of women using a well-known heuristic called the Bechdel test. The test has three simple requirements: a movie passes the Bechdel test if there are (1) two women in it, (2) who talk to each other, (3) about something besides a man.
Let me say at the outset, I like this article: It identifies a troubling problem, asks important questions, identifies appropriate data, and brings in relevant voices to speak to these issues. I should also include the disclaimer that I am not an expert in the area of empirical film studies like Dean Keith Simonton or Nick Redfern. I've invested a good amount of time in criticizing the methods and findings of this article, but to Hickey's credit, I also haven't come across any scholarship that has attempted to quantify this relationship before: this is new knowledge about the world. Crucially, it speaks to empirical scholarship that has exposed how films with award-winning female roles are significantly less likely to win awards themselves [5], older women are less likely to win awards [6], actresses' earnings peak 17 years earlier than actors' earnings [7], and how male and female critics rate films differently [8]. I have qualms about the methods, and others may be justified in complaining that it overlooks related scholarship like that cited above, but this article is in the best traditions of journalism that focuses our attention on problems we should address as a society.
Hickey's article makes two central claims:
- We found that the median budget of movies that passed the test...was substantially lower than the median budget of all films in the sample.
- We found evidence that films that feature meaningful interactions between women may in fact have a better return on investment, overall, than films that don’t.
I call Claim 1 the "Budgets Differ" finding and Claim 2 the "Earnings Differ" finding. As summarized here, the claims are relatively straightforward to test: is there an effect of Bechdel scores on earnings and budgets after controlling for other explanatory variables?
But before I even get to running the numbers, I want to examine the claims Hickey made in the article. His interpretations of the return-on-investment results are particularly problematic readings of basic statistics. Hickey reports the following findings from his models (emphasis added).
We did a statistical analysis of films to test two claims: first, that films that pass the Bechdel test — featuring women in stronger roles — see a lower return on investment, and second, that they see lower gross profits. We found no evidence to support either claim.
On the first test, we ran a regression to find out if passing the Bechdel test corresponded to lower return on investment. Controlling for the movie’s budget, which has a negative and significant relationship to a film’s return on investment, passing the Bechdel test had no effect on the film’s return on investment. In other words, adding women to a film’s cast didn’t hurt its investors’ returns, contrary to what Hollywood investors seem to believe.
The total median gross return on investment for a film that passed the Bechdel test was $2.68 for each dollar spent. The total median gross return on investment for films that failed was only $2.45 for each dollar spent.
...On the second test, we ran a regression to find out if passing the Bechdel test corresponded to having lower gross profits — domestic and international. Also controlling for the movie’s budget, which has a positive and significant relationship to a film’s gross profits, once again passing the Bechdel test did not have any effect on a film’s gross profits.
Both models (whatever their faults, and there are some, as we will explore in the next section) apparently produce an estimate that the Bechdel test has no effect on a film's financial performance. That is to say, the statistical test could not determine with greater than 95% confidence that the correlation between these two variables was greater or less than 0. Because we cannot confidently rule out the possibility of there being zero effect, we cannot make any claims about its direction.
Hickey argues that passing the test "didn't hurt its investors' returns," which is to say there was no significant negative relationship, but neither was there a significant positive relationship: The model provides no evidence of a positive correlation between Bechdel scores and financial performance. However, Hickey switches gears and, in the conclusion, writes:
...our data demonstrates that films containing meaningful interactions between women do better at the box office than movies that don’t...
I don't know what analysis supports this interpretation. The analysis Hickey just performed, again taking the findings at face value, concluded that "passing the Bechdel test did not have any effect on a film’s gross profits," *not* "passing the Bechdel test increased the film's profits." While Bayesians will cavil about frequentist assumptions --- as they are wont to do --- and the absence of evidence is not evidence of absence, the "Earnings Differ" finding is not empirically supported under any appropriate interpretation of the analysis. The appropriate conclusion from Hickey's analysis is "there is no relationship between the Bechdel test and financial performance," which he makes... then ignores. The toy example below illustrates the distinction.
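To see why a null result licenses no directional claim, here is a minimal sketch on simulated data (entirely my own toy example, not Hickey's model): revenue is generated with no true relationship to the score, and the 95% confidence interval on the coefficient should straddle zero.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
# Toy data: revenue is generated independently of the 0-3 score
np.random.seed(42)
toy = pd.DataFrame({'bechdel': np.random.randint(0, 4, 500),
                    'revenue': np.random.lognormal(17, 1, 500)})
# Fit a simple OLS model and inspect the coefficient's 95% confidence interval
fit = smf.ols('revenue ~ bechdel', data=toy).fit()
print(fit.conf_int().loc['bechdel'])  # if this interval spans 0, no directional claim is warranted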
What to make of this analysis? In the next section, I summarize the findings of my own analysis of the same data. In the subsequent sections, I attempt to replicate the findings of this article, and in so doing, highlight the perils of reporting statistical findings without adhering to scientific norms.
I tried to retrieve and re-analyze the data that Hickey described in his article, but came to some conclusions that were the same, others that were very different, and still others that I hope are new.
In the absence of knowing the precise methods used, but making reasonable assumptions about what was done, I was able to replicate some of his findings but not others, because specific decisions had to be made about the data or modeling that dramatically change the results of the statistical models. However, the article provides no specifics, so we're left to wonder when and where these findings hold, which points to the need for openness in sharing data and code. Specifically, while Hickey found that women's representation in movies had no significant relationship with revenue, I found a positive and significant relationship.
But the questions and hypotheses Hickey posed about systematic biases in Hollywood were also the right ones. With a reanalysis using different methods as well as adding in new data, I found statistically significant differences in popular ratings also exist. These differences persist after controlling for each other and in the face of other potential explanations about differences arising because of genres, MPAA ratings, time, and other effects.
In the image below, we see that movies that have non-trivial women's roles get 24% lower budgets, make 55% more revenue, get better reviews from critics, and face harsher criticism from IMDB users. Bars that are faded out mean my models are less confident about these findings being non-random (higher p-values) while bars that are darker mean my models are more confident that this is a significant finding (lower p-values).
Movies passing the Bechdel test (the red bars):
...receive budgets that are 24% smaller
...make 55% more revenue
...are awarded 1.8 more Metacritic points by professional reviewers
...are awarded 0.12 fewer stars by IMDB's amateur reviewers
Image(filename='Takeaway.png')
In the sections below, I provide all the code I used to scrape data from the websites Hickey mentioned as sources as well as some others I use to extend his analysis to include other control measures commonly used in statistical models of movie performance (for example, Nagle & Riedl's work on movie ratings). The goal here is to attempt to replicate Hickey's analyses using the same data and same methods he described. I want to emphasize three points ahead of time:
What are the data? Through this process of obtaining the data, it becomes clear that the analyst quickly has to make choices about the types of data to be obtained. Does the data come from Table X or Table Y? Does budget data include marketing expenditures or not? Are these inflation-corrected real dollars or nominal dollars? They should be the same, but there are things missing in one that aren't missing in the other. These decisions are not documented in the article, nor are the data made available, both of which make it hard to determine what exactly is being measured.
What are the variables? The article also creates variables such as "return on investment" and "gross profits" from other variables in the data. These data can be highly skewed or contain missing values, which can break some statistical assumptions -- how were these dealt with? The article doesn't actually say how these variables are constructed or where they come from, so I could be wrong, but the data and traditional definitions of these variables present obvious candidates involving different relationships between the same two variables, Budget (Expenses) and Income (Revenue). In creating a new variable that combines two other variables, the behavior of this new variable is intrinsically related to the others. This can become problematic very quickly, as we will see in the next step.
What are the models? The article then performs analyses on these new and old variables using methods that assume they are independent. As the previous bullet emphasizes, "return on investment" and "gross profits" are financial performance outcomes that already capture a relationship between Budget and Income. The model could measure these outcomes as a function of the Bechdel score alone. The model could also estimate basic Income as a function of Bechdel and throw in Budget as a control. Or the model could estimate the combined financial performance as a function of Bechdel while controlling for Budget again, even though Budget is already baked into the outcome. The write-up makes it seem like the last model is the one used, which is problematic; a sketch of these three specifications follows below.
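To make these alternatives concrete, here is a minimal sketch of the three specifications on simulated stand-in data (the variable names are mine, and we don't know which specification Hickey actually estimated):
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
# Simulated stand-ins for budget, revenue, and a 0-3 Bechdel score
np.random.seed(0)
toy = pd.DataFrame({'bechdel': np.random.randint(0, 4, 200),
                    'budget': np.random.lognormal(17, 1, 200)})
toy['revenue'] = toy['budget'] * np.random.lognormal(0.5, 1, 200)
toy['roi'] = (toy['revenue'] - toy['budget']) / toy['budget']
# (a) A derived outcome (ROI) as a function of the Bechdel score alone
model_a = smf.ols('roi ~ bechdel', data=toy).fit()
# (b) Raw revenue as a function of the Bechdel score, with budget as a control
model_b = smf.ols('revenue ~ bechdel + budget', data=toy).fit()
# (c) ROI -- which already folds budget into the outcome -- with budget as a control
#     again; this is the specification the write-up seems to describe
model_c = smf.ols('roi ~ bechdel + budget', data=toy).fit()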
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sb
import json,urllib2,string
from itertools import product
from matplotlib.collections import LineCollection
import statsmodels.formula.api as smf
from bs4 import BeautifulSoup
from scipy.stats import chisquare
from IPython.display import Image
Revenue data taken from scraping http://www.the-numbers.com/movies/#tab=letter
Hickey's article uses data about international receipts, which doesn't appear to be available publicly from this website, so I cannot replicate these analyses. Indeed, if The-Numbers.com provided Hickey with the data, it may very well be different in extent and detail than what is possible to access from their website.
# This may take a bit of time
revenue_df_list = list()
for character in string.printable[1:36]:
    soup = BeautifulSoup(urllib2.urlopen('http://www.the-numbers.com/movies/letter/{0}'.format(character)).read())
    archive = pd.read_html(soup.findAll('table')[0].encode('latin1'),header=0,flavor='bs4',infer_types=False)[0]
    archive.columns = ['Released','Movie','Genre','Budget','Revenue','Trailer']
    archive['Released'] = pd.to_datetime(archive['Released'],format=u"%b\xa0%d,\xa0%Y",coerce=True)
    archive = archive.replace([u'\xa0',u'nan'],[np.nan,np.nan])
    archive['Budget'] = archive['Budget'].dropna().str.replace('$','').apply(lambda x:int(x.replace(',','')))
    archive['Revenue'] = archive['Revenue'].dropna().str.replace('$','').apply(lambda x:int(x.replace(',','')))
    archive['Year'] = archive['Released'].apply(lambda x:x.year)
    archive.dropna(subset=['Movie'],inplace=True)
    revenue_df_list.append(archive)

numbers_df = pd.concat(revenue_df_list)
numbers_df.reset_index(inplace=True,drop=True)
numbers_df.to_csv('revenue.csv',encoding='utf8')
numbers_df = pd.read_csv('revenue.csv',encoding='utf8',index_col=0)
numbers_df['Released'] = pd.to_datetime(numbers_df['Released'],unit='D')
numbers_df.tail()
|  | Released | Movie | Genre | Budget | Revenue | Trailer | Year |
|---|---|---|---|---|---|---|---|
| 19967 | 1979-12-14 | Zulu Dawn | NaN | NaN | 0 | NaN | 1979 |
| 19968 | 2003-02-07 | Zus & Zo | NaN | NaN | 49468 | NaN | 2003 |
| 19969 | 2007-04-06 | Zwartboek | Thriller/Suspense | 22000000 | 4398532 | NaN | 2007 |
| 19970 | 2014-02-28 | Zwei Leben | Drama | NaN | 39673 | NaN | 2014 |
| 19971 | 2006-02-24 | Zyzzyx Rd. | Thriller/Suspense | NaN | 20 | NaN | 2006 |

5 rows × 7 columns
Here I tried to write some code to get historical CPI data from the BLS API, but I couldn't get it to work. Instead, I downloaded the January monthly data from 1913 through 2014 for Series ID CUUR0000SA0, which is the "CPI - All Urban Consumers," and saved it as a CSV. The inflator dictionary provides a mapping that lets me adjust the dollars reported in previous years for inflation: a dollar in 1913 bought 23.8 times more goods and services than a dollar in 2014.
cpi = pd.read_csv('cpi.csv',index_col='Year')
inflator = dict(cpi.ix[2014,'Annual']/cpi['Annual'])
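As a quick sanity check on how this mapping gets applied (the figure below is an arbitrary illustrative amount, not a real movie's gross, and it assumes cpi.csv was loaded as above):
# Express $10,000,000 of 1980 revenue in 2014 dollars
print(10000000 * inflator[1980])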
Documentation here: http://bechdeltest.com/api/v1/doc#getMovieByImdbId
First, download a list of all movie IDs using the getAllMovieIds method and inspect the first 5. Create a list of IMDB ids out of this list of dictionaries to pass to the API.
movie_ids = json.loads(urllib2.urlopen('http://bechdeltest.com/api/v1/getAllMovieIds').read())
imdb_ids = [movie[u'imdbid'] for movie in movie_ids]
print u"There are {0} movies in the Bechdel test corpus".format(len(imdb_ids))
There are 5062 movies in the Bechdel test corpus
Pull an example getMovieByImdbId response from the API.
json.loads(urllib2.urlopen('http://bechdeltest.com/api/v1/getMovieByImdbId?imdbid=0000091').read())
{u'date': u'2013-12-25 14:31:21', u'dubious': u'0', u'id': u'4982', u'imdbid': u'0000091', u'rating': u'0', u'submitterid': u'9025', u'title': u'House of the Devil, The', u'visible': u'1', u'year': u'1896'}
Pull down all of the data we can. *This will take a very long time.* We don't want to have to repeat the crawl, so let's write the data to disk as bechdel.json.
ratings = dict()
exceptions = list()
for num,imdb_id in enumerate(imdb_ids):
    try:
        if num in range(0,len(imdb_ids),int(round(len(imdb_ids)/10,0))):
            print imdb_id
        ratings[imdb_id] = json.loads(urllib2.urlopen('http://bechdeltest.com/api/v1/getMovieByImdbId?imdbid={0}'.format(imdb_id)).read())
    except:
        exceptions.append(imdb_id)
        pass

with open('bechdel.json', 'wb') as f:
    json.dump(ratings.values(), f)
Read the data back from disk and convert to ratings_df.
ratings_df = pd.read_json('bechdel.json')
ratings_df['imdbid'] = ratings_df['imdbid'].dropna().apply(int)
ratings_df = ratings_df.set_index('imdbid')
ratings_df.dropna(subset=['title'],inplace=True)
ratings_df2 = ratings_df[['rating','title']]
ratings_df2.head()
| imdbid | rating | title |
|---|---|---|
| 435528 | 2 | Whisper |
| 1456472 | 1 | We Have a Pope |
| 435680 | 3 | Kidulthood |
| 460721 | 3 | Big Bad Swim, The |
| 1571222 | 3 | A Dangerous Method |

5 rows × 2 columns
The field we want is rating, which is defined as:
"The actual score. Number from 0 to 3.
0.0 means no two women,
1.0 means no talking,
2.0 means talking about a man,
3.0 means it passes the test."
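As a small illustration of how this field can be used (my own convenience check, not part of Hickey's pipeline), we can count how many movies in the corpus fully pass the test by treating a rating of 3 as a pass:
# Coerce rating to int (it may come back from the API as a string) and count full passes
passes = ratings_df2['rating'].astype(int) == 3
print(passes.sum())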
Pull an example from http://www.omdbapi.com/.
json.loads(urllib2.urlopen('http://www.omdbapi.com/?i=tt0000091').read())
{u'Actors': u"Jeanne d'Alcy, Georges M\xe9li\xe8s", u'Awards': u'N/A', u'Country': u'France', u'Director': u'Georges M\xe9li\xe8s', u'Genre': u'Short, Horror', u'Language': u'N/A', u'Metascore': u'N/A', u'Plot': u'A bat flies into an ancient castle and transforms itself into Mephistopheles himself. Producing a cauldron, Mephistopheles conjures up a young girl and various supernatural creatures, one ...', u'Poster': u'N/A', u'Rated': u'N/A', u'Released': u'N/A', u'Response': u'True', u'Runtime': u'3 min', u'Title': u'The House of the Devil', u'Type': u'movie', u'Writer': u'Georges M\xe9li\xe8s', u'Year': u'1896', u'imdbID': u'tt0000091', u'imdbRating': u'6.8', u'imdbVotes': u'768'}
Crawl the data. *This will take a while.* Write the data to disk when you're done as imdb_data.json.
imdb_data = dict()
exceptions = list()
for num,imdb_id in enumerate(imdb_ids):
    try:
        if num in range(0,len(imdb_ids),int(round(len(imdb_ids)/10,0))):
            print imdb_id
        imdb_data[imdb_id] = json.loads(urllib2.urlopen('http://www.omdbapi.com/?i=tt{0}'.format(imdb_id)).read())
    except:
        exceptions.append(imdb_id)
        pass

with open('imdb_data.json', 'wb') as f:
    json.dump(imdb_data.values(), f)
Read the data back from disk and convert to imdb_df. Print out an example row, but transpose it because there are a lot of columns.
# Read the data in
imdb_df = pd.read_json('imdb_data.json')
# Drop non-movies
imdb_df = imdb_df[imdb_df['Type'] == 'movie']
# Convert to datetime objects
imdb_df['Released'] = pd.to_datetime(imdb_df['Released'], format="%d %b %Y", unit='D', coerce=True)
# Drop errant identifying characters in the ID field
imdb_df['imdbID'] = imdb_df['imdbID'].str.slice(start=2)
More cleanup happens below.
# Remove the " min" at the end of Runtime entries so we can convert to ints
imdb_df['Runtime'] = imdb_df['Runtime'].str.slice(stop=-4).replace('',np.nan)
# Some errant runtimes have "h" in them. Commented-out code below identifies them.
#s = imdb_df['Runtime'].dropna()
#s[s.str.contains('h')]
# Manually recode these h-containing Runtimes to minutes
imdb_df.ix[946,'Runtime'] = '169'
imdb_df.ix[1192,'Runtime'] = '96'
imdb_df.ix[1652,'Runtime'] = '80'
imdb_df.ix[2337,'Runtime'] = '87'
imdb_df.ix[3335,'Runtime'] = '62'
# Blank out non-MPAA or minor ratings (NC-17, X)
imdb_df['Rated'] = imdb_df['Rated'].replace(to_replace=['N/A','Not Rated','Approved','Unrated','TV-PG','TV-G','TV-14','TV-MA','NC-17','X'],value=np.nan)
# Convert Release datetime into new columns for year, month, and week
imdb_df['Year'] = imdb_df['Released'].apply(lambda x:x.year)
imdb_df['Month'] = imdb_df['Released'].apply(lambda x:x.month)
imdb_df['Week'] = imdb_df['Released'].apply(lambda x:x.week)
# Convert the series to float
imdb_df['Runtime'] = imdb_df['Runtime'].apply(float)
# Take the imdbVotes formatted as string containing "N/A" and comma-delimited thousands, convert to float
imdb_df['imdbVotes'] = imdb_df['imdbVotes'].dropna().replace('N/A',np.nan).dropna().apply(lambda x:float(x.replace(',','')))
# Take the Metascore formatted as string containing "N/A", convert to float
# Also divide by 10 to make effect sizes more comparable
imdb_df['Metascore'] = imdb_df['Metascore'].dropna().replace('N/A',np.nan).dropna().apply(float)/10.
# Take the imdbRating formatted as string containing "N/A", convert to float
imdb_df['imdbRating'] = imdb_df['imdbRating'].dropna().replace('N/A',np.nan).dropna().apply(float)
# Create a dummy variable for English language
imdb_df['English'] = (imdb_df['Language'] == u'English').astype(int)
imdb_df['USA'] = (imdb_df['Country'] == u'USA').astype(int)
# Convert imdb_ID to int, set it as the index
imdb_df['imdbID'] = imdb_df['imdbID'].dropna().apply(int)
imdb_df = imdb_df.set_index('imdbID')
df = imdb_df.join(ratings_df2,how='inner').reset_index()
df = pd.merge(df,numbers_df,left_on=['Title','Year'],right_on=['Movie','Year'])
df['Year'] = df['Released_x'].apply(lambda x:x.year)
df['Adj_Revenue'] = df.apply(lambda x:x['Revenue']*inflator[x['Year']],axis=1)
df['Adj_Budget'] = df.apply(lambda x:x['Budget']*inflator[x['Year']],axis=1)
df.tail(2).T
|  | 2624 | 2625 |
|---|---|---|
imdbID | 1531663 | 282771 |
Actors | Will Ferrell, Christopher Jordan Wallace, Rebe... | Om Puri, Aasif Mandvi, Ayesha Dharker, Jimi Mi... |
Awards | N/A | 1 win. |
Country | USA | UK, India, USA |
Director | Dan Rush | Ismail Merchant |
Error | NaN | NaN |
Genre_x | Comedy, Drama | Comedy, Drama |
Language | English | English |
Metascore | 6.5 | 6.4 |
Plot | When an alcoholic relapses, causing him to los... | In 1950s Trinidad, a frustrated writer support... |
Poster | http://ia.media-imdb.com/images/M/MV5BMjEwMjI0... | http://ia.media-imdb.com/images/M/MV5BMjEzNDIy... |
Rated | R | PG |
Released_x | 2011-10-14 00:00:00 | 2002-03-29 00:00:00 |
Response | True | True |
Runtime | 97 | 117 |
Title | Everything Must Go | The Mystic Masseur |
Type | movie | movie |
Writer | Dan Rush, Raymond Carver (short story "Why Don... | V.S. Naipaul (novel), Caryl Phillips (screenplay) |
Year | 2011 | 2002 |
imdbRating | 6.5 | 5.9 |
imdbVotes | 31814 | 393 |
Month | 10 | 3 |
Week | 41 | 13 |
English | 1 | 1 |
USA | 1 | 0 |
rating | 1 | 2 |
title | Everything Must Go | Mystic Masseur, The |
Released_y | 2011-05-13 00:00:00 | 2002-05-03 00:00:00 |
Movie | Everything Must Go | The Mystic Masseur |
Genre_y | Drama | NaN |
Budget | 5000000 | NaN |
Revenue | 2712131 | 398610 |
Trailer | Play | NaN |
Adj_Revenue | 2880766 | 526489.3 |
Adj_Budget | 5310889 | NaN |
35 rows × 2 columns
One question that both Hickey and others have posed is whether the Bechdel site data is biased in some way. We can do a quick check to see whether the distribution of movies in the joined dataset (where movies are only included if they appear across datasets) differs from the raw Bechdel data. There's clearly a bias towards the Bechdel community covering movies that pass the test and potentially omitting data about movies that do not. This is problematic, but unless we want to watch these movies and code them ourselves, there's not much to be done. But in terms of the data we use for our analysis, it doesn't appear to be the case that using the ...
representative_df = pd.DataFrame()
# Group data by scores
representative_df['All Bechdel'] = ratings_df2.groupby('rating')['title'].agg(len)
representative_df['Analysis Subset'] = df.groupby('rating')['title'].agg(len)
# Convert values of data to fractions of total
representative_df2 = representative_df.apply(lambda x: x/x.sum(),axis=0)
# Make the plot
representative_df2.plot(kind='barh')
plt.legend(fontsize=15,loc='lower right')
plt.yticks(plt.yticks()[0],["Fewer than two women",
"Women don't talk to each other",
'Women only talk about men',
'Passes Bechdel Test'
],fontsize=15)
plt.xticks(fontsize=15)
plt.ylabel('')
plt.xlabel('Fraction of movies',fontsize=18)
plt.title('Does the analyzed data differ from the original data?',fontsize=18)
Performing a $\chi^2$-test to see if these differences in frequencies are significant, we find they are (p-values reported below). This suggests that the distribution of movies across Bechdel categories in the combined data we analyze is significantly different from the raw data from the site -- or there's a risk that the data we're using in the full analysis isn't representative of the data on the Bechdel site itself. Alternatively, the Bechdel site's data may be biased away from the expected distribution. $\chi^2(3,n=4) = 27.78, p < 0.01$
# The observed data are the counts of the number of movies in each of the different categories for the Analysis subset.
# The expected data are the fractions of the number of movies in the original Bechdel data,
# multiplied by the number of movies in the Analysis subset.
chisquare(f_obs=representative_df['Analysis Subset'].as_matrix(),
f_exp=representative_df2['All Bechdel'].as_matrix()*representative_df['Analysis Subset'].sum())
(27.784302084004732, 4.0310508706887412e-06)
Now that we have data in hand about movies, their financial performance, and their performance on the Bechdel test from the data sources that Hickey described in the article, we can attempt to replicate the specific variables used. Because the article does not make clear which data or definitions were used for making these constructs, we're left to infer what exactly Hickey did.
The article claims to use two specific outcomes, return on investment and gross profits, which are different ways of accounting for the relationship between income and costs. I don't claim to be an accounting or business expert, but "gross profit" is traditionally defined as income minus costs ($P = I - C$) while "return on investment" (ROI) is traditionally defined as profit divided by assets ($ROI = P/A$). Thus we need at least three variables: income, costs, and assets.
However, only two of these are available in the movie financial data we obtained from The-Numbers.com: income and costs. We can calculate profits easily, but it's unclear what a movie's assets are here unless we use costs again. Again, Hickey may have done something different to construct this variable, but we don't know, so we can't replicate it.
df['Profit'] = df['Adj_Revenue'] - df['Adj_Budget']
df['ROI'] = df['Profit']/df['Adj_Budget']
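Because constructed ratio and difference variables like these tend to be highly skewed (the concern raised in the variables discussion above), it's worth a quick look at their distributions before modeling; this check is my own addition:
# Summary statistics for the constructed financial variables
df[['Adj_Budget','Adj_Revenue','Profit','ROI']].describe()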
We can do some exploratory analysis. First, we might want to plot the change in the average Bechdel score over time. Of course, the number of movies that have been released has also increased in recent years, so we'll scale the width of the line by the (logged) number of movies released that year. We observe a general upward trend but a lot of noise in the early years owing to the sparsity of the data.
# Do some fancy re-indexing and groupby operations to get average scores and number of movies
avg_bechdel = df.set_index('Released_x').groupby(lambda x:x.year)['rating'].agg(np.mean)
num_movies = df.set_index('Released_x').groupby(lambda x:x.year)['rating'].agg(lambda x:np.log(len(x)+1))
# Changing line widths from: http://stackoverflow.com/questions/19862011/matplotlib-change-linewidth-on-line-segments-using-list
coords = sorted(dict(avg_bechdel).items(),key=lambda x:x[0])
lines = [(start, end) for start, end in zip(coords[:-1], coords[1:])]
lines = LineCollection(lines, linewidths=dict(num_movies).values())
# Plot the figure
fig, ax = plt.subplots()
ax.add_collection(lines)
plt.autoscale()
plt.xlim((1912,2020))
plt.xlabel('Time',fontsize=18)
plt.ylabel('Avg. Bechdel score',fontsize=18)
We can also try to replicate the stacked bar chart that Hickey used to show the distributions of different classes of movies by decade. I find this harder to read and interpret, but I reproduce it below.
df_rating_ct = pd.crosstab(df['Year'],df['rating'])
# Bakes in a lot here and is bad Pythonic form -- but I'm lazy
# http://stackoverflow.com/questions/17764619/pandas-dataframe-group-year-index-by-decade
# http://stackoverflow.com/questions/21247203/how-to-make-a-pandas-crosstab-with-percentages
# http://stackoverflow.com/questions/9938130/plotting-stacked-barplots-on-a-panda-data-frame
df_rating_ct.groupby((df_rating_ct.index//10)*10).sum().apply(lambda x: x/x.sum(),axis=1).ix[1930:].plot(kind='bar',stacked=True)
# Fix up the plot
plt.ylabel('Percentage of movies',fontsize=18)
plt.yticks(np.arange(0,1.1,.10),np.arange(0,110,10),fontsize=15)
plt.xlabel('Decade',fontsize=18)
plt.xticks(rotation=0,fontsize=15)
plt.legend(loc='center',bbox_to_anchor=(1.1,.5),fontsize=15)
We can also estimate a statistical model to forecast future changes in the average Bechdel score over time. We observe a general upward trend in movies passing more of the Bechdel test and can try to extrapolate this going forward.
avg_bechdel = df.set_index('Released_x').groupby(lambda x:x.year)['rating'].agg(np.mean)
avg_bechdel.index.name = 'date'
avg_bechdel.name = 'bechdel'
avg_bechdel = avg_bechdel.reset_index()
l_model = smf.ols(formula='bechdel ~ date', data=avg_bechdel).fit()
#print l_model.summary()
# Create a DataFrame "X" that we can extrapolate into
X = pd.DataFrame({"date": np.linspace(start=1912.,stop=2100.,num=100)})
# Plot the observed data
plt.plot(avg_bechdel['date'],avg_bechdel['bechdel'],c='b',label='Observed')
# Plot the predictions from the model
plt.plot(X,l_model.predict(X),c='r',label='Linear model',lw=4)
plt.legend(loc='lower right')
plt.ylim((0,3))
plt.xlim((1912,2100))
plt.xlabel('Year',fontsize=18)
plt.ylabel('Avg. Bechdel Test',fontsize=18)
Re-arrange the $y=Bx+I$ equation to solve for $y=3$: $x=(y-I)/B$, or in other words, find the time in the future when the average movie will pass the Bechdel test.
Then convert the float into a proper date and return the day of the week as well (Mondays are 0).
import datetime
year_est = (3 - l_model.params['Intercept'])/l_model.params['date']
year_int = int(year_est)
d = datetime.timedelta(days=(year_est - year_int)*365.25)
day_one = datetime.datetime(year_int,1,1)
date = d + day_one
print date, date.weekday()
2089-08-30 04:54:43.147324 1
Extrapolating this linear model forward in time, on Tuesday, August 30, 2089, the *average* movie will finally pass the Bechdel test. Just 75 years to go until even the average summer blockbuster will have minimally-developed female characters! Hooray!
But there could also be other things going on in the model. Maybe the rate at which movies pass the Bechdel test is accelerating or decelerating rather than changing at a constant rate. We estimate a model with a quadratic term on time below. This model returns a concave function, which suggests we've apparently passed "Peak Bechdel" back in the 1990s and we're well on our way to regressing towards a misogynist cultural dystopia in the future.
But we shouldn't put too much stock in such simple models -- especially when they make such divergent predictions about the 2090s. However, the lack of progress and perhaps the presence of even a slowdown in the improvement in Bechdel scores over time should be a cause for concern.
q_model = smf.ols(formula='bechdel ~ date + I(date**2)', data=avg_bechdel).fit()
# Create a DataFrame "X" that we can extrapolate into
X = pd.DataFrame({"date": np.linspace(start=1912.,stop=2112.,num=201)})
# Plot the observed data
plt.plot(avg_bechdel['date'],avg_bechdel['bechdel'],c='b',label='Observed')
# Plot the predictions from the model
plt.plot(X,q_model.predict(X),c='r',label='Quadratic model',lw=4)
plt.legend(loc='upper right')
plt.ylim((0,3))
plt.xlim((1912,2112))
plt.xlabel('Year',fontsize=18)
plt.ylabel('Avg. Bechdel Test',fontsize=18)
As I discussed above, the claim that budgets differ for Bechdel-passing movies is relatively uncontroversial and, given the description of the data in the article, should be easy to reproduce. Visualizing the distribution of the data, there are no strong differences that jump out, but if you squint hard enough, you can make out a negative trend: movies passing the Bechdel test have lower budgets.
You'll notice clouds of points around 0, 1, 2, and 3. These are simply "jittered" by adding a bit of normally-distributed error to the plotted positions (but not the underlying data we're estimating) to show the frequency of data points without them all sitting on top of each other.
x = df['rating'].apply(lambda x:float(x)+np.random.normal(0, 0.05))
y = df['Adj_Budget']
# Plot with an alpha so the overlaps reveal something about relative density
plt.scatter(x,y,alpha=.2,label='Data',c='g')
plt.ylabel('Budgets (2014 $)',fontsize=18)
plt.xlabel('Bechdel dimensions',fontsize=18)
plt.xticks(np.arange(0,4))
plt.yscale('log')
plt.grid(False,which='minor')
#plt.ylim((2e0,1e10))