All posts by bkeegan

The Beneficence of Mobs: A Facebook Apologia

Last week, the Proceedings of the National Academy of Sciences (PNAS) published a study that conducted a large-scale experiment on Facebook. The authors of the study included an industry researcher from Facebook as well as academics at the University of California, San Francisco and Cornell University. The study employed an experimental design that reduced the amount of positive or negative emotional content in 689,000 Facebook users’ news feeds to test whether emotions are contagious. The study has since spawned a substantial controversy about the methods used, the extent of its regulation by academic institutions’ review boards, the nature of participants’ informed consent, the ethics of the research design itself, and the need for more explicit opt-in procedures.

In the face of even-tempered thinking from a gathering mob, I want to defend the execution and implications of this study. Others have also made similar arguments [1,2,3]; I guess I’m just a slow blogger. At the outset, I want to declare that I have no direct stake in the outcome of this brouhaha. However, I do have professional and personal relationships with several members of the Facebook Data Science team (none of whom are authors on the study), although the entirety of this post reflects only public information and my opinions alone.

First, as is common in the initial reporting on scientific findings, there was some misinformation about the study that was greatly magnified. These early criticisms claimed the authors mis-represented the size of the observed effects (they didn’t) or that the research wasn’t reviewed by the academic boards charged with human subjects protection (it was). There is likewise a pernicious tendency for the scientific concept of experimental manipulation to be misread in its everyday sense of deception and chicanery: there is no inherent maliciousness in randomly assigning participants to conditions for experimental study. Other reporting on the story has sensationalistically implied users were subjected to injections of negative emotional content so their resulting depression could be more fully quantified. In reality, the study only withheld either positive or negative content from users, which resulted in users seeing more of the posts they would have seen anyway. In all of these cases, the hysteria surrounding a “Facebook manipulates your emotions” or “Facebook is transmitting anger” story got well ahead of any sober reading of the research reported by the authors in the paper.

Second, on the substance of the research, there are still serious questions about the validity of the methodological tools used, the interpretation of results, and the use of inappropriate constructs. Prestigious and competitive peer-reviewed journals like PNAS are not immune from publishing studies with half-baked analyses. Pre-publication peer review (which this study went through) is important for serving as a check against faulty or improper claims, but post-publication peer review—scrutiny from the scientific community and, ideally, replication—is an essential part of scientific research. Publishing in PNAS implies the authors were seeking both a wider audience and a higher level of scrutiny than publishing this paper in a less prominent outlet would have brought. To be clear: this study is not without its flaws, but these debates, in and of themselves, should not be taken as evidence that the study is irreconcilably flawed. If the bar for publication is anticipating every potential objection or addressing every methodological limitation, there would be precious little scholarship for us to discuss. Debates about the constructs, methods, results, and interpretations of a study are crucial for synthesizing research across disciplines and increasing the quality of subsequent research.

Third, I want to move to the issue of epistemology and framing. There is a profound disconnect in how we talk about the ways of knowing how systems like Facebook work and the ways of knowing how people behave. As users, we expect these systems to be responsive, efficient, and useful and so companies employ thousands of engineers, product managers, and usability experts to create seamless experiences.  These user experiences require diverse and iterative methods, which include A/B testing to compare users’ preferences for one design over another based on how they behave. These tests are pervasive, active, and on-going across every conceivable online and offline environment from couponing to product recommendations. Creating experiences that are “pleasing”, “intuitive”, “exciting”, “overwhelming”, or “surprising” reflects the fundamentally psychological nature of this work: every A/B test is a psych experiment.

Somewhere deep in the fine print of every loyalty card’s terms of service or online account’s privacy policy is some language in which you consent to having this data used for “troubleshooting, data analysis, testing, research,” which is to say, you and your data can be subject to scientific observation and experimentation. Whether this consent is “informed” by the participant having a conscious understanding of implications and consequences is a very different question that I suspect few companies are prepared to defend. But why does a framing of “scientific research” seem so much more problematic than contributing to “user experience”? How is publishing the results of one A/B test worse than knowing nothing of the thousands of invisible tests? They reflect the same substantive ways of knowing “what works” through the same well-worn scientific methods.

Fourth, there has been no substantive discussion of what the design of informed consent should look like in this context. Is it a blanket opt-in/out to all experimentation? Is consent needed for every single A/B iteration or only those intended for scientific research? Is this choice buried alongside all the other complex privacy buttons, or are users expected to manage pop-ups requesting their participation? I suspect the omnipresent security dialogues that Windows and OS X have adopted to warn us against installing software have done little to reduce risky behavior. Does adding another layer of complexity around informed consent improve the current anxieties around managing complex privacy settings? How would users go about differentiating official requests for informed consent from abusive apps, spammers, and spoofers? Who should be charged with enforcing these rules, and who are they in turn accountable to? There’s been precious little discussion of how to design informed-consent architectures that balance usability, platform affordances, and the needs of researchers.

Furthermore, we might also consider the ethics of this nascent socio-technical NIMBYism. Researchers at Penn State have looked at the design of privacy authorization dialogues for social networks and found that more fine-grained control over disclosure reduced adoption levels. We demand ever more responsive and powerful systems while circumscribing our own contributions and demanding the benefits of others’ contributions. I imagine the life of such systems would be poor, nasty, brutish, and short. Do more obtrusive interventions or incomplete data collection in the name of conservative interpretations of informed consent promote better science and other public goods? What are the specific harms that we should strive to limit in these systems, and how might we re-tailor 40-year-old policies to these ends?

I want to wrap up by shifting the focus of this conversation from debates about a study that was already done to what should be done going forward. Some of the more extreme calls I’ve seen have advocated for academic societies or institutions to investigate and discipline the authors, others have called for embargoing studies using Facebook data from scholarly publication, and still others have encouraged Facebook employees to quit in protest of a single study.  All this manning of barricades strikes me as a grave over-reaction that could have calamitously chilling effects on several dimensions.  If our overriding social goal is to minimize real or potential harm to participants, what best accomplishes this going forward?

Certainly expelling Facebook from the “community of scholars” might damage its ability to recruit researchers. But are Facebook users really made safer by replacing its current crop of data scientists who have superlative social science credentials with engineers, marketers, and product managers trying to ride methodological bulls they don’t understand? Does Facebook have greater outside institutional accountability by closing down academic collaborations and shutting papers out from peer review and publication? Are we better able to know the potential influence Facebook wields over our emotions, relationships, and politics by discouraging them from publicly disclosing the tools they have developed?  Is raising online mobs to attack industry researchers conducive to starting dialogues to improve their processes for informed consent? Is publicly undermining other scientists the right strategy for promoting evidence-based policy-making in an increasingly hostile political climate?

Needless to say, this episode speaks to the need for rapprochement and sustained engagement between industry and academic researchers. If you care about research ethics, informed consent, and well-designed research, you want companies like Facebook deeply embedded within and responsible to the broader research community. You want the values of social scientists to influence the practice of data science, engineering, user experience, and marketing teams. You want the campus to be open to visiting academic researchers to explore, collaborate, and replicate. You want industry research to be held to academia’s more stringent standards of human subjects protection and regularly shared through peer-reviewed publication.

The Facebook emotional contagion study demands a re-evaluation of prevailing research ethics, design values, and algorithmic powers in massive networked architectures. But the current reaction to this study can only have a chilling effect on this debate by removing a unique form of responsible disclosure through academic collaboration and publishing. This study is guaranteed to serve as an important case study in the professionalization of data science. But academic researchers should make sure their reactions do not unintentionally inoculate industry against the values and perspectives of social inquiry.


Data-Driven Dreams

What makes misinformation spread? It’s a topic of vital importance, with empirical scholarship going back to the 1940s on how wartime rumors spread. Rumors, gossip, and misinformation are pernicious for many reasons, not least because they often reflect deeply-held desires or seem reasonably plausible, which makes them hard to stay ahead of or rebut. I have an interest in the spread of misinformation in social media and have published some preliminary research on the topic. So it was fascinating for me to witness misinformation spread like wildfire through my own academic community, as it speaks to our data-driven anxieties and dreams.

What we wish to be true

Scientific American published a brief article dated June 1 (but released a week beforehand) titled “Twitter to Release All Tweets to Scientists.” The article claims “[Twitter] will make all its tweets, dating back to 2006, freely available to researchers… everything is up for grabs.” (emphasis added) This claim appears to refer to the Twitter Data Grants initiative announced on February 5th in which Twitter said it would “give a handful of research institutions access to our public and historical data.” (emphasis added) On April 17, Twitter announced it had received more than 1,300 proposals from more than 60 different countries, but selected only 6 institutions “to receive free datasets.” There have been no subsequent announcements of another Twitter-sponsored data sharing initiative, and the Scientific American article refers to a February announcement by Twitter. The semantics of the article’s title and central claim are not technically false, as a corpus of tweets was made available gratis (free as in beer) to a selected set of scientists.

Following in the tradition of popular fact-checking websites such as PolitiFact, I rate this claim MOSTLY FALSE. It is not the case that Twitter has made its historical corpus of tweets available to every qualified researcher. This was the interpretation I suspect many of my friends, collaborators, and colleagues were making. But collectively wishing something to be true doesn’t make it so. In this case, the decision about who does and does not get access to their data was already made on April 17, and Twitter (to my knowledge) has made no public announcements about new initiatives to further open its data to researchers. Nothing appears to have changed in their policies around accessing and storing data through their APIs or purchasing data from authorized resellers such as gnip. There’s no more Twitter data available to the median scientific researcher now than there was a day, a month, or a year ago.

What we wish could be changed

Indeed, by selecting only 6 proposals out of approximately 1,300 submissions, this proposal process had an acceptance rate of less than 0.5% — lower than most NSF and NIH grant programs (10-20%), lower than the 5.9% of applicants accepted to Harvard’s Class of 2018, lower than Wal-Mart’s 2.6% acceptance rate for its D.C. store, but about average for Google’s hiring. There is obviously a clear and widespread demand by scientific researchers to use Twitter data for a broad variety of topics, many of which are reflected in the selected teams’ interests across public health, disaster response, entertainment, and social psychology. But we shouldn’t cheer 99.5% of interested researchers being turned away from ostensibly public data as a victory for “open science” or data accessibility.

A major part of this attraction is that Twitter data is like an illusory oasis in the desert: the succor always seems to lie just beyond the next setback. The data contains the content and social network structure of communication exchanges for millions of users spread across the globe, with fine-grained metadata that captures information about audience (followers), impact (retweets/favorites), timestamps, and sometimes locations. Of course researchers from public health, to disaster response, to social psychology want to get their hands on it. My collaborators at Northeastern University and other institutions and I put in two separate Twitter data grant proposals to study misinformation propagation as well as the dynamics around the Boston Marathon bombings. We felt these were worthy topics of research (as have NSF funding panels), but they were unfortunately not selected. So you’re welcome to chalk this post up to my sour grapes, if you’d like.

Twitter’s original call for proposals notes that “it has been challenging for researchers outside the company… to access our public, historical data.” This is the methodological understatement of the decade for my academic collaborators, who must either write grants to afford the thousands of dollars in fees resellers charge for this data, build complex computing infrastructures to digest and query the streams of public data themselves, or adopt strategies that come dangerously close to DDoS attacks to get around Twitter’s API rate limits. All of this is simply to get the raw material of tweets that feeds still other processes of cleanup, parsing, feature extraction, and statistical modeling. Researchers unfamiliar with JSON, NoSQL, or HDFS must partner with computer and information scientists who go, at considerable peril to their own methods, interests, and infrastructures, into these data mines.
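Even staying inside those rate limits is its own small engineering exercise. The sketch below is a minimal, hypothetical example (not code from any of my projects) of paging through the v1.1 search endpoint with requests while backing off whenever the rate-limit headers run dry; the bearer token and query are placeholders.

```python
import time
import requests

SEARCH_URL = "https://api.twitter.com/1.1/search/tweets.json"  # v1.1 REST search endpoint

def polite_search(query, bearer_token, max_pages=10):
    """Page through search results while respecting Twitter's rate-limit headers."""
    headers = {"Authorization": "Bearer " + bearer_token}
    params = {"q": query, "count": 100}
    tweets = []
    for _ in range(max_pages):
        resp = requests.get(SEARCH_URL, headers=headers, params=params)
        remaining = int(resp.headers.get("x-rate-limit-remaining", 1))
        if resp.status_code == 429 or remaining == 0:
            # Sleep until the rate-limit window resets rather than hammering the API
            reset = int(resp.headers.get("x-rate-limit-reset", time.time() + 60))
            time.sleep(max(reset - time.time(), 1))
            continue
        resp.raise_for_status()
        batch = resp.json().get("statuses", [])
        if not batch:
            break
        tweets.extend(batch)
        # Page backwards through older tweets
        params["max_id"] = min(t["id"] for t in batch) - 1
    return tweets
```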

What should be changed

I said before this data is “ostensibly public”, but Twitter has very good reasons to build the walls around this garden ever higher. Let me be clear: Twitter deserves to be lauded for launching such a program in the first place. Hosting, managing, querying, and structuring these data require expensive technical infrastructures (both the physical boxes and the specialized code) as well as expensive professionals to develop, maintain, and use them. Twitter is a private, for-profit company and its users grant Twitter a license to use, store, and re-sell the things they post to it. So it’s well within its rights to charge or restrict access to this data. Indeed there are foundational issues of privacy and informed consent that should probably discourage us from making tweets freely available as there are inevitably quacks hiding among the quantoids like myself submitting proposals.

It’s also important to reflect on issues that Kate Crawford and danah boyd have raised about inequalities and divides in access to data. Twitter did an admirable job in selecting research institutions that are not solely composed of American research universities with super-elite computer science programs. But it doesn’t alter the structural arrangements in which the social, economic, and political interests of those who (borrowing from Lev Manovich) create these data are distinct from the interests of those who collect, analyze, and now share it with other researchers. The latter group obviously sets the terms of who, what, where, when, why, and how this data is used. Twitter would have good reasons to not provide data grants to researchers interested in criticizing Silicon Valley tech culture or identifying political dissenters, yet this model nevertheless privileges Twitter’s interests above others’ for both bad and good reasons.

Some models for big open data

So what might we do going forward? As I said, I think Twitter and other social media companies should be lauded for investing resources into academic research. While it makes for good press and greases the recruitment pipeline, it can still involve non-trivial costs and headaches that can be risky for managers and make investors unhappy. I think there are a few models that Twitter and similar organizations might consider going forward to better engage with the academic community.

First, on the issue of privacy, social media companies have an obvious responsibility to ensure the privacy of their users and to prevent the misuse of their data. This suggests a “visiting researcher” model in which researchers could conduct their research under strict technical and ethical supervision while having privileged access to both public and private metadata. This is a model akin to what the U.S. Census uses as well as what Facebook has been adopting to deal with the very sensitive data they collect.

Second, organizations could create outward facing “data liaisons” who provide a formal interface between academic communities’ interests and internal data and product teams. These liaisons might be some blend of community management, customer service, and ombudsperson who mobilize, respond to, and advocate for the interests of academics. The Wikimedia Foundation is an exemplar of this model as it has staffers and contractors who liaise with the community as well as analytics staff who assist researchers by building semi-public toolservers.

Third, organizations could publish “data dumps” in archives on a rolling basis. These might be large-scale datasets that are useful for basic tasks (e.g., userid to username key-value pairs),  anonymized data that could be useful in a particular domain (e.g., evolution of the follower graph), or archives of historical data (e.g., data from 5 years ago). The Wikimedia Foundation and StackOverflow both provide up-to-date data dumps that have been invaluable for academic research by reducing the overhead for researchers to scrape these data together by themselves.

Finally, on the issue of academic autonomy and conflicts of interest, social media companies could adopt a “data escrow” model. Twitter would provide the data to a panel of expert reviewers who could then in turn award it to worthy research teams after peer review. This privileges the academic autonomy and peer review that have underpinned norms of teaching, publishing, funding, and promotion for decades, and it would prevent Twitter’s conflicts of interest from biasing legitimate academic research. Twitter could rotate between different disciplinary themes such as public health or incentivize interdisciplinary themes like crisis informatics. I haven’t seen a model of this in action before, but let me know of any in the comments or… via Twitter :)

And stop linking to that goddamned Scientific American article and link to this one instead!

Crime, Time, and Weather in Chicago

Can we attribute a fall in violent crime in Chicago to its new concealed carry laws? This was the argument many conservative blogs and news outlets were making in early April after statistics were released showing a marked drop in the murder rate in Chicago. These articles attributed “Chicago’s first-quarter murder total [hitting] its lowest number since 1958” to the deterrent effects of Chicago’s concealed carry permits being issued in late February (RedState). Other conservative outlets latched onto the news to argue the policy “is partly responsible for Chicago’s across-the-board drop in the crime” (TheBlaze) or that the policy contributed to the “murder rate promptly [falling] to 1958 levels” (TownHall).

Several articles hedged about the causal direction of any relationship and pointed out that this change is hard to separate from generally falling crime rates as well as the atrocious winter weather this season (PJMedia, Wonkette, HuffPo). The central question here is whether the adoption of the concealed carry policy in March 2014 contributed to significant changes in crime rates, rather than other social, historical, or environmental factors.

However, an April 7 feature story by David Bernstein and Noah Isackson in Chicago magazine found substantial evidence of violent crimes like homicides, robberies, burglaries, and assaults being reclassified, downgraded to more minor crimes, and even closed as noncriminal incidents. They argue that since Police Superintendent Garry McCarthy arrived in May 2011, crime has improbably plummeted in spite of high unemployment and a significant contraction in the Chicago Police Department’s beat cops. An audit by Chicago’s inspector general into these crime numbers suggests assaults and batteries may have been underreported by more than 24%. This raises a second question: can we attribute the fall in violent crime in Chicago to systematic underreporting of criminal statistics?

In this post, I do four things:

  • First, I demonstrate the relationship crime has with environmental factors like temperature as well as temporal factors like the hour of the day and the day of the week. I use a common technique in signal processing to show that criminal activity not only follows an annual pattern but also a weekly one.

Homicides by time of day and year

Temperature and crime

  • Second, I estimate a simple statistical model based on the findings above. This model combines temperature, the day of the week, the week of the year, and longer-term historical trends and, despite its simplicity (relative to more advanced types of time series models that could be estimated), does a very good job of explaining the dynamics of crime in Chicago over the past 13 years. (A sketch of this kind of model appears after the list below.)

Statistical model

  • Third, I use this statistical model to make predictions about crime rates for the rest of 2014. If there’s a significant fall-off in violent crime following the introduction of the concealed carry policy in March 2014, this could be evidence of its success as a deterrent (or that this is a bad model). But if the actual crime data matches the model’s forecasted trends, it suggests the new concealed carry policy has had no effect. There are no findings here as yet, but I expect that as the data comes in there will be no significant changes after March 2014.

Homicide predictions, 2014

  • Fourth, I find evidence of substantial discrepancies in the reporting of some crime data since 2013. This obviously imperils the findings of the analyses done above, but it also replicates the findings reported by Bernstein and Isackson. The statistical model above expected that property crimes such as arson, burglary, theft, and robbery should follow a particular pattern, from which the observed data significantly deviates after 2013. I perform some additional analyses to uncover which crimes and reporting districts are driving this discrepancy as well as how severe it is.

Crime rate changes by CPD district

Deviation in reported statistics from model predictions
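For readers who want a concrete sense of the approach in the first two bullets, here is a minimal, hypothetical version of the workflow: a periodogram to surface the weekly and annual cycles, then a simple OLS combining temperature, day of week, week of year, and a trend. The file name and column names are placeholders, not the actual data prepared in the notebook.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical daily series with columns for date, crime count, and mean temperature
daily = pd.read_csv("chicago_daily_crime.csv", parse_dates=["date"])

# 1. A periodogram (squared magnitude of the FFT of the de-meaned series) surfaces the
#    dominant cycles; peaks near 7 and 365 days indicate weekly and annual rhythms.
counts = daily["crimes"].values - daily["crimes"].mean()
power = np.abs(np.fft.rfft(counts)) ** 2
freqs = np.fft.rfftfreq(len(counts), d=1.0)   # cycles per day
periods = 1.0 / freqs[1:]                     # days per cycle (skip the zero frequency)
strongest = periods[np.argsort(power[1:])[::-1][:5]]
print("Strongest periods (days):", np.round(strongest, 1))

# 2. A simple OLS in the spirit of the model above: temperature plus day-of-week and
#    week-of-year dummies and a linear trend.
daily["dow"] = daily["date"].dt.dayofweek
daily["week"] = daily["date"].dt.isocalendar().week.astype(int)
daily["trend"] = np.arange(len(daily))
model = smf.ols("crimes ~ temp + C(dow) + C(week) + trend", data=daily).fit()
print(model.summary().tables[1])
```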

The repo containing the code is available on GitHub.

The complete notebook containing these and related analyses can be found here.

A network analysis of Bob Ross’s paintings

While it makes me look like I’m stalking him, Walt Hickey is doing some of the more fun data analysis on the new FiveThirtyEight. His recent April 13 article examined features from the entire canon of Bob Ross paintings, and he happily made this data available within days. The data is stored as a rectangular matrix with episodes (pieces) as rows and specific features as columns. A piece like “Mountain-By-The-Sea” from Season 9, Episode 12 is coded as containing 10 features: “beach”, “bushes”, “clouds”, “cumulus”, “deciduous”, “mountain”, “ocean”, “snowy mountain”, “tree”, and “waves”.

To a network scientist like me, this is a common way of storing relationships. We typically imagine social networks as people connected to other people via friendship links, but we can also create many other kinds of networks, like people connected to employers or the ingredients contained within recipes. These latter two networks are called “bipartite networks” because they involve two distinct types of nodes (people and employers, ingredients and recipes). We can create a bipartite network of paintings from Hickey’s data where a painting is connected to its features and these features are in turn shared by some other paintings. In the figure below, we can see that some features like trees are highly central (bigger text means more paintings contain that feature). Conversely, features like “beach”, “ocean”, or “waves” are more peripheral, in the upper left.

Bipartite network of features and paintings

For example, while “Mountain-By-The-Sea” is described by the 10 features mentioned above, other paintings share some of these same features. Twenty-six other paintings, such as “Surf’s Up” and “High Tide”, also contain the “beach” feature, and these beach-related paintings have other features that they share in common with still other paintings. And “Mountain-By-The-Sea” is indirectly connected to “Mountain In and Oval” via “bushes”.

Performing a mathematical operation (a one-mode projection) to convert the bipartite network into a more traditional network, we can connect these paintings directly to each other if they share a feature in common. In addition to having “bushes” in common, “Mountain-By-The-Sea” and “Mountain In and Oval” also have “clouds”, “cumulus”, “deciduous”, “mountain”, and “tree” in common. The projection assigns the relationship between “Mountain-By-The-Sea” and “Mountain In and Oval” a weight of 6 because they share 6 features in common.
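For the curious, here is a minimal sketch of this projection in Python with networkx. It assumes a local copy of Hickey’s episode-by-feature matrix; the file and column names (“elements-by-episode.csv”, “TITLE”, “EPISODE”) are illustrative rather than guaranteed to match his release.

```python
import pandas as pd
import networkx as nx
from networkx.algorithms import bipartite

# One row per painting, one 0/1 column per feature (column names are assumptions)
df = pd.read_csv("elements-by-episode.csv")
paintings = df["TITLE"].tolist()
features = [c for c in df.columns if c not in ("EPISODE", "TITLE")]

# Bipartite graph: each painting is linked to the features it contains
B = nx.Graph()
B.add_nodes_from(paintings, bipartite=0)
B.add_nodes_from(features, bipartite=1)
for _, row in df.iterrows():
    B.add_edges_from((row["TITLE"], f) for f in features if row[f] == 1)

# One-mode projection: paintings are linked by how many features they share,
# so two paintings with six features in common get an edge of weight 6
P = bipartite.weighted_projected_graph(B, paintings)
```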

While Ross was a prolific painter, we well know that some features like trees occur more often than others. As a result, almost every painting is connected to every other painting by at least one feature, which isn’t terribly interesting to visualize as it would be a big “hairball.” Instead, we can take advantage of the fact that paintings have different levels of similarity to each other. We’ll use the SBV backbone algorithm developed by some of my network science colleagues at Northeastern University to extract only those relationships between paintings that are particularly interesting and then visualize the resulting network.

Using an excellent network visualization tool called Gephi, I laid the network out and colored it below. Again, a painting (represented as a label below) is connected to another painting if they both share features in common. The SBV algorithm removed some links that would otherwise be present, but we can interpret the remaining links as the “most important” ones. I then used Gephi’s modularity function to discover “subcommunities” within this network that are highly similar to each other. These communities are represented by the six colors below.

  • Red These are ocean scenes which are not unusual in Ross’s repertoire — there are many paintings in here — but they share few features with other paintings in his oeuvre. Note that “Mountain by the Sea” is one of the three paintings that connect this group back to the rest.
  • Yellow These are the “waterfall” paintings.
  • Green These are the “mountain” paintings.
  • Purple These are the “snow” paintings.
  • Light blue These are the “homes” paintings.
  • Dark blue Something of a catchall.

Projected painting-to-painting network

We can also compute the “entropy” of all the features Ross used in a given season. Entropy is a way of measuring how disordered a system is; in our case, how the appearances of features are distributed across a season. If Ross had used a single feature in every painting that season, the entropy would be 0; if he had spread his paintings evenly across many different features, the entropy would be high. Basically, entropy is another way of measuring how experimental or diverse Ross’s work was in a given season based on how many different features he used and how often.
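As a rough sketch of the calculation (Shannon entropy over a season’s feature counts; the toy numbers below are made up):

```python
import numpy as np

def season_entropy(feature_counts):
    """Shannon entropy (in bits) of how feature appearances are distributed in a season."""
    counts = np.asarray(feature_counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

# A season dominated by one feature is far less diverse than one spread evenly
print(season_entropy([13, 1, 1]))  # ~0.70 bits: mostly the same feature
print(season_entropy([5, 5, 5]))   # ~1.58 bits: features used evenly
```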

The first two seasons have very low entropy, suggesting a very conservative approach. Seasons 3 and 4 see a drastic departure in style and much higher entropy (more experimentation) before the entropy falls again in Seasons 5 through 7 to a more conservative set of features. Season 8 shows another surge in experimentation — different features being used in different amounts during the season. Season 13 shows some retrenchment back to more conservative habits, followed again by a more experimental season. But in general, there’s a pattern of low entropy in the early years, a middle period of high levels of experimentation, followed in the later years by a consistent but moderate level of differentiation.

Entropy over time

The full code for my analysis of Hickey’s data is on GitHub.

The Need for Openness in Data Journalism

Do films that pass the Bechdel Test make more money for their producers? I’ve replicated Walt Hickey’s recent article in FiveThirtyEight to find out. My results confirm his own in part, but also find notable differences that point, at a minimum, to the need for clarification. While I am far from the first to make this argument, this case is illustrative of a larger need for journalism and other data-driven enterprises to borrow from hard-won scientific practices of sharing data and code as well as supporting the review and revision of findings. This admittedly lengthy post is not only a critique of this particular case but also an attempt to work through what open data journalism could look like.

The Angle: Data Journalism should emulate the openness of science

New data-driven journalism ventures such as FiveThirtyEight have faced criticism from many quarters, particularly around the naïveté of assuming credentialed experts can be bowled over by quantitative analysis as easily as the terrifyingly innumerate pundits who infest our political media [1,2,3,4]. While I find these critiques persuasive, I depart from them here to argue instead that I have found this “new” brand of data journalism disappointing foremost because it wants to perform science without abiding by scientific norms.

The questions of demarcating what is or is not science are fraught, so let’s instead label my gripe a “failure to be open.” By openness, I don’t mean users commenting on articles or publishing whistleblowers’ documents. I mean “openness” more in the sense of “open source software” where the code is made freely available to everyone to inspect, copy, modify, and redistribute. But the principles of open-source software trace their roots more directly back to norms in the scientific community that Robert Merton identified and that came to be known as the “CUDOS” norms. It’s worth reviewing two of these norms because Punk-ass Data Journalism is very much on the lawn of Old Man Science and therein lie possibilities for exciting adventures.

The first and last elements of Merton’s “CUDOS” norms merit special attention for our discussion of openness. Communalism is the norm that scientific results are shared and become part of a commons that others can build upon — this is the bit about “standing upon the shoulders of giants.” Skepticism is the norm that claims must be subject to organized scrutiny by the community — which typically manifests as peer review. Both of these norms strongly motivated philosophies in the open source movement, and while they are practiced imperfectly in my experience within the social and information sciences (see my colleagues’ recent work on the “Parable of Google Flu“), I nevertheless think data journalists should strive to make them their own practice as well.

  1. Data journalists should be open in making their data and analysis available to all comers. This flies in the face of traditions and professional anxieties surrounding autonomy, scooping, and the protection of sources. But how can claims be evaluated as true unless they can be inspected? If I ask a data journalist for her data or code, is she bound by the same norms as a scientist to share it? Where and how should journalists share and document this code and data?
  2. Data journalists should be open in soliciting and publishing feedback. Sure, journalists are used to clearing their story with an editor, but have they solicited an expert’s evaluation of their claims? How willing are they to publish critiques of, commentary on, or revisions to their findings? If not, what are the venues for these discussions? How should a reporter or editor manage such a system?

The Guardian‘s DataBlog and ProPublica have each been doing exemplary work in posting their datasets, code, and other tools for several years. Other organizations like the Sunlight Foundation develop outstanding tools to aid reporters and activists, the Knight Foundation has been funding exciting projects around journalism innovation for years, and the Data Journalism Handbook reviews other excellent cases as well. My former colleague, Professor Richard Gordon at Medill, reminded me that ideas around “computer-assisted reporting” have been in circulation in the outer orbits of journalism for decades. For example, Philip Meyer has been (what we would now call) evangelizing since the 1970s for “precision journalism” in which journalists adopt the tools and methods of the social and behavioral sciences as well as their norms of sharing data and replicating research. Actually, if you stopped reading now and promised to read his 2011 Hedy Lamarr Lecture, I won’t even be mad.

The remainder of this post is an attempt to demonstrate some ideas of what an “open collaboration” model for data journalism might look like. To that end, this article tries to do many things for many audiences, which admittedly makes it hard for any single person to read. Let me try to sketch some of these out now and send you off on the right path.

  • First, I use an article Walt Hickey of FiveThirtyEight published on the relationship between the financial performance of films and the extent to which they grant their female characters substantive roles as a case to illustrate some pitfalls in both the practice and interpretation of statistical data. This is a story about having good questions, ambiguous models, wrong inferences, and exciting opportunities for investigation going forward. If you don’t care for code or statistics, you can start reading at “The Hook” below and stop after “The Clip” below.
  • Second, for those readers who are willing to pay what one might call the “Iron Price of Data Journalism”, I go “soup to nuts” and attempt to replicate Hickey’s findings. I document all the steps I took to crawl and analyze this data to illustrate the need for better documentation of analyses and methods. This level of documentation may be excessive or it may yet be insufficient for others to replicate my own findings. But providing this code and data may expose flaws in my technical style (almost certainly), shortcomings in my interpretations (likely), and errors in my data and modeling (hopefully not). I actively invite this feedback via email, tweets, comments, or pull requests and hope to learn from it. I wish new data journalism enterprises adopted the same openness and tentativeness in their empirical claims. You should start reading at “Start Your Kernels…”
  • Third, I want to experiment with styles for analyzing and narrating findings that make both available in the same document. The hope is that motivated users can find the detail and skimmers can learn something new or relevant while being confident they can come back and dive in deeper if they wish. Does it make sense to have the story up front and the analysis “below the fold” or to mix narrative with analysis? How much background should I presume or provide about different analytical techniques? How much time do I need to spend on tweaking a visualization? Are there better libraries or platforms for serving the needs of mixed audiences? This is a meta point as we’re in it now, but it’ll crop up in the conclusion.
  • Fourth, I want to experiment with technologies for supporting collaboration in data journalism by adopting best practices from open collaborations in free software, Wikipedia, and others. For example, this blog post is not written in a traditional content-management system like WordPress, but is an interactive “notebook” that you can download so you can execute the code and verify that it works. Furthermore, I’m also “hosting” this data on GitHub so that others can easily access the writeup, code, and data, to see how it’s changed over time (and has it ever…), and to suggest changes that I should incorporate. These can be frustrating tools with demoralizing learning curves, but they are incredibly powerful once mastered. Moreover, there are amazing resources and communities who exist to support newcomers, and new tools are being released to flatten these learning curves. If data journalists joined data scientists and data analysts in sharing their work, it would contribute to an incredible knowledge commons of examples and cases that lowers the bar for others who want to learn. This is also a meta point since it exists outside of this story, but I’ll also come back to it in the conclusion.

In this outro to a very unusual introduction, I want to thank Professor Gordon from above, Professor Deen Freelon, Nathan Matias, and Alex Leavitt for their invaluable feedback on earlier drafts of this… post? article? piece? notebook?

The Hook: The Bechdel Test article in FiveThirtyEight

Walt Hickey published an article on April 1 on FiveThirtyEight, titled The Dollar-And-Cents Case Against Hollywood’s Exclusion of Women. The article examines the relationship between movies’ finances and their portrayals of women using a well-known heuristic called the Bechdel test. The test has 3 simple requirements: a movie passes the Bechdel test if there are (1) two women in it, (2) who talk to each other, (3) about something besides a man.

Let me say at the outset, I like this article: It identifies a troubling problem, asks important questions, identifies appropriate data, and brings in relevant voices to speak to these issues. I should also include the disclaimer that I am not an expert in the area of empirical film studies like Dean Keith Simonton or Nick Redfern. I’ve invested a good amount of time in criticizing the methods and findings of this article, but to Hickey’s credit, I also haven’t come across any scholarship that has attempted to quantify this relationship before: this is new knowledge about the world. Crucially, it speaks to empirical scholarship that has exposed how films with award-winning female roles are significantly less likely to win awards themselves [5], older women are less likely to win awards [6], actresses’ earnings peak 17 years earlier than actors’ earnings [7], and differences in how male and female critics rate films [8]. I have qualms about the methods, and others may be justified in complaining it overlooks related scholarship like the work I cited above, but this article is in the best traditions of journalism that focuses our attention on problems we should address as a society.

Hickey’s article makes two central claims:

  1. We found that the median budget of movies that passed the test…was substantially lower than the median budget of all films in the sample.
  2. We found evidence that films that feature meaningful interactions between women may in fact have a better return on investment, overall, than films that don’t.

I call Claim 1 the “Budgets Differ” finding and Claim 2 the “Earnings Differ” finding. As summarized, the claims are relatively straightforward to test: is there an effect of Bechdel scores on earnings and budgets, controlling for other explanatory variables?

But before I even get to running the numbers, I want to examine the claims Hickey made in the article. His interpretations of the return-on-investment results are particularly problematic readings of basic statistics. Hickey reports the following findings from his models (emphasis added).

We did a statistical analysis of films to test two claims: first, that films that pass the Bechdel test — featuring women in stronger roles — see a lower return on investment, and second, that they see lower gross profits. We found no evidence to support either claim.

On the first test, we ran a regression to find out if passing the Bechdel test corresponded to lower return on investment. Controlling for the movie’s budget, which has a negative and significant relationship to a film’s return on investment, passing the Bechdel test had no effect on the film’s return on investment. In other words, adding women to a film’s cast didn’t hurt its investors’ returns, contrary to what Hollywood investors seem to believe.

The total median gross return on investment for a film that passed the Bechdel test was $2.68 for each dollar spent. The total median gross return on investment for films that failed was only $2.45 for each dollar spent.

…On the second test, we ran a regression to find out if passing the Bechdel test corresponded to having lower gross profits — domestic and international. Also controlling for the movie’s budget, which has a positive and significant relationship to a film’s gross profits, once again passing the Bechdel test did not have any effect on a film’s gross profits.

Both models (whatever their faults, and there are some, as we will explore in the next section) apparently produce an estimate that the Bechdel test has no effect on a film’s financial performance. That is to say, the statistical test could not determine with greater than 95% confidence that the correlation between these two variables was greater or less than 0. Because we cannot confidently rule out the possibility of there being zero effect, we cannot make any claims about its direction.

Hickey argues that passing the test “didn’t hurt its investors’ returns”, which is to say there was no significant negative relationship, but neither was there a significant positive relationship: The model provides no evidence of a positive correlation between Bechdel scores and financial performance. However, Hickey switches gears and, in the conclusion, writes:

…our data demonstrates that films containing meaningful interactions between women do better at the box office than movies that don’t

I don’t know what analysis supports this interpretation. The analysis Hickey just performed, again taking the findings at face value, concluded that “passing the Bechdel test did not have any effect on a film’s gross profits”, not that “passing the Bechdel test increased the film’s profits.” While Bayesians will cavil about frequentist assumptions — as they are wont to do — and the absence of evidence is not evidence of absence, the “Earnings Differ” finding is not empirically supported under any appropriate interpretation of the analysis. The appropriate conclusion from Hickey’s analysis is “there is no relationship between the Bechdel test and financial performance,” which he makes… then ignores.

What to make of this analysis? In the next section, I summarize the findings of my own analysis of the same data. In the subsequent sections, I attempt to replicate the findings of this article, and in so doing, highlight the perils of reporting statistical findings without adhering to scientific norms.

The Clip: Look in here for what to tweet

I tried to retrieve and re-analyze the data that Hickey described in his article, but came to some conclusions that were the same, others that were very different, and still others that I hope are new.

I was able to replicate some of his findings but not others, because specific decisions had to be made about the data or modeling that dramatically change the results of the statistical models. However, the article provides no specifics, so we’re left to wonder when and where these findings hold, which points to the need for openness in sharing data and code. Specifically, while Hickey found that women’s representation in movies had no significant relationship to revenue, I found a positive and significant relationship.

But the questions and hypotheses Hickey posed about systematic biases in Hollywood were also the right ones. With a reanalysis using different methods as well as adding in new data, I found statistically significant differences in popular ratings also exist. These differences persist in the face of other potential explanations about differences arising because of genres, MPAA ratings, time, and other effects.

In the image below, we see that movies that have non-trivial women’s roles get 24% lower budgets, make 55% more revenue, get better reviews from critics, and face harsher criticism from IMDB users. Bars that are faded out mean my models are less confident about these findings being non-random while bars that are darker mean my models are more confident that this is a significant finding.

Movies passing the Bechdel test (the red bars):

  • …receive budgets that are 24% smaller
  • …make 55% more revenue
  • …are awarded 1.8 more Metacritic points by professional reviewers
  • …are awarded 0.12 fewer stars by IMDB’s amateur reviewers

 

Takeaway

Read the entire replication here.

 

Conclusions

The four main findings from this analysis of the effects of women’s roles in movies are summarized in the chart above. They point to a paradox in which movies that pass an embarrassingly low bar for female character development make more money and are rated more highly by critics, but have to deal with lower budgets and more critical community responses. Is this definitive evidence of active discrimination in the film industry and culture? No, but it suggests systemic prejudices are leading producers to irrationally ignore significant evidence that “feminist” films make them more money and earn higher praise.

The data that I used here was scraped from public-facing websites, but those who are more familiar with how these data are generated may have reasons to think they are inaccurate. Similarly, the models I used here are simple Stats 101 ordinary least squares regression models with some minor changes to account for categorical variables and skewed data. There are no Bayesian models, no cross-validation or bootstrapping, and no exotic machine learning methods here. But in making the data available (or at least the process for replicating how I obtained my own data), others are welcome to perform and share the results of such analyses — and this is ultimately my goal in asking data journalism to adopt the norms of open collaboration. When other people take their methodological hammers or other datasets to the finding and still can’t break it, we have greater confidence that the finding is “real”.
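For readers who want a concrete picture of what “Stats 101 OLS with minor changes” means in practice, here is a minimal, hypothetical sketch (the file and column names are placeholders, not those from the actual replication): log-transform the skewed dollar amounts, add categorical controls, and convert the coefficient on the pass/fail dummy into a percentage effect.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder dataset with a 0/1 Bechdel pass flag, budget, revenue, and controls
movies = pd.read_csv("bechdel_movies.csv")

# Log-transform the skewed dollar amounts; C() turns the controls into dummies
model = smf.ols(
    "np.log(revenue) ~ passes_bechdel + np.log(budget) + C(genre) + C(mpaa_rating) + C(year)",
    data=movies,
).fit()

# In a log-outcome model, a coefficient b on a dummy variable corresponds to a
# (exp(b) - 1) * 100 percent change; e.g., b = 0.44 would be roughly +55%.
b = model.params["passes_bechdel"]
print("Estimated revenue effect of passing: {:.1f}%".format(100 * (np.exp(b) - 1)))
```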

But the length and technical complexity of this post also raise the question of who the audience for this kind of work is. Journalistic norms emphasize quick summaries turned around rapidly, with opaque discussions of methods and analysis and definitive claims. Scientific norms emphasize more deliberative and transparent processes that prize abstruse discussions and tentative claims about their “truth”. I am certainly not saying that Hickey should have included the output of regression models in his article — 99% of people won’t care to see that. But in the absence of soliciting peer reviews of this research, how are we as analysts, scientists, and journalists to evaluate the validity of the claims unless the code and data are made available for others to inspect? Even this is a higher bar than many scientific publications hold their authors to (and I’m certainly guilty of not doing more to make my own code and data available), but it should be the standard, especially for a genre of research like data journalism where the claims reach such large audiences.

However, there are exciting technologies for supporting this kind of open documentation and collaboration. I used an “open notebook” technology called IPython Notebook to write this post in such a way that the text, code, and figures I generated are all stitched together into one file. You’re likely reading this post on a website that lets you view any such Notebook on the web, where other developers and researchers share code about how to do all manner of data analysis. Unfortunately, IPython Notebook was never intended as a word processing or blogging tool, so the lack of features such as more dynamic layout options or spell-checking will frustrate many journalists (apologies for the typos!). However, there are tools for customizing the CSS so that it plays well (see here and here). The code and data are hosted on GitHub, which is traditionally used for software collaboration, but its features for others to discuss problems in my analysis (issue tracker) or propose changes to my code (pull requests) promote critique, deliberation, and improvement. I have no idea how these will work in the context of a journalistic project, and to be honest, I’ve never used them before, but I’d love to try and see what breaks.

Realistically, practices only change if there are incentives to do so. Academic scientists aren’t awarded tenure on the basis of writing well-trafficked blogs or high-quality Wikipedia articles; they are promoted for publishing rigorous research in competitive, peer-reviewed outlets. Likewise, journalists aren’t promoted for providing meticulously-documented supplemental material or replicating other analyses instead of contributing to coverage of a major news event. Amidst contemporary anxieties about information overload as well as the weaponization of fear, uncertainty, and doubt tactics, data-driven journalism could serve a crucial role in empirically grounding our discussions of policies, economic trends, and social changes. But unless its new leaders set and enforce standards that emulate the scientific community’s norms, this data-driven journalism risks falling into traps that can undermine the public’s and scientific community’s trust.

This suggests several models going forward:

  • Open data. Data-driven journalists could share their code and data on open source repositories like GitHub for others to inspect, replicate, and extend. But as any data librarian will rush to tell you, there are non-trivial standards for ensuring the documentation, completeness, formatting, and non-proprietary nature of data.
  • Open collaboration. Journalists could collaborate with scientists and analysts to pose questions that they jointly analyze and then write up as articles or features as well as submitting for academic peer review. But peer review takes time and publishing results in advance of this review, even working with credentialed experts, doesn’t imply their reliability.
  • Open deliberation. Organizations that practice data-driven journalism (to the extent this is different from other flavors of journalism) should invite and provide empirical critiques of their analyses and findings. Making well-documented data available or finding the right experts to collaborate with are extremely time-intensive, but if you’re going to publish original empirical research, you should accept and respond to legitimate critiques.
  • Data ombudsmen. Data-driven news organizations might consider appointing independent advocates to represent public interests and promote scientific norms of communalism, skepticism, and empirical rigor. Such a position would serve as a check against authors making sloppy claims, using improper methods, analyzing proprietary data, or acting for their personal benefit.

I have very much enjoyed thinking through many of these larger issues and confronting the challenges of the critiques I’ve raised. I look forward to your feedback, and I very much hope this drives conversations about what kind of science data-driven journalism hopes to become.

 

Checklist for Reviewing (And Thus Writing) A Research Paper

The instructions given to the program committee members for GROUP 2014 are unusually detailed and well-organized. I think they provide a great jumping-off point for scholars to reflect on how they review as well as write papers. Kudos to Program Co-Chairs David McDonald and Pernille Bjørn for writing these up — reviewers and authors everywhere should take notice and step up their game!

1: Briefly SUMMARIZE the main points of the submission:

2: Describe the IMPORTANCE OF THE WORK:
-Is it new, original and/or innovative?
-Do they make a case in the paper itself for its originality (relationship to previous work)?
-Is it a useful, relevant and/or significant problem?
-Do they sufficiently motivate the problem?
-Is the work done on the problem useful, relevant and/or significant?
-Did you learn anything?
-Is this submission appropriate for the GROUP conference?

3: Describe the CLARITY OF GOALS, METHODS AND CREDIBILITY OF RESULTS:
-Are the goal(s) clear?
-Do they clearly describe what was done and/or how it was studied?
-Are the method(s) and/or analysis used to achieve the goal appropriate and used correctly?
-Do they provide sufficient data and/or well-supported arguments?
-Do the results and discussion follow from the method and/or argumentation? Are they believable?
-Does the paper cover all the important issues at the appropriate level of detail?
-Do they cite relevant work?
-Is there sufficient detail so that another researcher can replicate (more or less) the work?

4: Describe the QUALITY OF THE WRITING, addressing the following:
-Is the writing clear and concise?
-Is the paper appropriately focused?
-Is the paper well-organized?
-Do they provide the right level of detail?
-Will the paper be understandable to the GROUP audience, including international readers?
-Are figures clear?
-Any figures needed/not needed?

5: Provide any OTHER COMMENTS you believe would be useful to the author (including pointers to missing relevant work).

6: SUMMARIZE your ASSESSMENT of the paper, pointing out which aspects described above you weighted most heavily in your rating.

HGSE Guest Lecture

Many thanks to Professor Karen Brennan for inviting me to speak to her class “Teacher Learning and Technology” about “Data“. I’ve posted the very brief slides I used here. Certainly nothing new under the sun, but hopefully above the median level of discourse about “big data.”

The ensuing discussion with the students was excellent and we covered a broad range of topics such as surveillance and privacy, mixed methods research, reducing the learning curve for analysis tools, and the importance of formulating research questions.

 

Does Wikipedia editing activity forecast Oscar wins?

The Academy Awards just concluded and much will be said about Ellen DeGeneres’s most-retweeted tweet (my coauthors and I have posted an analysis here that shows these “shared media” or “livetweeting” events disproportionately award attention to already-elite users on Twitter). I thought I’d use the time to try to debug some code I’m using to retrieve editing activity information from Wikipedia.

A naive but simple theory I wanted to test was whether editing activity could reliably forecast Oscar wins. Academy Awards are selected from approximately 6,000 ballots and the process is known for intensive lobbying campaigns to sway voters as well as tapping into the zeitgeist about larger social and cultural issues.

I assume that some of this lobbying and zeitgeist activity would both manifest in the aggregate in edits to the English Wikipedia articles about the nominees. In particular, I measure two quantities: (1) the changes (revisions) made to the article and (2) the number of new editors making revisions. The hypothesis is simply that articles about nominees with the most revisions and the most new editors should win. I look specifically at the time between announcement of the nominees in early January and March 1 (an arbitrary cutoff).
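As a rough sketch of how these two quantities can be pulled from the MediaWiki API (the article title and dates below are placeholders, and counting distinct editors in the window is a simplification of “new editors”):

```python
import requests

API = "https://en.wikipedia.org/w/api.php"

def revision_activity(title, start, end):
    """Count revisions and distinct editors to an article between two ISO timestamps."""
    revisions, editors = 0, set()
    params = {
        "action": "query", "format": "json", "prop": "revisions",
        "titles": title, "rvprop": "timestamp|user", "rvlimit": "max",
        "rvstart": start, "rvend": end, "rvdir": "newer",
    }
    while True:
        data = requests.get(API, params=params).json()
        page = next(iter(data["query"]["pages"].values()))
        for rev in page.get("revisions", []):
            revisions += 1
            editors.add(rev.get("user"))
        if "continue" not in data:
            break
        params.update(data["continue"])  # follow the API's continuation cursor
    return revisions, len(editors)

# e.g., activity on a nominee's article between the nominations and March 1
print(revision_activity("12 Years a Slave (film)",
                        "2014-01-01T00:00:00Z", "2014-03-01T00:00:00Z"))
```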

I’ve only run the analysis on the nominees for Best Picture, Best Director, Best Actor, Best Actress, and Best Supporting Actress (the Best Supporting Actor nominees were throwing some unusual errors, but I’ll update). The results below show that Wikipedia editing activity forecast the wins in Best Actor, Best Actress, and Best Supporting Actress, but did not do so for Best Picture or Best Director. This is certainly better than chance and I look forward to expanding the analysis to other categories and prior years.

Best Picture

“The Wolf of Wall Street” showed remarkably strong growth in the number of new edits and editors after January 1. However, “12 Years a Slave”, which ranked 5th by the end, actually won the award. A big miss.

New editors and edits for the Best Picture nominees

Best Director

The Wikipedia activity here showed strong growth for Steve McQueen (“12 Years a Slave”), but Alfonso Cuarón (“Gravity”) took the award despite coming in 4th in both metrics here. Another big miss.

New editors and edits for the Best Director nominees

Best Actor

The counts of new edits and new editors are highly correlated because new editors necessarily show up as new edits. However, we see an interesting and very close race here between Chiwetel Ejiofor (“12 Years a Slave”) and Matthew McConaughey (“Dallas Buyers Club”) for edits, but McConaughey had a stronger lead among new editors. This suggests established editors were responsible for pushing Ejiofor higher (and he was leading early on), but McConaughey took the lead and ultimately won. Wikipedia won this one.

New editors and edits for the Best Actor nominees

Best Actress

Poor Judi Dench: she appeared to not even be in the running in either metric. Wikipedia activity forecast a Cate Blanchett (“Blue Jasmine”) win, although this appeared to be close among several candidates if the construct is to be believed. Wikipedia won this one.

New editors and edits for the Best Actress nominees

Best Supporting Actress

Lupita Nyong’o (“12 Years a Slave”) accumulated a huge lead over the other nominees in Wikipedia activity and won the award.

New edits and editors for the Best Supporting Actress nominees

Other Categories and Future Work

I wasn’t able to run the analysis for Supporting Actor because the Wikipedia API seemed to poop out on Bradley Cooper queries, but it may be a deeper bug in my code too. This analysis can certainly be extended to the “non-marquee” nominee categories as well, but I didn’t feel like typing that much.

I will extend and expand this analysis for both other categories as well as prior years’ awards to see if there are any discernible patterns for forecasting. There may be considerable variance between categories in the reliability of this simple heuristic — Director and Picture may be more politicized than the rest, if I wanted to defend my simplistic model. This type of approach might also be used to compare different awards shows to see if some diverge more than others from aggregate Wikipedia preferences. The hypothesis here is a simple descriptive heuristic and more extensive statistical models that incorporate features such as revenue, critics’ scores, and nominees’ award histories (“momentum”) may produce more reliable results as well.

Conclusion

Wikipedia editing activity over the two months leading up to the 2014 Academy Awards accurately forecast the winners of the Best Actor, Best Actress, and Best Supporting Actress categories but significantly missed the winners of the Best Picture and Best Director categories. These results suggest that differences in editing behavior–in some cases–may reflect collective attention to and aggregate preferences for some nominees over others. Because Wikipedia is a major clearinghouse for individuals who both seek and shape popular perceptions,  these behavioral traces may have significant implications for forecasting other types of popular preference aggregation such as elections.

Introduction to Zotero

I will be giving a talk on February 25 for members of the Northeastern University Lab for Texts, Maps, and Networks on how to use the excellent and open-source reference management tool, Zotero. This is a great tool for managing all the citations you need to keep track of for projects, organizing all those papers you should be reading, syncing changes to the cloud, and creating resources for sharing references with collaborators. And it’s much less evil than EndNote or Mendeley :)

Details on the talk time and location are here, but if you’re too busy to follow the link, the talk will be from 12:00 – 1:30pm in 400B Holmes Hall (map). I’m not sure if there will be a video recording, but if there is, I’ll post the link when it becomes available.

Slides I’ll be using are available here based on an existing tutorial and the excellent documentation.

Harvard CRCS talk

Today, I gave a talk in the lunch seminar series at Harvard’s Center for Research on Computation and Society (CRCS) about my dissertation-related work on socio-technical trajectories. Slides are here and the talk was recorded — I’ll post a link to that as soon as it’s available since I need to take some notes on some excellent ideas audience members had about how to extend this research!

EDIT: Link to the video of the presentation.