In February 2014 Twitter announced a “Data Grants” pilot project to “give a handful of research institutions access to our public and historical data” because “it has been challenging for researchers outside the company who are tackling big questions to collaborate with us.” By April 2014, Twitter announced 6 winners from more than 1,300 applications. None of the selected projects addressed the accumulating research about how Twitter-the-platform enabled abuse, spam, and manipulation. For its efforts in 2014, Twitter-the-organization earned a lot of glowing press coverage about opening its data to researchers. I was more guarded in my assessment of Twitter’s “pivot” to more open data sharing with researchers and offered some suggestions for how Twitter could do better at sharing its data with the research community. It’s been four years since I wrote about those “data-driven dreams”.

There were no subsequent data grants awarded in 2014. Google Scholar records 18 scholarly contributions referencing “Twitter Data Grant” with a total of 37 citations. The Twitter Data Grant program was not renewed in 2015. Or 2016. Or 2017. In the 49 months since Twitter’s commitment to “collaborating with academia and the external research community”, there has been only this single 2014 project. There is a Twitter account @datagrants with a single tweet announcing the 2014 program and a profile link to a non-existent research.twitter.com. The official @research account went 27 months between March 2015 and June 2017 without a single tweet. I could think of at least one major event unfolding in this time window that should have warranted some research attention.

In the 49 months between the announcement of the Twitter Data Grants project and today, I have been an author, attendee, senior program committee member, and co-organizer at different alphabet soup conferences around social computing, social data mining, and computational social science like ACM CSCW, AAAI ICWSM, ACM CHI, and IC2S2. I’ve co-authored a research paper on misinformation and fact-checking on Twitter, a report on how Twitter handles harassment, and peer-reviewed zillions of submissions about the social ills of abuse, spam, and manipulation enabled by Twitter. I have one of the highest Kardashian indices among social computing researchers, the implication of such a dubious distinction being that there is a vibrant community out there doing more and better research around the uses and abuses of Twitter than I have.

But Twitter’s absence from submitting, attending, or funding these research communities examining the intersection of social behavior and technological affordances has been glaringly conspicuous. Twitter has been a specter in these communities since, well, forever: its users’ data regularly makes apparitions in research presentations, but there has never been anyone from Twitter with whom to shake hands, exchange ideas, or apply for a job. Companies like Facebook, Google, Microsoft, IBM, and Yahoo are very far from perfect in their strategy, products, and practices, but each of these organizations employs researchers engaged with these social computing communities. The theories, methods, and values that bind these fractious interdisciplinary research communities together are represented by staff employed across the data science, user experience, research, safety, product, and engineering teams. I can interface with these companies’ researchers because these companies have repeatedly committed to funding these conferences, participating in the program committees, and hiring researchers from these communities. But I can’t name a single product, UX, or data science member at Twitter who has authored or reviewed papers for, or attended, the major research conferences I named above. The company has no substantive relationships with the research communities core to its mission: “give everyone the power to create and share ideas and information instantly, without barriers.”

Much to my surprise Thursday morning, almost exactly 49 months after the previous “Twitter Data Grants” effort, Twitter CEO @jack announced a Request For Proposals for “great ideas and implementations” to deal with “abuse, harassment, troll armies, manipulation through bots and human-coordination, misinformation campaigns, and increasingly divisive echo chambers.” That’s an impressive list of self-admitted turpitude-as-a-service, none of which would be new, surprising, or insurmountable if Twitter’s engineers, product managers, and data scientists had substantive relationships with the research communities that have been publishing warnings since 2009 about the behavior its platform enables. This announcement does come as a surprise to many of my colleagues and collaborators who have been doing what amounts to Twitter’s product, experience, and safety research for it for years, yet Twitter’s new RFP references only work done by former Twitter employees at Cortico.

This new “Twitter Health Metrics” proposal has many of the same trappings as the “Twitter Data Grants” from 49 months ago. “We are looking to partner with outside experts to help us identify how we measure the health of Twitter.” Elevated data access would certainly accelerate some research, but for many of the problems Twitter enumerates, the research community has already converged on potential solutions; Twitter, however, has been inconsistent in making its engineers, product managers, and data scientists available to the research community to implement and evaluate these solutions. Why not hire researchers to synthesize the firehose of relevant research into a report? Why not impanel an advisory board of outside experts? Why not organize a listening tour for researchers to give Twitter feedback? What research enumerating the problems Twitter-the-platform faces has Twitter-the-organization consumed? What specific resources in terms of data, money, engineering, etc. will be allocated to this research effort?

There are potential solutions to Twitter-the-platform’s turpitude-as-a-service, but they would require fundamental cultural shifts in how Twitter-the-organization engages with the research community to engineer its products. This new Twitter Health Metrics initiative offers little in the way of credible organizational commitments beyond what appears to be an abandoned four-year-old model for granting a handful of researchers “public data access.” What would be examples of more credible commitments to addressing issues around the health of Twitter conversations and their consequences?

First, Twitter could commit to engaging and investing in the research communities that have been analyzing issues of abuse, harassment, trolling, manipulation, misinformation, and echo chambers for years already. There are an increasing number of “unicorn” researchers who combine social science expertise with big data skills at conferences like ICWSM, CSCW, CHI, IC2S2, WebSci, SocInfo, ICA, SPSP, APSA, and AoIR. Google, Facebook, and Microsoft all have a record of co-sponsoring conferences, incentivizing employees to serve on program committees and workshops, creating internship and fellowship programs to bring in new energy and ideas from graduate students, and organizing research retreats and visiting researcher programs to engage academics, journalists, and artists. Twitter’s RFP presupposes a research design around “capturing, measuring, evaluating, and reporting” on health metrics. Rather than inviting researchers to slot into its assumptions about how to improve conversational health, Twitter-the-organization could begin by sending product, engineering, and analytics staff to attend upcoming research workshops like “Understanding Bad Actors Online” at CHI 2018 in April, the MisInfoWeb track at The Web Conference in April, or the ICWSM 2018 conference at Stanford in June to learn about the approaches that already exist.

Second, Twitter could impanel a committee of interdisciplinary experts (a few names that come to mind include Amy Bruckman, Jordan Boyd-Graber, Ceren Budak, Meeyoung Cha, Soraya Chemaly, Munmun de Choudhury, Meredith Clark, Kate Crawford, Cristian Danescu-Niculescu-Mizil, Nick Diakopoulos, Jana Diesner, Jill Dimond, Emilio Ferrara, Deen Freelon, David Garcia, Eric Gilbert, Bernie Hogan, Phil Howard, David Jurgens, Karrie Karahalios, Daniel Kreiss, David Lazer, Cliff Lampe, Kristina Lerman, Andrew Losowsky, Drew Margolin, Nathan Matias, Shannon McGregor, Yelena Mejova, Filippo Menczer, Takis Metaxas, Tanushree Mitra, Mor Naaman, Safiya Noble, Jurgen Pfeffer, Sarah Roberts, Daniel Romero, Derek Ruths, Emma Spiro, Kate Starbird, Markus Strohmaier, Svitlana Volkova, Claudia Wagner, Ingmar Weber, Brooke Foucault Welles, and Christo Wilson) to evaluate the Twitter Health Metrics proposals for merit and impact, a step the previous Twitter Data Grants project missed. Empirical research around abuse, harassment, trolls, manipulation, misinformation, and echo chambers on Twitter-the-platform has been published since 2009, so the bar for novelty is likely higher than Twitter-the-organization expects given its absence from these research conversations. An announcement that it had assembled an outside panel of researchers to review the proposals would be a more credible signal that Twitter-the-organization was not going to prioritize the comfortable dead ends of brittle machine learning classifiers or feature engineering every problem into easy-to-hammer nails if better ideas came in from the research community.

Third, Twitter-the-organization could signal it is committed to solving this problem by creating teams staffed by socio-technical and social researchers backed by real product, engineering, and analytics support. In addition to the researchers named above who have been doing this research, there’s a glut of outstanding social computing scholars already or soon-to-be on the job market with specific expertise in examining online misbehavior and online community governance like Nazanin Andalibi, Lindsay Blackwell, Sophie Chou, Stevie Chancellor, Motahhare Eslami, Stuart Geiger, Oliver Haimson, Aniko Hannak, Amy Johnson, Jenny Korn, Katherine Lo, Alexandra Olteanu, Niloufar Salehi, Elizabeth Whittaker, or Amy Zhang. If Twitter isn’t at social computing conferences, it is definitely not at social science conferences like ICA, ASA, SPSP, APSA, AAPOR, AAG, or AoIR. You wouldn’t ask a psychologist to build a database for streaming data, so why are you asking software engineers to build for deindividuation, structuration, or motivated reasoning? Although there are social scientists tucked away at Twitter, from where I sit it has the poorest record among the major social platforms of hiring credentialed social scientists to guide decisions about affordances, incentives, governance, bias, evaluation, ethics, and harm. Whatever gaps in technical skills exist, it’s an open secret in Silicon Valley that it takes approximately 10 weeks in a coding bootcamp to train novices on the fundamentals: imagine what could be possible with subject-matter experts. An announcement that Twitter-the-organization was (1) creating a new social research team dedicated to these issues, (2) staffing it from social and socio-technical research communities, and (3) recruiting at these conferences would be a more credible signal that Twitter-the-organization was committed to improving its conversational health by bringing in subject-matter experts.

I want to see Twitter-the-organization and Twitter-the-platform succeed in the face of “abuse, harassment, troll armies, manipulation through bots and human-coordination, misinformation campaigns, and increasingly divisive echo chambers.” Twitter-the-platform plays a crucial role in our information society for connecting people around the world into something like a public sphere. The “Twitter Health Metrics” RFP is a promising signal that the organization is opening up to substantive and sustained engagement with the research community. But members of the research community like me are justified in feeling that this request for proposals does not go far enough given the years of effort scientists have already poured into doing Twitter’s product, experience, and safety research for it with negligible support or acknowledgment. It is very late in the game for Twitter to acknowledge the essential role social research plays in sustaining social platforms: this mea-culpa-cum-RFP does not even concede this uncontroversial point. Twitter-the-organization can and should do more to demonstrate credibly that it is ready to listen to and act on the “conversational health” research the social and socio-technical communities have been doing all along.

EDIT (6 March 2018): I’ve updated the post to include additional names.

EDIT (7 March 2018): Twitter is hiring a Director of Social Science Research.