My research is centered on the online information ecosystem. As a research engineer, I collect and maintain large-scale collections of social media and digital trace data for social science research. I study cross-platform media manipulation, political bias in algorithmic systems, and the effect of platform governance and moderation policies on the spread of political content. My work spans Twitter disinformation campaigns by the Internet Research Agency, the YouTube recommendation algorithm, and how disinformation travels across platforms.



Election Fraud, YouTube, and Public Perception of the Legitimacy of President Biden

Journal of Online Trust and Safety

With James Bisbee, Angela Lai, Richard Bonneau, Jonathan Nagler, and Joshua A. Tucker

August 31, 2022

Skepticism about the outcome of the 2020 presidential election in the United States led to a historic attack on the Capitol on January 6th, 2021 and represents one of the greatest challenges to America's democratic institutions in over a century. Narratives of fraud and conspiracy theories proliferated over the fall of 2020, finding fertile ground across online social networks, although little is known about the extent and drivers of this spread. In this article, we show that users who were more skeptical of the election's legitimacy were more likely to be recommended content that featured narratives about the legitimacy of the election. Our findings underscore the tension between an "effective" recommendation system that provides users with the content they want, and a dangerous mechanism by which misinformation, disinformation, and conspiracies can find their way to those most likely to believe them.

Full Article | NBC News | The Verge | Tech Policy Press | Ars Technica | Poynter | Replication Materials

Twitter flagged Donald Trump’s tweets with election misinformation: They continued to spread both on and off the platform.

Harvard Kennedy School Misinformation Review

With Zeve Sanderson, Richard Bonneau, Jonathan Nagler, and Joshua A. Tucker

August 24, 2021

We analyze the spread of Donald Trump’s tweets that were flagged by Twitter using two intervention strategies—attaching a warning label and blocking engagement with the tweet entirely. We find that while blocking engagement on certain tweets limited their diffusion, messages we examined with warning labels spread further on Twitter than those without labels. Additionally, the messages that had been blocked on Twitter remained popular on Facebook, Instagram, and Reddit, being posted more often and garnering more visibility than messages that had either been labeled by Twitter or received no intervention at all. Taken together, our results emphasize the importance of considering content moderation at the ecosystem level.

Full Article | USA Today | Tech Policy Press | Popular Science | CNET | Replication Materials

SARS-CoV-2 titers in wastewater foreshadow dynamics and clinical presentation of new COVID-19 cases

Science of the Total Environment

With Fuqing Wu, Amy Xiao, Jianbo Zhang, Katya Moniz, Noriko Endo, Frederica Armas, Richard Bonneau, Mary Bushman, Peter R. Chai, Claire Duvallet, Timothy B. Erickson, Katelyn Foppe, Newsha Ghaeli, Xiaoqiong Gu, William P. Hanage, Katherine H. Huang, Wei Lin Lee, Mariana Matus, Kyle A. MacElroy, Jonathan Nagler, Steven T. Rhode, Mauricio Santillana, Joshua A. Tucker, Stefan Wuertz, Shijie Zhao, Janelle Thompson, and Eric J. Alm

Preprint: June 23, 2020

Published: January 20, 2022

Current estimates of COVID-19 prevalence are largely based on symptomatic, clinically diagnosed cases. The existence of a large number of undiagnosed infections hampers population-wide investigation of viral circulation. Here, we use longitudinal wastewater analysis to track SARS-CoV-2 dynamics in wastewater at a major urban wastewater treatment facility in Massachusetts, between early January and May 2020. SARS-CoV-2 was first detected in wastewater on March 3. Viral titers in wastewater increased exponentially from mid-March to mid-April, after which they began to decline. Viral titers in wastewater correlated with clinically diagnosed new COVID-19 cases, with the trends appearing 4-10 days earlier in wastewater than in clinical data. We inferred viral shedding dynamics by modeling wastewater viral titers as a convolution of back-dated new clinical cases with the viral shedding function of an individual. The inferred viral shedding function showed an early peak, likely before symptom onset and clinical diagnosis, consistent with emerging clinical and experimental evidence. Finally, we found that wastewater viral titers at the neighborhood level correlate better with demographic variables than with population size. This work suggests that longitudinal wastewater analysis can be used to identify trends in disease transmission in advance of clinical case reporting, and may shed light on infection characteristics that are difficult to capture in clinical investigations, such as early viral shedding dynamics.
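The convolution model described in the abstract can be illustrated with a short sketch. This is not the paper's actual code or data; the case counts and shedding function below are made up for illustration only.

```python
import numpy as np

# Hypothetical daily new clinical cases over ten days
new_cases = np.array([0, 1, 2, 5, 10, 18, 25, 30, 28, 22], dtype=float)

# Hypothetical per-individual shedding function: the viral load an
# infected person contributes on day k after infection, peaking early
# (before symptom onset), as the paper's inferred function suggests
shedding = np.array([0.0, 0.6, 1.0, 0.8, 0.5, 0.3, 0.1])

# Expected wastewater titer on day t is a sum over infection cohorts:
#   titer[t] = sum_k shedding[k] * new_cases[t - k]
# which is exactly a discrete convolution, truncated to the observed window
titer = np.convolve(new_cases, shedding)[: len(new_cases)]
```

Because shedding peaks in the first few days after infection, the modeled titer curve rises before the clinical case curve does, which is the mechanism behind wastewater signals leading clinical reporting by several days.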

Full Article | The New York Times

Cross-Platform State Propaganda: Russian Trolls on Twitter and YouTube during the 2016 U.S. Presidential Election

The International Journal of Press/Politics

With Yevgeniy Golovchenko, Cody Buntain, Gregory Eady, and Joshua A. Tucker

April 19, 2020

This paper investigates online propaganda strategies of the Internet Research Agency (IRA)—Russian “trolls”—during the 2016 U.S. presidential election. We assess claims that the IRA sought either to (1) support Donald Trump or (2) sow discord among the U.S. public by analyzing hyperlinks contained in 108,781 IRA tweets. Our results show that although IRA accounts promoted links to both sides of the ideological spectrum, “conservative” trolls were more active than “liberal” ones. The IRA also shared content across social media platforms, particularly YouTube—the second-most linked destination among IRA tweets. Although overall news content shared by trolls leaned moderate to conservative, we find troll accounts on both sides of the ideological spectrum, and these accounts maintained a consistent political alignment. Links to YouTube videos were decidedly conservative, however. While mixed, this evidence is consistent with the IRA supporting the Republican campaign, but the IRA’s strategy was multifaceted, with an ideological division of labor among accounts. We contextualize these results as consistent with a pre-propaganda strategy. This work demonstrates the need to view political communication in the context of the broader media ecology, as governments exploit the interconnected information ecosystem to pursue covert propaganda strategies.

Full Article | Techstream | Medium

Public Writing

Republicans are increasingly sharing misinformation, research finds

The Washington Post | January 26, 2022

With Maggie Macdonald

Gender-based online violence spikes after prominent media attacks

Brookings TechStream | January 26, 2022

With Zeve Sanderson and Maria Alejandra Silva Ortega

Twitter banned Marjorie Taylor Greene. That may not hurt her much.

The Washington Post | January 14, 2022

With Maggie Macdonald

Trendless Fluctuation? How Twitter’s Ethiopia Interventions May (Not) Have Worked

Tech Policy Press | January 11, 2022

With Tessa Knight

Methods Supplement

Cross-posted in Slate and DFRLab Medium

Twitter amplifies conservative politicians. Is it because users mock them?

The Washington Post | October 27, 2021

With Jonathan Nagler and Joshua A. Tucker

Methods Supplement | Dataset

Additional Coverage: Rolling Stone | Salon

Twitter put warning labels on hundreds of thousands of tweets. Our research examined which worked best.

The Washington Post | December 9, 2020

With Zeve Sanderson, Jonathan Nagler, Richard Bonneau, and Joshua A. Tucker

Methods Supplement | Dataset

How Trump impacts harmful Twitter speech: A case study in three tweets

Brookings TechStream | October 22, 2020

With Zeve Sanderson

Biden and Sanders are debating tonight. What got Twitter users buzzing during past Democratic debates?

The Washington Post | March 15, 2020

With Zhanna Terechshenko, Niklas Loynes, Tom Paskhalis, and Jonathan Nagler

Working Papers

Echo Chambers, Rabbit Holes, and Algorithmic Bias: How YouTube Recommends Content to Real Users

With James Bisbee, Angela Lai, Richard Bonneau, Jonathan Nagler, and Joshua A. Tucker

Estimating the Ideology of Political YouTube Videos

With James Bisbee, Angela Lai, Joshua A. Tucker, Jonathan Nagler, and Richard Bonneau

Network embedding methods for large networks in political science

With Zhanna Terechshenko, Rachel Connolly, Angela Lai, Tianxin Ji, Jonathan Nagler, Joshua A. Tucker, and Richard Bonneau

To Moderate, Or Not to Moderate: Strategic Domain Sharing by Congressional Campaigns

With Maggie Macdonald, Joshua A. Tucker, Richard Bonneau, and Jonathan Nagler

Checking the Checkers: How Ideology and Source Credibility Are Related to Third-Party Fact-Checking on Facebook

With Zhanna Terechshenko, Kevin Aslett, Tom Paskhalis, Cody Buntain, Zeve Sanderson, Joshua A. Tucker, Richard Bonneau, and Jonathan Nagler

Using Language Embeddings with Synthetic Minority Oversampling Technique

With Zhanna Terechshenko, Joshua A. Tucker, Richard Bonneau, and Jonathan Nagler


Open Source

Check out (or contribute to!) open source projects for collecting, analyzing, and modelling information about the online environment.




a dataset of public interest exception tweets by politicians during the 2020 election period

This dataset contains the public interest exception labels for tweets by various politicians and political organizations during the 2020 election period. Tweets were labelled for whether they contained a "soft intervention," a "hard intervention," or "no intervention." For tweets that received an intervention, we report the intervention type, text, and URL.
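A sketch of how such a dataset might be tabulated by intervention type. The field names and records below are hypothetical, not the dataset's actual schema:

```python
from collections import Counter

# Hypothetical records mimicking the dataset's labels; the real column
# names and values may differ
records = [
    {"tweet_id": "1", "intervention": "soft intervention"},
    {"tweet_id": "2", "intervention": "hard intervention"},
    {"tweet_id": "3", "intervention": "no intervention"},
    {"tweet_id": "4", "intervention": "soft intervention"},
]

# Tally tweets by moderation outcome
counts = Counter(r["intervention"] for r in records)
```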

GitHub | Analysis | Methods Supplement


YoutubeDataApi: a wrapper for the YouTube Data API

With Leon Yin

As the social media platform most widely used by American adults, YouTube is vital to understanding the online media ecosystem. This software package makes accessing YouTube data easier and faster with just a few lines of code.
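For context, here is a minimal sketch of the underlying REST call that a wrapper like this simplifies. This is not the package's own interface; it builds a request against the YouTube Data API v3 `search` endpoint, with the HTTP fetch injectable so the logic can be exercised without a network connection (an API key from the Google Cloud console is assumed for real use).

```python
import json
import urllib.parse
import urllib.request

SEARCH_URL = "https://www.googleapis.com/youtube/v3/search"

def build_search_url(query, api_key, max_results=5):
    """Construct a YouTube Data API v3 search request URL."""
    params = {"part": "snippet", "q": query,
              "key": api_key, "maxResults": max_results}
    return SEARCH_URL + "?" + urllib.parse.urlencode(params)

def search_videos(query, api_key, fetch=None):
    """Return parsed search results; `fetch` is injectable for testing."""
    if fetch is None:
        def fetch(url):
            with urllib.request.urlopen(url) as resp:
                return resp.read().decode("utf-8")
    return json.loads(fetch(build_search_url(query, api_key)))
```

A wrapper package hides this URL construction, pagination, and response parsing behind a few method calls.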

PyPI | GitHub | Jupyter Notebook



SMaBERTa: a wrapper for the Hugging Face transformers library

By Vishakh Padmakumar and Zhanna Terechshenko

SMaBERTa is a Python wrapper for interacting with Hugging Face transformer models. It makes it easier to train, evaluate, predict with, and fine-tune cutting-edge transformer-based language models.

PyPI | GitHub


urlExpander: a URL expansion toolkit

By Leon Yin

urlExpander is intended for social media researchers who want to analyze links. Link-shortening services obfuscate the destination of shortened URLs while collecting in-depth user engagement data, which poses a challenge for link-level analysis. urlExpander addresses this challenge in a scalable and robust manner: it provides utility functions to convert tweets into link datasets, filter for known link-shortening services, resolve shortened links, and parse the title and meta description from webpages. It also offers multithreaded URL expansion, which overcomes the bottleneck of mass link expansion through parallelization, minimizing HTTP requests, caching results, and chunking the input into smaller pieces.
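The core technique, multithreaded redirect-following, can be sketched generically. This is not urlExpander's own implementation; the resolver is injectable so it can be swapped out or stubbed in tests without making network requests.

```python
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def resolve(url):
    """Follow redirects and return the final destination URL."""
    with urllib.request.urlopen(url) as resp:
        return resp.geturl()

def expand_urls(urls, resolver=resolve, max_workers=8):
    """Expand many short links in parallel, preserving input order.

    Threads suit this workload because each expansion is dominated by
    waiting on network I/O, not CPU work.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(resolver, urls))
```

A production tool layers caching, retry logic, and known-shortener filtering on top of this basic parallel-resolution loop.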

PyPI | GitHub


Text Classification Using a Transformer-Based Model

With Zhanna Terechshenko and Vishakh Padmakumar | December 8, 2020

How to use the YouTube Data API: YoutubeDataApi

October 13, 2021

Using SMaBERTa for text classification

October 13, 2021