About Me
Emma Lurie
As a public interest technologist and interdisciplinary researcher, I focus on the intersection of platforms, democracy, and law. My research and advocacy aim to shape a more equitable and accountable technology ecosystem through evidence-based policymaking and the development of responsible technology. I am currently pursuing a JD at Stanford Law School and a PhD at the UC Berkeley School of Information, and I expect to complete both degrees in Summer 2025.
Previously, I've had the opportunity to work with some amazing teams at the Knight First Amendment Institute, CISA, Stanford Internet Observatory, ACLU-NorCal, Plaintext Group, Wellesley College Cred Lab, MIT Election Data and Science Lab, and the U.S. Census Bureau. I have a BA in Computer Science and Chinese Language & Culture from Wellesley College.
Research
My dissertation research examines:
- Platform governance of election information: Investigating how search engine algorithms and content moderation policies affect the visibility and credibility of election-related information, and how those choices shape voter trust and democratic processes.
- Judicial interpretations of technology-mediated harms: Analyzing how courts frame and interpret doxxing cases, exploring the legal system's response to online privacy violations and harassment.
- Evolution of Section 230 jurisprudence: Studying the changing interpretations of Section 230 of the Communications Decency Act through computational legal analysis of court decisions, focusing on metaphors used to conceptualize the Internet and online platforms.
Other Research:
- Algorithmic Auditing: Developing and applying methodologies to assess the fairness, accountability, and transparency of algorithmic systems. This work involves:
  - Conducting sociotechnical audits of search engines and their impact on information access and democratic processes.
  - Exploring the challenges and opportunities in auditing AI systems in various domains, particularly in criminal justice contexts.
- AI Ethics: Investigating the ethical implications of AI systems and their deployment in society. Key focus areas include:
  - Examining the intersection of AI, diversity, and inclusion, with a particular emphasis on reconfiguring these concepts for AI ethics frameworks.
  - Analyzing the ethical considerations in AI-driven content moderation and fact-checking systems.
- Computer Science Research Methods and Best Practices: Exploring the ethical dimensions of computer science research practices. This research encompasses:
  - Investigating the ethical implications of crowdsourced research methodologies, particularly in the context of AI and machine learning studies.
  - Examining the responsibilities of researchers in designing and conducting studies that use platform data.
Publications
Peer Reviewed Papers
- Search quality complaints and imaginary repair: Control in articulations of Google Search.
  Daniel Griffin* and Emma Lurie*.
  New Media & Society 2022.
- Reconfiguring Diversity and Inclusion for AI Ethics.
  Nichole Chi, Emma Lurie, and Deirdre K. Mulligan.
  AIES 2021.
- 'Highly Partisan' and 'Blatantly Wrong': Analyzing News Publishers' Critiques of Google's Reviewed Claims.
  Emma Lurie and Eni Mustafaraj.
  Truth and Trust Conference 2020.
- The Case for Voter-Centered Audits of Search Engines During Political Elections.
  Eni Mustafaraj, Emma Lurie, and Claire Devine.
  ACM FAccT 2020.
- Opening Up the Black Box: Auditing Google's Top Stories Algorithm.
  Emma Lurie and Eni Mustafaraj.
  AAAI FLAIRS 2019.
- Investigating the Effects of Google's Search Engine Result Page in Evaluating the Credibility of Online News Sources.
  Emma Lurie and Eni Mustafaraj.
  WebSci 2018.
Preprints
- Searching for Representation: A Sociotechnical Audit of Googling for Members of Congress.
  Emma Lurie and Deirdre K. Mulligan.
- Google Says So(S): An Examination of the Entanglement of Search Engines and Information on Ballot Propositions.
  Emma Lurie.
Non-Archival Publications
- Who needs imagination? Exploring legal professionals' lack of curiosity about e-discovery tools.
  Emma Lurie and Deirdre K. Mulligan.
  Designing Technological Systems with the Algorithmic Imaginations of Those Who Labor Workshop at CHI 2021.
- Crowdworkers Are Not Judges: Rethinking Crowdsourced Vignette Studies as a Risk Assessment Evaluation Technique.
  Emma Lurie and Deirdre K. Mulligan.
  Fair and Responsible AI Workshop at CHI 2020.
- Investigating Causal Effects of Instructions in Crowdsourced Claim Matching.
  Emma Lurie, Lucy Li, Sofia Dewar, Masha Belyi, Daniel Rincón, John Baldwin, and Rajvardhan Oak.
  Computation + Journalism 2020.
- Considering Contestability in Automated Fact-Checking Systems.
  Emma Lurie.
  Contestability Workshop at CSCW 2019.
- The Challenges of Algorithmically Assigning Fact-checks: A Sociotechnical Examination of Google's Reviewed Claims.
  Emma Lurie.
  Undergraduate Thesis, 2019.
- How the Interplay of Google and Wikipedia Affects Perceptions of Online News Sources.
  Annabel Rothschild, Emma Lurie, and Eni Mustafaraj.
  Computation + Journalism 2019.
Other Writing
- JD/PhD Advice for Prospective Students.
  Emma Lurie.
  Medium. September 2024.
- Comparing Platform Research API Requirements.
  Emma Lurie.
  Tech Policy Press. March 2023.
- TikTok just announced the data it's willing to share. What's missing?
  Emma Lurie.
  Stanford Internet Observatory Blog. February 2023.
- Tips for Working with an Undergraduate Research Advisor.
  Emma Lurie.
  Medium. June 2021.
- What is "good enough" for automated fact-checking?
  Emma Lurie.
  Towards Data Science. August 2019.
- Googling for the Kansas Primary.
  Emma Lurie.
  Medium. August 2018.
- Why Google Isn't Always Right.
  Emma Lurie.
  The Spoke. October 2017.
Contact
Email: elurie [AT] stanford.edu
Twitter: @emma_lurie