AI systems are increasingly used in sensitive decision pipelines. How should we ensure that these systems are designed and evaluated so that, when deployed in complex and dynamic social settings, they appropriately promote our varied values? As part of this work, I have characterized the nature and sources of harmful biases in AI-informed decisions, highlighted the fundamental epistemic and ethical shortcomings of existing approaches to discovering and managing these biases, and developed alternative ethical and computational methodologies that better address our concerns. A key implication of my work is that we cannot properly understand and anticipate potential failure modes of AI-informed decisions through a myopic and static focus on algorithmic outputs. Instead, we must adopt a sociotechnical lens grounded in an understanding of the complex organizational and social contexts in which algorithms are developed and embedded. My current research focuses on algorithmic fairness in dynamic and interactive decision settings, human-AI teaming, and diversity in algorithm design.
- Diversity in sociotechnical machine learning systems (with Maria De-Arteaga). 2022. Big Data & Society.
- Homophily and incentive effects in use of algorithms (with Riccardo Fogliato, Shantanu Gupta, Zachary Chase Lipton & David Danks). 2022. CogSci 2022.
- Justice in misinformation detection systems: an analysis of algorithms, stakeholders, and potential harms (with Terrence Neumann and Maria De-Arteaga). Forthcoming. FAccT 2022.
- Algorithmic fairness & the dynamics of justice (with David Danks and Zack Lipton). 2021. Canadian Journal of Philosophy as part of the special issue on “The Political Philosophy of Data and AI”.
- Algorithmic bias: senses, sources, solutions (with David Danks). 2021. Philosophy Compass.
- Fair machine learning under partial compliance (with Jessica Dai & Zack Lipton). 2021. Proceedings of AIES 2021.
- Algorithmic fairness from a non-ideal perspective (with Zack Lipton). 2020. Proceedings of AIES 2020.