Values & AI-informed decision-making

AI systems are increasingly used in sensitive decision pipelines. How can we ensure that these systems, when deployed in complex and dynamic social settings, are designed and evaluated so that they appropriately promote our varied values? As part of this work, I have characterized the nature and sources of harmful biases in AI-informed decisions, highlighted fundamental epistemic and ethical shortcomings of existing approaches to discovering and managing these biases, and developed alternative ethical and computational methodologies that better address these concerns. A key implication of my work is that we cannot properly understand and anticipate the potential failure modes of AI-informed decisions through a myopic, static focus on algorithmic outputs. We must instead adopt a sociotechnical lens grounded in an understanding of the complex organizational and social contexts in which algorithms are developed and embedded. My current research in this area focuses on algorithmic fairness in dynamic and interactive decision settings, human-AI teaming, and diversity in algorithm design.

Sample publications