The following schedule is tentative and subject to change.

Date Topic Readings Assignments & Project
Jan 8 Introduction & Course overview
Jan 15 No class (Martin Luther King Jr. Day)
Jan 22 Fundamentals of Human Cognition and Artificial Intelligence

Lecture

Optional

Evans, Heuristic and Analytic Processes in Reasoning. British Journal of Psychology 1984

Tversky and Kahneman, Judgment under Uncertainty: Heuristics and Biases. Science 1974

Chapter 3: Mental Models and User Models. Handbook of Human-Computer Interaction

Russell and Norvig. Artificial Intelligence: A Modern Approach, 4th Edition. 2020

Jan 29 Human-Centered Design

Lecture

Optional

Rogers, Sharp, and Preece. Interaction Design: Beyond Human-Computer Interaction. 2015

Norman. The Design of Everyday Things: Revised and Expanded Edition. 2013

Martin. Doing Psychology Experiments. 2007

Assignment: Released
Feb 5 Design Principles and Guidelines for Human-AI Interaction

Lecture

Required

Amershi et al. Guidelines for Human-AI Interaction. CHI'19

Optional

Horvitz. Principles of Mixed-Initiative User Interfaces. CHI'99

Shneiderman. Human-Centered AI. Oxford University Press, 2022

Russell. Human Compatible: Artificial Intelligence and the Problem of Control. Penguin Random House, 2019

Feb 12 Explainable AI: Definitions, Methods, and Human-Centered Evaluations

Leading presenters: Tania, Hasan, Fu-Chia

Required

Ribeiro et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier. KDD'16

Wang and Yin. Effects of Explanations in AI-Assisted Decision Making: Principles and Comparisons. ACM Transactions on Interactive Intelligent Systems, 2022

Cheng et al. Explaining Decision-Making Algorithms through UI: Strategies to Help Non-Expert Stakeholders. CHI'19

Optional

Lakkaraju et al. Interpretable Decision Sets: A Joint Framework for Description and Prediction. KDD'16 (Yunze)

Miller. Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 2019 (Akshay)

Guidotti et al. A Survey of Methods for Explaining Black Box Models. ACM Computing Surveys, August 2018 (Tongyan)

Molnar. Interpretable Machine Learning: A Guide for Making Black Box Models Explainable. 2020

Liao et al. Questioning the AI: Informing Design Practices for Explainable AI User Experiences. CHI'20 (Mohan)

Yang et al. How Do Visual Explanations Foster End Users' Appropriate Trust in Machine Learning? IUI'20 (Dominic)

Bucinca et al. Proxy Tasks and Subjective Measures Can Be Misleading in Evaluating Explainable AI Systems. IUI'20 (Pangpang)

Poursabzi-Sangdeh et al. Manipulating and Measuring Model Interpretability. CHI'21 (Hairong)

Bansal et al. Does the Whole Exceed its Parts? The Effect of AI Explanations on Complementary Team Performance. CHI'21 (Daniel Castro Mesa)

Vasconcelos et al. Explanations Can Reduce Overreliance on AI Systems During Decision-Making. CSCW'23 (Nishtha)

Chen et al. Machine Explanations and Human Understanding. FAccT'23 (Jiaxin)

Feb 19 Explainable AI: Intervention Designs

Leading presenters: Akshay, Daniel Castro Mesa

Required

Abdul et al. COGAM: Measuring and Moderating Cognitive Load in Machine Learning Model Explanations. CHI'20

Lai et al. Selective Explanations: Leveraging Human Input to Align Explainable AI. CSCW'23

Optional

Wang et al. Designing Theory-Driven User-Centric Explainable AI. CHI'19 (Keerthana)

Ehsan et al. Expanding Explainability: Towards Social Transparency in AI systems. CHI'21 (Arjun)

Fel et al. Harmonizing the object recognition strategies of deep neural networks with humans. NeurIPS'22 (Yunsheng)

Nguyen et al. Visual correspondence-based explanations improve AI robustness and human-AI team accuracy. NeurIPS'22 (Daniel Chen)

Zhang and Lim. Towards Relatable Explainable AI with the Perceptual Process. CHI'22 (Aline)

Gajos and Mamykina. Do People Engage Cognitively with AI? Impact of AI Assistance on Incidental Learning. IUI'22 (Fu-Chia)

Danry et al. Don't Just Tell Me, Ask Me: AI Systems that Intelligently Frame Explanations as Questions Improve Human Logical Discernment Accuracy over Causal AI explanations. CHI'23 (Harry)

Slack et al. Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods. AIES'20 (Raymond)

Slack et al. Explaining machine learning models with interactive natural language conversations using TalkToModel. Nature Machine Intelligence, 2023 (Tania)

Miller. Explainable AI is Dead, Long Live Explainable AI!: Hypothesis-driven Decision Support using Evaluative AI. FAccT'23 (Hasan)

Project: Proposal due (by EOD, Feb 21)
Feb 26 Trust and Reliance on AI: Empirical Studies and Computational Models

Leading presenters: Daniel Chen, Nishtha, Tongyan

Required

Rechkemmer and Yin. When Confidence Meets Accuracy: Exploring the Effects of Multiple Performance Indicators on Trust in Machine Learning Models. CHI'22

He et al. Knowing About Knowing: An Illusion of Human Competence Can Hinder Appropriate Reliance on AI Systems. CHI'23

Tejeda et al. AI-Assisted Decision-making: A Cognitive Modeling Approach to Infer Latent Reliance Strategies. Computational Brain & Behavior, 2022

Optional

Bansal et al. Beyond Accuracy: The Role of Mental Models in Human-AI Team Performance. HCOMP'19 (Yunze)

Bansal et al. Updates in Human-AI Teams: Understanding and Addressing the Performance/Compatibility Tradeoff. AAAI'19 (Harry)

Zhang et al. Effect of confidence and explanation on accuracy and trust calibration in AI-assisted decision making. FAT*'20 (Fu-Chia)

Nourani et al. The Role of Domain Expertise in User Trust and the Impact of First Impressions with Intelligent Systems. HCOMP'20 (Dominic)

Lu and Yin. Human Reliance on Machine Learning Models When Performance Feedback is Limited: Heuristics and Risks. CHI'21 (Jiaxin)

Guo and Yang. Modeling and Predicting Trust Dynamics in Human-Robot Teaming: A Bayesian Inference Approach. International Journal of Social Robotics, 2021 (Hairong)

Azevedo-Sa et al. Real-Time Estimation of Drivers' Trust in Automated Driving Systems. International Journal of Social Robotics, 2021 (Yunsheng)

Wang et al. Will You Accept the AI Recommendation? Predicting Human Behavior in AI-Assisted Decision Making. WWW'22 (Tania)

Papenmeier et al. It's Complicated: The Relationship between User Trust, Model Accuracy and Explanations in AI. ACM Transactions on Computer-Human Interaction, 2022 (Emmanuel)

Chong et al. Human confidence in artificial intelligence and in themselves: The evolution and impact of confidence on adoption of AI advice. Computers in Human Behavior, 2022 (Akshay)

Li et al. Modeling Human Trust and Reliance in AI-Assisted Decision Making: A Markovian Approach. AAAI'23 (Yijiang)

Chen et al. Understanding the Role of Human Intuition on Reliance in Human-AI Decision-Making with Explanations. CSCW'23 (Aline)

Assignment: Due by EOD, Feb 26
Mar 4 Trust and Reliance on AI: Intervention Designs

Leading presenters: Emmanuel, Harry

Required

Ma et al. Who Should I Trust: AI or Myself? Leveraging Human and AI Correctness Likelihood to Promote Appropriate Trust in AI-Assisted Decision-Making. CHI'23

Bansal et al. Is the Most Accurate AI the Best Teammate? Optimizing AI for Teamwork. AAAI'21

Optional

Bucinca et al. To Trust or to Think: Cognitive Forcing Functions Can Reduce Overreliance on AI in AI-assisted Decision-making. CSCW'21 (Nishtha)

Rastogi et al. Deciding Fast and Slow: The Role of Cognitive Biases in AI-assisted Decision-making. CSCW'22 (Mohan)

Vodrahalli et al. Uncalibrated Models Can Improve Human-AI Collaboration. NeurIPS'22 (Yunsheng)

Benz and Rodriguez. Human-Aligned Calibration for AI-Assisted Decision Making. NeurIPS'23 (Hasan)

Cabrera et al. Improving Human-AI Collaboration With Descriptions of AI Behavior. CSCW'23 (Raymond)

Noti and Chen. Learning When to Advise Human Decision Makers. IJCAI'23 (Daniel Chen)

Li et al. Strategic Adversarial Attacks in AI-assisted Decision Making to Reduce Human Trust and Reliance. IJCAI'23 (Dominic)

Inkpen et al. Advancing Human-AI Complementarity: The Impact of User Expertise and Algorithmic Tuning on Joint Decision Making. ACM Transactions on Computer-Human Interaction, 2023 (Keerthana)

Mar 11 No class (Spring break)
Mar 18 Bias and Fairness in AI: Definitions and Methods

Leading presenters: Arjun, Keerthana, Jiaxin

Required

Angwin et al. Machine Bias. ProPublica, 2016

Srinivasan and Chander. Biases in AI Systems: A Survey for Practitioners. Communications of the ACM, 2021

Zafar et al. Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment. WWW'17

Optional

Hardt et al. Equality of Opportunity in Supervised Learning. NeurIPS'16 (Pangpang)

Caliskan et al. Semantics derived automatically from language corpora contain human-like biases. Science, 2017 (Yoonhyuck)

Kusner et al. Counterfactual Fairness. NeurIPS'17 (Yijiang)

Otterbacher et al. Competent Men and Warm Women: Gender Stereotypes and Backlash in Image Search Results. CHI'17 (Harry)

Hube et al. Understanding and Mitigating Worker Biases in the Crowdsourced Collection of Subjective Judgments. CHI'19 (Akshay)

Bellamy et al. AI Fairness 360: An extensible toolkit for detecting and mitigating algorithmic bias. IBM Journal of Research and Development, 2019 (Tongyan)

Karimi et al. Algorithmic Recourse: from Counterfactual Explanations to Interventions. FAccT'21 (Emmanuel)

Mehrabi et al. A Survey on Bias and Fairness in Machine Learning. ACM Computing Surveys, 2021 (Daniel Castro Mesa)

Mitchell et al. Algorithmic Fairness: Choices, Assumptions, and Definitions. Annual Review of Statistics and Its Application, 2021 (Aline)

Sap et al. Annotators with Attitudes: How Annotator Beliefs And Identities Bias Toxic Language Detection. NAACL'22 (Hasan)

Mar 25 Bias and Fairness in AI: Empirical Studies and Intervention Designs

Leading presenters: Raymond, Mohan, Dominic

Required

Wang et al. Factors Influencing Perceived Fairness in Algorithmic Decision-Making: Algorithm Outcomes, Development Procedures, and Individual Differences. CHI'20

Gordon et al. Jury Learning: Integrating Dissenting Voices into Machine Learning Models. CHI'22

Green and Chen. Disparate Interactions: An Algorithm-in-the-Loop Analysis of Fairness in Risk Assessments. FAT*'19

Optional

Liu et al. Delayed Impact of Fair Machine Learning. ICML'18 (Hairong)

Grgic-Hlaca et al. Human Perceptions of Fairness in Algorithmic Decision Making: A Case Study of Criminal Risk Prediction. WWW'18 (Nishtha)

Srivastava et al. Mathematical Notions vs. Human Perception of Fairness: A Descriptive Approach to Fairness for Machine Learning. KDD'19 (Pangpang)

Saxena et al. How Do Fairness Definitions Fare?: Examining Public Attitudes Towards Algorithmic Definitions of Fairness. AIES'19 (Keerthana)

Dodge et al. Explaining models: An empirical study of how explanations impact fairness judgment. IUI'19 (Emmanuel)

Zhang et al. Group Retention when Using Machine Learning in Sequential Decision Making: the Interplay between User Dynamics and Fairness. NeurIPS'19 (Jiaxin)

Zhang et al. How do fair decisions fare in long-term qualification? NeurIPS'20 (Yunsheng)

Gemalmaz and Yin. Accounting for Confirmation Bias in Crowdsourced Label Aggregation. IJCAI'21 (Arjun)

Cheng et al. How Child Welfare Workers Reduce Racial Disparities in Algorithmic Decisions. CHI'22 (Tongyan)

Wang et al. The Effects of AI Biases and Explanations on Human Decision Fairness: A Case Study of Bidding in Rental Housing Markets. IJCAI'23 (Yijiang)

Project: Midterm report due (by EOD, Mar 27)
Apr 1 Human-AI Collaboration and Teaming

Leading presenters: Pangpang, Yunsheng

Required

Lai et al. Human-AI Collaboration via Conditional Delegation: A Case Study of Content Moderation. CHI'22

Hong et al. Learning to Influence Human Behavior with Offline Reinforcement Learning. NeurIPS'23

Optional

Madras et al. Predict Responsibly: Improving Fairness and Accuracy by Learning to Defer. NeurIPS'18 (Jiaxin)

Nushi et al. Towards Accountable AI: Hybrid Human-Machine Analyses for Characterizing System Failure. HCOMP'18 (Akshay)

Carroll et al. On the Utility of Learning about Humans for Human-AI Coordination. NeurIPS'19 (Hasan)

Chakraborti et al. Balancing Explicability and Explanations for Human-Aware Planning. IJCAI'19 (Arjun)

Gennatas et al. Expert-augmented machine learning. PNAS, 2020 (Yoonhyuck)

Xiao et al. FRESH: Interactive Reward Shaping in High-Dimensional State Spaces using Human Feedback. AAMAS'20 (Yunze)

Wilder et al. Learning to Complement Humans. IJCAI'20 (Raymond)

Gao et al. Human-AI Collaboration with Bandit Feedback. IJCAI'21 (Daniel Castro Mesa)

Steyvers et al. Bayesian modeling of human-AI complementarity. PNAS, 2022 (Harry)

Callaway et al. Leveraging artificial intelligence to improve people's planning strategies. PNAS, 2022 (Daniel Chen)

Schelble et al. Let's Think Together! Assessing Shared Mental Models, Performance, and Trust in Human-Agent Teams. GROUP'22 (Mohan)

Apr 8 Human Interaction with Large Language Models

Leading presenters: Hairong, Yoonhyuck, Yijiang

Required

Zamfirescu-Pereira et al. Why Johnny Can't Prompt: How Non-AI Experts Try (and Fail) to Design LLM Prompts. CHI'23

Jakesch et al. Co-Writing with Opinionated Language Models Affects Users' Views. CHI'23

Li et al. CoAnnotating: Uncertainty-Guided Work Allocation between Human and Large Language Models for Data Annotation. EMNLP'23

Optional

Wu et al. AI Chains: Transparent and Controllable Human-AI Interaction by Chaining Large Language Model Prompts. CHI'22 (Tongyan)

Yuan et al. Wordcraft: Story Writing With Large Language Models. IUI'22 (Aline)

Noy and Zhang. Experimental evidence on the productivity effects of generative artificial intelligence. Science, 2023 (Tania)

Argyle et al. Leveraging AI for democratic discourse: Chat interventions can improve online political conversations at scale. PNAS, 2023 (Keerthana)

Wang et al. PopBlends: Strategies for Conceptual Blending with Large Language Models. CHI'23 (Fu-Chia)

Chung et al. Increasing Diversity While Maintaining Accuracy: Text Data Generation with Large Language Models and Human Interventions. ACL'23 (Yunze)

Fok et al. Scim: Intelligent Skimming Support for Scientific Papers. IUI'23 (Emmanuel)

Rastogi et al. Supporting Human-AI Collaboration in Auditing LLMs with LLMs. AIES'23 (Nishtha)

Xiao et al. Supporting Qualitative Analysis with Large Language Models: Combining Codebook with GPT-3 for Deductive Coding. IUI'23 Companion (Arjun)

Apr 15 AI, Ethics, and Society

Leading presenters: Yunze, Aline

Required

Awad et al. The Moral Machine experiment. Nature, 2018

Gabriel. Artificial Intelligence, Values, and Alignment. Minds and Machines, 2020

Optional

Conitzer et al. Moral Decision Making Frameworks for Artificial Intelligence. AAAI'17 (Tania)

Zhu et al. Value-Sensitive Algorithm Design: Method, Case Study, and Lessons. CSCW'18 (Mohan)

Smith et al. Keeping Community in the Loop: Understanding Wikipedia Stakeholder Values for Machine Learning-Based Systems. CHI'20 (Fu-Chia)

Tolmeijer et al. Capable but Amoral? Comparing AI and Human Expert Collaboration in Ethical Decision Making. CHI'22 (Daniel Castro Mesa)

Zhang et al. Artificial intelligence and moral dilemmas: Perception of ethical decision-making in AI. Journal of Experimental Social Psychology, 2022 (Yoonhyuck)

Weidinger et al. Using the Veil of Ignorance to align AI systems with principles of justice. PNAS, 2023 (Hairong)

Narayanan et al. How does Value Similarity affect Human Reliance in AI-Assisted Ethical Decision Making? AIES'23 (Raymond)

Rezwana and Maher. User Perspectives on Ethical Challenges in Human-AI Co-Creativity: A Design Fiction Study. C&C'23 (Yijiang)

Apr 22 Final project presentations

Project: Final project due (by EOD, Apr 28)