Below is an incomplete list of books and papers in the field of crowdsourcing and human-AI interaction for interested readers, grouped by topic (in addition to the required and optional papers we discuss in class).

Books

Law and von Ahn. Human Computation. 2011

Molnar. Interpretable Machine Learning. 2019

Kearns and Roth. The Ethical Algorithm: The Science of Socially Aware Algorithm Design. 2019

Understanding Crowd Workers: Demographics, Size, Work Experience, Concerns and More

Ross et al. Who are the Crowdworkers?: Shifting Demographics in Mechanical Turk. CHI'10 Extended Abstracts

Downs et al. Are Your Participants Gaming the System?: Screening Mechanical Turk Workers. CHI'10

Horton and Chilton. The Labor Economics of Paid Crowdsourcing. EC'10

Suri et al. Honesty in an Online Labor Market. HCOMP Workshop @ AAAI'11

Berinsky et al. Evaluating Online Labor Markets for Experimental Research: Amazon.com's Mechanical Turk. Political Analysis, 2012

Paolacci and Chandler. Inside the Turk: Understanding Mechanical Turk as a Participant Pool. Current Directions in Psychological Science, 2014

Pavlick et al. The Language Demographics of Amazon Mechanical Turk. Transactions of the ACL, 2014

Chandler et al. Nonnaïveté among Amazon Mechanical Turk Workers: Consequences and Solutions for Behavioral Researchers. Behavior Research Methods, March 2014

Bartneck et al. Comparing the Similarity of Responses Received from Studies in Amazon's Mechanical Turk to Studies Conducted Online and with Direct Recruitment. PLOS ONE, 2015

Huff and Tingley. "Who are These People?" Evaluating the Demographic Characteristics and Political Preferences of MTurk Survey Respondents. Research and Politics, 2015

Brawley and Pury. Work Experience on MTurk. Computers in Human Behavior, January 2016

McInnis et al. Taking a HIT: Designing around Rejection, Mistrust, Risk, and Workers' Experiences in Amazon Mechanical Turk. CHI'16

Xia et al. "Our Privacy Needs to be Protected at All Costs": Crowd Workers' Privacy Experiences on Amazon Mechanical Turk. CSCW'18

Hara et al. A Data-Driven Analysis of Workers' Earnings on Amazon Mechanical Turk. CHI'18

Sannon and Cosley. Privacy, Power, and Invisible Labor on Amazon Mechanical Turk. CHI'19

Lascau et al. Monotasking or Multitasking: Designing for Crowdworkers' Preferences. CHI'19

Jacques and Kristensson. Crowdworker Economics in the Gig Economy. CHI'19

Williams et al. The Perpetual Work Life of Crowdworkers: How Tooling Practices Increase Fragmentation in Crowdwork. CSCW'19

Crowdsourcing Incentive Design and Control

Mason and Watts. Financial Incentives and the "Performance of Crowds". HCOMP'09

Harris. You're Hired! An Examination of Crowdsourcing Incentive Models in Human Resource Tasks. CSDM Workshop @ WSDM'11

Kaufmann et al. More Than Fun and Money. Worker Motivation in Crowdsourcing--A Study on Mechanical Turk. AMCIS'11

Faridani et al. What's the Right Price? Pricing Tasks for Finishing on Time. HCOMP Workshop @ AAAI'11

Chen et al. Optimistic Knowledge Gradient Policy for Optimal Budget Allocation in Crowdsourcing. ICML'13

Huang and Fu. Don't Hide in the Crowd!: Increasing Social Transparency between Peer Workers Improves Crowdsourcing Outcomes. CHI'13

Tran-Thanh et al. Efficient Budget Allocation with Accuracy Guarantees for Crowdsourcing Classification Tasks. AAMAS'13

Mao et al. Volunteering Versus Work for Pay: Incentives and Tradeoffs in Crowdsourcing. HCOMP'13

Lee et al. Experiments on Motivational Feedback for Crowdsourced Workers. ICWSM'13

Raddick et al. Galaxy Zoo: Motivations of Citizen Scientists. 2013

Singer and Mittal. Pricing Mechanisms for Crowdsourcing Markets. WWW'13

Singla and Krause. Truthful Incentives in Crowdsourcing Tasks Using Regret Minimization Mechanisms. WWW'13

Difallah et al. Scaling-Up the Crowd: Micro-Task Pricing Schemes for Worker Retention and Latency Improvement. HCOMP'14

Rokicki et al. Competitive Game Designs for Improving the Cost Effectiveness of Crowdsourcing. CIKM'14

Teodoro et al. The Motivations and Experiences of the On-Demand Mobile Workforce. CSCW'14

Tran-Thanh et al. BudgetFix: Budget Limited Crowdsourcing for Interdependent Task Allocation with Quality Guarantees. AAMAS'14

Nov et al. Scientists@Home: What Drives the Quantity and Quality of Online Citizen Science Participation? PLOS ONE, April 2014

Yin et al. Monetary Interventions in Crowdsourcing Task Switching. HCOMP'14

Dergousoff and Mandryk. Mobile Gamification for Crowdsourcing Data Collection: Leveraging the Freemium Model. CHI'15

Radanovic and Faltings. Learning to Scale Payments in Crowdsourcing with PropeRBoost. HCOMP'16

Ho et al. Adaptive Contract Design for Crowdsourcing Markets: Bandit Algorithms for Repeated Principal-Agent Problems. Journal of Artificial Intelligence Research, 2016

Ikeda and Bernstein. Pay It Backward: Per-Task Payments on Crowdsourcing Platforms Reduce Productivity. CHI'16

Vaughan. Incentives and the Crowd. XRDS, Fall 2017

Xia and Muthukrishnan. Revenue-Maximizing Stable Pricing in Online Labor Markets. HCOMP'17

Ye et al. When Does More Money Work? Examining the Role of Perceived Fairness in Pay on the Performance Quality of Crowdworkers. ICWSM'17

Feyisetan and Simperl. Social Incentives in Paid Collaborative Crowdsourcing. TIST, September 2017

d'Eon et al. Paying Crowd Workers for Collaborative Work. CSCW'19

Task Assignment, Routing and Recommendation

Yuen et al. Task Recommendation in Crowdsourcing Systems. CrowdKDD'12

Celis et al. Adaptive Crowdsourcing for Temporal Crowds. WWW'13

Heidari and Kearns. Depth-Workload Tradeoffs for Workforce Organization. HCOMP'13

Ho et al. Adaptive Task Assignment for Crowdsourced Classification. ICML'13

Bragg et al. Parallel Task Routing for Crowdsourcing. HCOMP'14

Goel et al. Mechanism Design for Crowdsourcing Markets with Heterogeneous Tasks. HCOMP'14

Tran-Thanh et al. BudgetFix: Budget Limited Crowdsourcing for Interdependent Task Allocation with Quality Guarantees. AAMAS'14

Tran-Thanh et al. Efficient Crowdsourcing of Unknown Experts Using Bounded Multi-Armed Bandits. Artificial Intelligence, September 2014

Assadi et al. Online Assignment of Heterogeneous Tasks in Crowdsourcing Markets. HCOMP'15

Kobren et al. Getting More for Less: Optimized Crowdsourcing with Dynamic Tasks and Goals. WWW'15

Roy et al. Task Assignment Optimization in Knowledge-Intensive Crowdsourcing. VLDB Journal, August 2015

Zheng et al. QASCA: A Quality-Aware Task Assignment System for Crowdsourcing Applications. SIGMOD'15

Schnitzer et al. Perceived Task Similarities for Task Recommendation in Crowdsourcing Systems. WWW'16

Goncalves et al. Task Routing and Assignment in Crowdsourcing based on Cognitive Abilities. WWW'17

Pilourdault et al. Motivation-Aware Task Assignment in Crowdsourcing. EDBT, March 2017

Quality Assurance

Raykar et al. Supervised Learning from Multiple Experts: Whom to Trust When Everyone Lies a Bit. ICML'09

Ipeirotis et al. Quality Management on Amazon Mechanical Turk. HCOMP Workshop @ KDD'10

Karger et al. Iterative Learning for Reliable Crowdsourcing Systems. NIPS'11

Hansen et al. Quality Control Mechanisms for Crowdsourcing: Peer Review, Arbitration, & Expertise at FamilySearch Indexing. CSCW'13

Joglekar et al. Evaluating the Crowd with Confidence. KDD'13

Kajino et al. Clustering Crowds. AAAI'13

Mao et al. Better Human Computation Through Principled Voting. AAAI'13

Venanzi et al. Trust-Based Fusion of Untrustworthy Information in Crowdsourcing Applications. AAMAS'13

Aydin et al. Crowdsourcing for Multiple-Choice Question Answering. AAAI'14

Jagabathula et al. Reputation-Based Worker Filtering in Crowdsourcing. NIPS'14

Li et al. The Wisdom of Minority: Discovering and Targeting the Right Group of Workers for Crowdsourcing. WWW'14

Venanzi et al. Community-Based Bayesian Aggregation Models for Crowdsourcing. WWW'14

Kazai and Zitouni. Quality Management in Crowdsourcing using Gold Judges Behavior. WSDM'16

Augustin et al. Bayesian Aggregation of Categorical Distributions with Applications in Crowdsourcing. IJCAI'17

Gadiraju et al. Using Worker Self-Assessments for Competence-Based Pre-Selection in Crowdsourcing Microtasks. ACM Transactions on Computer-Human Interaction (TOCHI), September 2017

Daniel et al. Quality Control in Crowdsourcing: A Survey of Quality Attributes, Assessment Techniques, and Assurance Actions. ACM Computing Surveys, February 2018

Tian et al. Selective Verification Strategy for Learning from Crowds. AAAI'18

Doroudi et al. Not Everyone Writes Good Examples but Good Examples Can Come from Anywhere. HCOMP'19

Engaging the Crowd

Mao et al. Why Stop Now? Predicting Worker Engagement in Online Crowdsourcing. HCOMP'13

Dontcheva et al. Combining Crowdsourcing and Learning to Improve Engagement and Performance. CHI'14

Eveleigh et al. Designing for Dabblers and Deterring Drop-Outs in Citizen Science. CHI'14

Segal et al. Improving Productivity in Citizen Science through Controlled Intervention. WWW'15

Park et al. AI-Based Request Augmentation to Increase Crowdsourcing Participation. HCOMP'19

Crowdsourcing Complex Tasks

Law and Zhang. Towards Large-Scale Collaborative Planning: Answering High-Level Search Queries Using Human Computation. AAAI'11

Ambati et al. Collaborative Workflow for Crowdsourcing Translation. CSCW'12

Willett et al. Strategies for Crowdsourcing Social Data Analysis. CHI'12

André et al. Crowd Synthesis: Extracting Categories and Clusters from Complex Data. CSCW'14

Verroios and Bernstein. Context Trees: Crowdsourcing Global Understanding from Local Views. HCOMP'14

Goto et al. Understanding Crowdsourcing Workflow: Modeling and Optimizing Iterative and Parallel Processes. HCOMP'16

Gebru et al. Scalable Annotation of Fine-Grained Categories Without Experts. CHI'17

Chen et al. Cicero: Multi-Turn, Contextual Argumentation for Accurate Crowdsourcing. CHI'19

Venkatagiri et al. GroundTruth: Augmenting Expert Image Geolocation with Crowdsourcing and Shared Representations. CSCW'19

Chung et al. Efficient Elicitation Approaches to Estimate Collective Crowd Answers. CSCW'19

Mohanty et al. Second Opinion: Supporting Last-Mile Person Identification with Crowdsourcing and Face Recognition. HCOMP'19

Interpretable ML Methods

Craven and Shavlik. Extracting Tree-Structured Representations of Trained Networks. NIPS'95

Chipman et al. Making Sense of a Forest of Trees. Computing Science and Statistics, 1998

Poulin et al. Visual Explanation of Evidence in Additive Classifiers. IAAI'06

Johansson and Niklasson. Evolving Decision Trees Using Oracle Guides. CIDM'09

Strumbelj and Kononenko. An Efficient Explanation of Individual Classifications Using Game Theory. Journal of Machine Learning Research, March 2010

Lou et al. Intelligible Models for Classification and Regression. KDD'12

Lou et al. Accurate Intelligible Models with Pairwise Interactions. KDD'13

Kim et al. The Bayesian Case Model: A Generative Approach for Case-based Reasoning and Prototype Classification. NIPS'14

Xu et al. Show, Attend and Tell: Neural Image Caption Generation with Visual Attention. ICML'15

Letham et al. Interpretable Classifiers Using Rules and Bayesian Analysis: Building a Better Stroke Prediction Model. The Annals of Applied Statistics, September 2015

Kim et al. Mind the Gap: A Generative Approach to Interpretable Feature Selection and Extraction. NIPS'15

Zhou et al. Learning Deep Features for Discriminative Localization. CVPR'16

Lei et al. Rationalizing Neural Predictions. EMNLP'16

Lakkaraju et al. Interpretable Decision Sets: A Joint Framework for Description and Prediction. KDD'16

Kim et al. Examples Are Not Enough, Learn to Criticize! Criticism for Interpretability. NIPS'16

Selvaraju et al. Grad-CAM: Visual Explanations From Deep Networks via Gradient-Based Localization. ICCV'17

Fong and Vedaldi. Interpretable Explanations of Black Boxes by Meaningful Perturbation. ICCV'17

Zintgraf et al. Visualizing Deep Neural Network Decisions: Prediction Difference Analysis. ICLR'17

Jung et al. Simple Rules for Complex Decisions. 2017

Wu et al. Beyond Sparsity: Tree Regularization of Deep Models for Interpretability. AAAI'18

Ribeiro et al. Anchors: High-Precision Model-Agnostic Explanations. AAAI'18

Hara and Hayashi. Making Tree Ensembles Interpretable: A Bayesian Model Selection Approach. AISTATS'18

Guidotti et al. Local Rule-Based Explanations of Black Box Decision Systems. 2018

Hase et al. Interpretable Image Recognition with Hierarchical Prototypes. HCOMP'19

Fair ML: Problems, Definitions and Methods

Dwork et al. Fairness through Awareness. ITCS'12

Zemel et al. Learning Fair Representations. ICML'13

Feldman et al. Certifying and Removing Disparate Impact. KDD'15

Bolukbasi et al. Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings. NIPS'16

Hardt et al. Equality of Opportunity in Supervised Learning. NIPS'16

Joseph et al. Fairness in Learning: Classic and Contextual Bandits. NIPS'16

Chouldechova. Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments. Big Data, June 2017

Jabbari et al. Fairness in Reinforcement Learning. ICML'17

Chen et al. Why is My Classifier Discriminatory? NIPS'18

Kim et al. Fairness through Computationally-Bounded Awareness. NIPS'18

Buolamwini and Gebru. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. ACM FAT*'18

Kearns et al. Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness. ICML'18

Liu et al. Delayed Impact of Fair Machine Learning. ICML'18

HCI for Fair, Accountable, Transparent and Explainable AI

Krause et al. Interacting with Predictions: Visual Inspection of Black-box Machine Learning Models. CHI'16

Abdul et al. Trends and Trajectories for Explainable, Accountable and Intelligible Systems: An HCI Research Agenda. CHI'18

Veale et al. Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making. CHI'18

Wang et al. Designing Theory-Driven User-Centric Explainable AI. CHI'19

Yang et al. Unremarkable AI: Fitting Intelligent Decision Support into Critical, Clinical Decision-Making Processes. CHI'19

Schaffer et al. I Can Do Better Than Your AI: Expertise and Explanations. IUI'19