- Ethics and Social Impacts of AI
- Explainable Artificial Intelligence (XAI)
- Adversarial Robustness in Machine Learning
- Privacy-Preserving Technologies in Data
- Privacy, Security, and Data Protection
- Digital Economy and Work Transformation
- Sexuality, Behavior, and Technology
- Law, AI, and Intellectual Property
- Health Systems, Economic Evaluations, Quality of Life
- Decision-Making and Behavioral Economics
- Experimental Behavioral Economics Studies
- Law, Economics, and Judicial Systems
- Hate Speech and Cyberbullying Detection
- Topic Modeling
- Economic and Environmental Valuation
- Domain Adaptation and Few-Shot Learning
- Wikis in Education and Collaboration
- Artificial Intelligence in Law
- Qualitative Comparative Analysis Research
- Innovative Human-Technology Interaction
- Multimodal Machine Learning Applications
- Mobile Crowdsensing and Crowdsourcing
- Blockchain Technology Applications and Security
- Big Data and Business Intelligence
- Social Media and Politics
Microsoft (United States)
2017-2025
Microsoft Research New York City (United States)
2017-2025
Microsoft Research (United Kingdom)
2013-2024
Cornell University
2013-2024
University of Chicago
2020
Google (United States)
2020
University of Massachusetts Amherst
2020
University of Maryland, College Park
2020
Princeton University
2014-2017
Princeton Public Schools
2016
Advocates of algorithmic techniques like data mining argue that these techniques eliminate human biases from the decision-making process. But an algorithm is only as good as the data it works with. Data is frequently imperfect in ways that allow these algorithms to inherit the prejudices of prior decision makers. In other cases, data may simply reflect the widespread biases that persist in society at large. In still others, data mining can discover surprisingly useful regularities that are really just preexisting patterns of exclusion and inequality. Unthinking reliance on data mining can deny...
We survey 146 papers analyzing “bias” in NLP systems, finding that their motivations are often vague, inconsistent, and lacking in normative reasoning, despite the fact that analyzing “bias” is an inherently normative process. We further find that these papers’ proposed quantitative techniques for measuring or mitigating “bias” are poorly matched to their motivations and do not engage with the relevant literature outside of NLP. Based on these findings, we describe the beginnings of a path forward by proposing three recommendations that should guide work analyzing “bias” in NLP systems. These recommendations rest on a greater...
A recent flurry of research activity has attempted to quantitatively define "fairness" for decisions based on statistical and machine learning (ML) predictions. The rapid growth of this new field has led to wildly inconsistent terminology and notation, presenting a serious challenge for cataloguing and comparing definitions. This paper attempts to bring much-needed order. First, we explicate the various choices and assumptions made---often implicitly---to justify the use of prediction-based decisions. Next, we show how such choices and assumptions can...
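A minimal sketch of two of the quantitative "fairness" definitions the survey above catalogues, applied to toy binary predictions. The data, function names, and group encoding are our own illustration, not the paper's:

```python
# Hypothetical toy illustration: two common group-fairness measures
# computed on made-up binary predictions for two groups (0 and 1).

def demographic_parity_gap(preds, groups):
    """Absolute difference in positive-prediction rates between groups."""
    def rate(g):
        members = [p for p, grp in zip(preds, groups) if grp == g]
        return sum(members) / len(members)
    return abs(rate(0) - rate(1))

def equal_opportunity_gap(preds, labels, groups):
    """Absolute difference in true-positive rates between groups."""
    def tpr(g):
        pos = [p for p, y, grp in zip(preds, labels, groups)
               if grp == g and y == 1]
        return sum(pos) / len(pos)
    return abs(tpr(0) - tpr(1))

preds  = [1, 0, 1, 1, 0, 0, 1, 0]
labels = [1, 0, 1, 0, 1, 0, 1, 1]
groups = [0, 0, 0, 0, 1, 1, 1, 1]

dp_gap = demographic_parity_gap(preds, groups)           # 0.75 - 0.25 = 0.5
eo_gap = equal_opportunity_gap(preds, labels, groups)    # 1.0 - 1/3
```

The two gaps disagree about which disparity matters, which is exactly the kind of inconsistency across definitions the survey tries to systematize.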
There has been rapidly growing interest in the use of algorithms in hiring, especially as a means to address or mitigate bias. Yet, to date, little is known about how these methods are used in practice. How are algorithmic assessments built, validated, and examined for bias? In this work, we document and analyze the claims and practices of companies offering algorithms for employment assessment. In particular, we identify vendors of algorithmic pre-employment assessments (i.e., algorithms to screen candidates), document what they have disclosed about their development and validation procedures,...
Algorithmic decision-making has become synonymous with inexplicable decision-making, but what makes algorithms so difficult to explain? This Article examines what sets machine learning apart from other ways of developing rules for decision-making and the problem these properties pose for explanation. We show that machine learning models can be both inscrutable and nonintuitive and that these are related, but distinct, properties. Calls for explanation have treated the two problems as one and the same, but disentangling the two reveals that they demand very different responses. Dealing...
A recent normative turn in computer science has brought concerns about fairness, bias, and accountability to the core of the field. Yet recent scholarship has warned that much of this technical work treats problematic features of the status quo as fixed, and fails to address deeper patterns of injustice and inequality. While acknowledging these critiques, we posit that computational research has valuable roles to play in addressing social problems -- roles whose value can be recognized even from a perspective that aspires toward fundamental social change. In...
Formulating data science problems is an uncertain and difficult process. It requires various forms of discretionary work to translate high-level objectives or strategic goals into tractable problems, necessitating, among other things, the identification of appropriate target variables and proxies. While these choices are rarely self-evident, normative assessments of data science projects often take them for granted, even though different translations can raise profoundly different ethical concerns. Whether we consider a...
Consumer‐sourced rating systems are a dominant method of worker evaluation in platform‐based work. These systems facilitate the semi‐automated management of large, disaggregated workforces and the rapid growth of service platforms—but they may also represent a potential avenue for employment discrimination that negatively impacts members of legally protected groups. We analyze the Uber platform as a case study to explore how bias may creep into evaluations of drivers through consumer‐sourced rating systems, and draw on social science...
Algorithms have developed into somewhat of a modern myth. They "compet[e] for our living rooms" (Slavin 2011), "determine how a billion plus people get where they're going" (McGee 2011), "have already written symphonies as moving as those composed by Beethoven" (Steiner 2012), and "free us from sorting through multitudes of irrelevant results" (Spring 2011). Nevertheless, the nature and implications of such orderings are far from clear. What exactly is it that algorithms "do"? What is the role attributed to "algorithms" in these...
Recognizing the inherent limitations of consent and anonymity.
Counterfactual explanations are gaining prominence within technical, legal, and business circles as a way to explain the decisions of a machine learning model. These explanations share a trait with the long-established "principal reason" explanations required by U.S. credit laws: they both explain a decision by highlighting a set of features deemed most relevant--and withholding others. These "feature-highlighting explanations" have several desirable properties: They place no constraints on model complexity, do not require model disclosure, detail what...
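The idea of a feature-highlighting counterfactual can be sketched concretely. The following is our own illustration, not the paper's method: an assumed linear scoring model over made-up features, where the explanation reports the single smallest feature change that would flip a denial into an approval, while withholding the rest of the model:

```python
# Hypothetical sketch: a single-feature counterfactual explanation for
# an assumed linear credit-scoring model. Weights, threshold, and feature
# names are invented for illustration.

WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
THRESHOLD = 1.0  # score >= THRESHOLD means "approve"

def score(applicant):
    return sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def counterfactual(applicant):
    """Smallest single-feature change that would flip a denial to approval."""
    gap = THRESHOLD - score(applicant)
    if gap <= 0:
        return None  # already approved, nothing to explain
    # For each feature, the change needed is gap / weight; pick the smallest.
    best = min(WEIGHTS, key=lambda f: abs(gap / WEIGHTS[f]))
    return best, gap / WEIGHTS[best]

applicant = {"income": 1.0, "debt": 0.5, "years_employed": 1.0}
# score = 0.5 - 0.4 + 0.3 = 0.4, so gap = 0.6; required changes:
# income +1.2, debt -0.75, years_employed +2.0 -> highlight "debt"
feature, delta = counterfactual(applicant)
```

Note how the explanation surfaces one actionable feature ("reduce debt by 0.75") without disclosing the model's other weights, which is the property the abstract highlights.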
Recent scholarship has brought attention to the fact that there often exist multiple models for a given prediction task with equal accuracy that differ in their individual-level predictions or aggregate properties. This phenomenon—which we call model multiplicity—can introduce a good deal of flexibility into the model selection process, creating a range of exciting opportunities. By demonstrating that there are many different ways of making equally accurate predictions, multiplicity gives developers the freedom to prioritize other...
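Model multiplicity is easy to exhibit on toy data. In this sketch (ours, with made-up loan records), two different decision rules achieve identical accuracy yet disagree on specific individuals:

```python
# Toy illustration of model multiplicity: two rules with equal accuracy
# on the same task that give different individual-level predictions.

data = [  # (age, income, repaid_loan) -- invented records
    (25, 40, 0), (30, 80, 1), (45, 30, 0), (50, 90, 1),
    (35, 60, 1), (28, 55, 0), (60, 70, 1), (22, 45, 0),
]

def model_a(age, income):
    return int(income >= 55)   # rule based only on income

def model_b(age, income):
    return int(age >= 30)      # rule based only on age

def accuracy(model):
    return sum(model(a, i) == y for a, i, y in data) / len(data)

# Both rules are right on 7 of 8 records, but not on the same records:
disagreements = [(a, i) for a, i, _ in data
                 if model_a(a, i) != model_b(a, i)]
```

Since both models are equally accurate, a developer is free to choose between them on other grounds, which is precisely the flexibility (and the concern) the abstract describes.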
We formalize predictive optimization, a category of decision-making algorithms that use machine learning (ML) to predict future outcomes of interest about individuals. For example, pre-trial risk prediction tools such as COMPAS use ML to predict whether an individual will re-offend in the future. Our thesis is that predictive optimization raises a distinctive and serious set of normative concerns that cause it to fail on its own terms. To test this, we review 387 reports, articles, and web pages from academia, industry, non-profits, governments,...
Variance in predictions across different trained models is a significant, under-explored source of error in fair binary classification. In practice, the variance on some data examples is so large that decisions can be effectively arbitrary. To investigate this problem, we take an experimental approach and make four overarching contributions. We: 1) Define a metric called self-consistency, derived from variance, which we use as a proxy for measuring and reducing arbitrariness; 2) Develop an ensembling algorithm...
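The variance-driven arbitrariness described above can be sketched with bootstrap resampling. This is our own stand-in for the paper's metric, using an invented 1-D dataset and a trivial threshold classifier: "self-consistency" here is the probability that two models drawn from the training process agree on a given example:

```python
# Hedged sketch (not the paper's implementation): estimate per-example
# prediction variance by training many threshold models on bootstrap
# resamples of a made-up 1-D dataset.

import random

def train_stump(sample):
    """'Train' a classifier: predict 1 iff x >= mean feature of the sample."""
    thr = sum(x for x, _ in sample) / len(sample)
    return lambda x: int(x >= thr)

random.seed(0)
data = [(x, int(x >= 5)) for x in range(10)]  # toy labeled points 0..9
models = [train_stump(random.choices(data, k=len(data)))
          for _ in range(50)]

def self_consistency(x):
    """Probability two randomly drawn models agree on example x."""
    p = sum(m(x) for m in models) / len(models)  # fraction predicting 1
    return p * p + (1 - p) * (1 - p)             # ranges from 0.5 to 1.0

scores = {x: self_consistency(x) for x, _ in data}
```

Examples far from the decision boundary get self-consistency near 1, while examples near the bootstrap-dependent threshold can sit close to the 0.5 floor, where the model's decision is effectively a coin flip.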
Seeking more common ground between data scientists and their critics.
Designing technical systems to be resistant to bias and discrimination represents vital new terrain for researchers, policymakers, and the anti-discrimination project more broadly. We consider bias and discrimination in the context of popular online dating and hookup platforms in the United States, which we call intimate platforms. Drawing on work in social-justice-oriented Queer HCI, we review design features and their potential role in exacerbating or mitigating interpersonal bias. We argue that focusing on platform design can reveal opportunities to reshape...