Solon Barocas

ORCID: 0000-0003-4577-466X
Research Areas
  • Ethics and Social Impacts of AI
  • Explainable Artificial Intelligence (XAI)
  • Adversarial Robustness in Machine Learning
  • Privacy-Preserving Technologies in Data
  • Privacy, Security, and Data Protection
  • Digital Economy and Work Transformation
  • Sexuality, Behavior, and Technology
  • Law, AI, and Intellectual Property
  • Health Systems, Economic Evaluations, Quality of Life
  • Decision-Making and Behavioral Economics
  • Experimental Behavioral Economics Studies
  • Law, Economics, and Judicial Systems
  • Hate Speech and Cyberbullying Detection
  • Topic Modeling
  • Economic and Environmental Valuation
  • Domain Adaptation and Few-Shot Learning
  • Wikis in Education and Collaboration
  • Artificial Intelligence in Law
  • Qualitative Comparative Analysis Research
  • Innovative Human-Technology Interaction
  • Multimodal Machine Learning Applications
  • Mobile Crowdsensing and Crowdsourcing
  • Blockchain Technology Applications and Security
  • Big Data and Business Intelligence
  • Social Media and Politics

Microsoft (United States)
2017-2025

Microsoft Research New York City (United States)
2017-2025

Microsoft Research (United Kingdom)
2013-2024

Cornell University
2013-2024

University of Chicago
2020

Google (United States)
2020

University of Massachusetts Amherst
2020

University of Maryland, College Park
2020

Princeton University
2014-2017

Princeton Public Schools
2016

Advocates of algorithmic techniques like data mining argue that these techniques eliminate human biases from the decision-making process. But an algorithm is only as good as the data it works with. Data is frequently imperfect in ways that allow these algorithms to inherit the prejudices of prior decision makers. In other cases, data may simply reflect the widespread biases that persist in society at large. In still others, data mining can discover surprisingly useful regularities that are really just preexisting patterns of exclusion and inequality. Unthinking reliance on data mining can deny...

10.2139/ssrn.2477899 article EN SSRN Electronic Journal 2016-01-01

Advocates of algorithmic techniques like data mining argue that these techniques eliminate human biases from the decision-making process. But an algorithm is only as good as the data it works with. Data is frequently imperfect in ways that allow these algorithms to inherit the prejudices of prior decision makers. In other cases, data may simply reflect the widespread biases that persist in society at large. In still others, data mining can discover surprisingly useful regularities that are really just preexisting patterns of exclusion and inequality. Unthinking reliance on data mining can deny...

10.15779/z38bg31 article EN California Law Review 2016-01-01

We survey 146 papers analyzing “bias” in NLP systems, finding that their motivations are often vague, inconsistent, and lacking in normative reasoning, despite the fact that analyzing “bias” is an inherently normative process. We further find that these papers’ proposed quantitative techniques for measuring or mitigating “bias” are poorly matched to their motivations and do not engage with the relevant literature outside of NLP. Based on these findings, we describe the beginnings of a path forward by proposing three recommendations that should guide work analyzing “bias” in NLP systems. These recommendations rest on a greater...

10.18653/v1/2020.acl-main.485 article EN cc-by 2020-01-01

A recent flurry of research activity has attempted to quantitatively define "fairness" for decisions based on statistical and machine learning (ML) predictions. The rapid growth of this new field has led to wildly inconsistent terminology and notation, presenting a serious challenge for cataloguing and comparing definitions. This paper attempts to bring much-needed order. First, we explicate the various choices and assumptions made, often implicitly, to justify the use of prediction-based decisions. Next, we show how such choices and assumptions can...

10.1146/annurev-statistics-042720-125902 article EN other-oa Annual Review of Statistics and Its Application 2020-11-10
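To make the cataloguing problem concrete, two of the most commonly compared fairness definitions can be computed in a few lines. The sketch below is illustrative only: the function names, toy data, and choice of binary groups are assumptions, not anything from the paper.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between the two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_gap(y_true, y_pred, group):
    """Absolute difference in true-positive rates between the two groups."""
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

# Toy example: 8 individuals, binary predictions and group membership.
y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(demographic_parity_gap(y_pred, group))         # positive rates: 0.25 vs 0.75
print(equal_opportunity_gap(y_true, y_pred, group))  # TPRs: 0.5 vs 1.0
```

Even in this toy case, the two definitions measure different quantities and need not agree, which is part of why the paper's call for consistent terminology matters.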

There has been rapidly growing interest in the use of algorithms in hiring, especially as a means to address or mitigate bias. Yet, to date, little is known about how these methods are used in practice. How are algorithmic assessments built, validated, and examined for bias? In this work, we document and analyze the claims and practices of companies offering algorithms for employment assessment. In particular, we identify vendors of algorithmic pre-employment assessments (i.e., algorithms to screen candidates), document what they have disclosed about their development and validation procedures,...

10.1145/3351095.3372828 preprint EN 2020-01-27

Algorithmic decision-making has become synonymous with inexplicable decision-making, but what makes algorithms so difficult to explain? This Article examines what sets machine learning apart from other ways of developing rules for decision-making and the problem these properties pose for explanation. We show that machine learning models can be both inscrutable and nonintuitive, and that these are related, but distinct, properties. Calls for explanation have treated the two problems as one and the same, but disentangling them reveals that they demand very different responses. Dealing...

10.2139/ssrn.3126971 article EN SSRN Electronic Journal 2018-01-01

A recent normative turn in computer science has brought concerns about fairness, bias, and accountability to the core of the field. Yet recent scholarship has warned that much of this technical work treats problematic features of the status quo as fixed, and fails to address deeper patterns of injustice and inequality. While acknowledging these critiques, we posit that computational research has valuable roles to play in addressing social problems, roles whose value can be recognized even from a perspective that aspires toward fundamental change. In...

10.1145/3351095.3372871 preprint EN 2020-01-27

Formulating data science problems is an uncertain and difficult process. It requires various forms of discretionary work to translate high-level objectives or strategic goals into tractable problems, necessitating, among other things, the identification of appropriate target variables and proxies. While these choices are rarely self-evident, normative assessments of data science projects often take them for granted, even though different translations can raise profoundly different ethical concerns. Whether we consider a...

10.1145/3287560.3287567 preprint EN 2019-01-09

Consumer-sourced rating systems are a dominant method of worker evaluation in platform-based work. These systems facilitate the semi-automated management of large, disaggregated workforces and the rapid growth of service platforms, but they may also represent a potential avenue for employment discrimination that negatively impacts members of legally protected groups. We analyze the Uber platform as a case study to explore how bias may creep into evaluations of drivers through consumer-sourced rating systems, and draw on social science...

10.1002/poi3.153 article EN Policy & Internet 2017-06-28

Algorithms have developed into somewhat of a modern myth. They "compet[e] for our living rooms" (Slavin 2011), "determine how a billion plus people get to where they're going" (McGee 2011), "have already written symphonies as moving as those composed by Beethoven" (Steiner 2012), and "free us from sorting through multitudes of irrelevant results" (Spring 2011). Nevertheless, the nature and implications of such orderings are far from clear. What exactly is it that algorithms "do"? What is the role attributed to "algorithms" in these...

10.2139/ssrn.2245322 article EN SSRN Electronic Journal 2013-01-01

Recognizing the inherent limitations of consent and anonymity.

10.1145/2668897 article EN Communications of the ACM 2014-10-27

Counterfactual explanations are gaining prominence within technical, legal, and business circles as a way to explain the decisions of a machine learning model. These explanations share a trait with the long-established "principal reason" explanations required by U.S. credit laws: they both explain a decision by highlighting a set of features deemed most relevant and withholding others. These "feature-highlighting explanations" have several desirable properties: They place no constraints on model complexity, do not require model disclosure, detail what...

10.1145/3351095.3372830 preprint EN 2020-01-23

Recent scholarship has brought attention to the fact that there often exist multiple models for a given prediction task with equal accuracy that differ in their individual-level predictions or aggregate properties. This phenomenon, which we call model multiplicity, can introduce a good deal of flexibility into the model selection process, creating a range of exciting opportunities. By demonstrating that there are many different ways of making equally accurate predictions, multiplicity gives model developers the freedom to prioritize other...

10.1145/3531146.3533149 article EN 2022 ACM Conference on Fairness, Accountability, and Transparency 2022-06-20

We formalize predictive optimization, a category of decision-making algorithms that use machine learning (ML) to predict future outcomes of interest about individuals. For example, pre-trial risk prediction tools such as COMPAS use ML to predict whether an individual will re-offend in the future. Our thesis is that predictive optimization raises a distinctive and serious set of normative concerns that cause it to fail on its own terms. To test this, we review 387 reports, articles, and web pages from academia, industry, non-profits, governments,...

10.1145/3636509 article EN other-oa ACM Journal on Responsible Computing 2023-12-13

Variance in predictions across different trained models is a significant, under-explored source of error in fair binary classification. In practice, the variance on some data examples is so large that decisions can be effectively arbitrary. To investigate this problem, we take an experimental approach and make four overarching contributions. We: 1) Define a metric called self-consistency, derived from variance, which we use as a proxy for measuring and reducing arbitrariness; 2) Develop an ensembling algorithm...

10.1609/aaai.v38i20.30203 article EN Proceedings of the AAAI Conference on Artificial Intelligence 2024-03-24
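The variance phenomenon the abstract describes is easy to reproduce: train many models on bootstrap resamples of the same data and watch their predictions disagree on individual examples. The sketch below is a minimal illustration of that idea, not the paper's actual algorithm; the threshold-stump "model", the toy data, and the plug-in agreement estimate are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_stump(X, y):
    """'Train' a one-feature threshold stump: predict 1 if x exceeds the
    midpoint between the two class means (a deliberately simple model)."""
    t = (X[y == 1].mean() + X[y == 0].mean()) / 2
    return lambda X_new: (X_new > t).astype(int)

# Toy 1-D data with overlapping classes, so bootstrap replicates disagree.
X = rng.normal(loc=np.repeat([0.0, 1.0], 50), scale=1.0)
y = np.repeat([0, 1], 50)

# Train B models on bootstrap resamples and collect their predictions.
B = 200
preds = np.empty((B, len(X)), dtype=int)
for b in range(B):
    idx = rng.integers(0, len(X), size=len(X))
    preds[b] = train_stump(X[idx], y[idx])(X)

# Plug-in estimate of per-example agreement: the probability that two
# models drawn at random assign this example the same label.
p1 = preds.mean(axis=0)
agreement = p1**2 + (1 - p1)**2

print(agreement.min(), agreement.max())
```

Examples near the decision boundary show agreement close to 0.5, i.e., two independently trained models are nearly as likely to disagree as to agree on them, which is the effectively arbitrary regime the paper studies.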

Seeking more common ground between data scientists and their critics.

10.1145/3144172 article EN Communications of the ACM 2017-10-24

Designing technical systems to be resistant to bias and discrimination represents vital new terrain for researchers, policymakers, and the anti-discrimination project more broadly. We consider this project in the context of popular online dating and hookup platforms in the United States, which we call intimate platforms. Drawing on work in social-justice-oriented and Queer HCI, we review platform design features and their potential role in exacerbating or mitigating interpersonal bias. We argue that focusing on platform design can reveal opportunities to reshape...

10.1145/3274342 article EN Proceedings of the ACM on Human-Computer Interaction 2018-11-01