Emanuel Moss

ORCID: 0000-0002-3850-2677
Research Areas
  • Ethics and Social Impacts of AI
  • Innovative Human-Technology Interaction
  • Neuroethics, Human Enhancement, Biomedical Innovations
  • Mobile Crowdsensing and Crowdsourcing
  • Artificial Intelligence in Healthcare and Education
  • Law, AI, and Intellectual Property
  • Blockchain Technology Applications and Security
  • 3D Surveying and Cultural Heritage
  • Ethics in Business and Education
  • Adversarial Robustness in Machine Learning
  • Explainable Artificial Intelligence (XAI)
  • Virtual Reality Applications and Impacts
  • Information Systems Theories and Implementation
  • Archaeological Research and Protection
  • Education, Law, and Society
  • Migration, Ethnicity, and Economy
  • Balkans: History, Politics, Society
  • Computational and Text Analysis Methods
  • Conservation Techniques and Studies
  • Corporate Social Responsibility Reporting
  • COVID-19 Digital Contact Tracing
  • Focus Groups and Qualitative Methods
  • Image Processing and 3D Reconstruction
  • Solar Cell Performance Optimization
  • Cultural Heritage Management and Preservation

Intel (United States)
2022-2024

Data & Society Research Institute
2019-2023

Cornell University
2022-2023

Digital Science (United States)
2022

The Graduate Center, CUNY
2015-2021

City University of New York
2020

Owning Ethics: Corporate Logics, Silicon Valley, and the Institutionalization of Ethics. Jacob Metcalf, Emanuel Moss, and danah boyd. Ethics is arguably the hottest product in Silicon Valley's hype cycle today, even as headlines decrying a lack of ethics at technology companies accumulate. After years of largely fruitless outside pressure to consider the consequences of digital products, the very recent past has seen a spike in the assignment of corporate resources in Silicon Valley to ethics, including the hiring of staff for roles we identify...

10.1353/sor.2019.0022 article EN Social Research: An International Quarterly 2019-06-01

This paper critiques popular modes of participation in design practice and machine learning. It examines three existing kinds of participation – participation as work, participation as consultation, and participation as justice – to argue that the machine learning community must become attuned to possibly exploitative and extractive forms of community involvement and shift away from the prerogatives of context-independent scalability. Cautioning against "participation washing", it argues that the notion of "participation" should be expanded to acknowledge more subtle and exploitative forms of participatory design....

10.1145/3551624.3555285 article EN 2022-10-06

This article modifies an old archaeological adage, "excavation is destruction," to demonstrate how advances in practice suggest a new iteration: "excavation is digitization." Digitization, a fully digital paradigm, refers to practices that leverage onsite, image-based modeling and volumetric recording, integrated databases, and data sharing. Such practices were implemented in 2014 during the inaugural season of the Kaymakçı Archaeological Project (KAP) in western Turkey. The KAP recording system, developed from inception...

10.1179/2042458215y.0000000004 article EN cc-by-nc Journal of Field Archaeology 2015-05-29

Algorithmic impact assessments (AIAs) are an emergent form of accountability for organizations that build and deploy automated decision-support systems. They are modeled after impact assessments in other domains. Our study of the history of impact assessments shows that "impacts" are an evaluative construct that enables actors to identify and ameliorate harms experienced because of a policy decision or system. Every domain has different expectations and norms around what constitutes impacts and harms, how potential harms are rendered as the impacts of a particular undertaking, and who is responsible...

10.1145/3442188.3445935 article EN 2021-03-01

In 1996, Accountability in a Computerized Society [95] issued a clarion call concerning the erosion of accountability in society due to the ubiquitous delegation of consequential functions to computerized systems. Nissenbaum described four barriers to accountability that computerization presented, which we revisit in relation to the ascendance of data-driven algorithmic systems (i.e., machine learning or artificial intelligence) to uncover new challenges for accountability that these systems present. Nissenbaum's original paper grounded its discussion of moral...

10.1145/3531146.3533150 article EN 2022 ACM Conference on Fairness, Accountability, and Transparency 2022-06-20

10.1038/s42256-019-0084-6 article EN Nature Machine Intelligence 2019-08-09

The Algorithmic Impact Assessment is a new concept for regulating algorithmic systems and protecting the public interest. Assembling Accountability: Algorithmic Impact Assessment for the Public Interest is a report that maps the challenges of constructing algorithmic impact assessments (AIAs) and provides a framework for evaluating the effectiveness of current and proposed AIA regimes. It is a practical tool for regulators, advocates, public-interest technologists, technology companies, and critical scholars who are identifying, assessing, and acting upon algorithmic harms. First, authors Emanuel...

10.2139/ssrn.3877437 article EN SSRN Electronic Journal 2021-01-01

Recent research has explored computational tools to manage workplace stress via personal sensing, a measurement paradigm in which behavioral data streams are collected from technologies including smartphones, wearables, and computers. As these tools develop, they invite inquiry into how they can be appropriately implemented toward improving workers' well-being. In this study, we explore this proposition through formative interviews followed by a design provocation centered around measuring burnout among U.S. resident...

10.1145/3555531 article EN Proceedings of the ACM on Human-Computer Interaction 2022-11-07

Industrial, academic, activist, and policy research and advocacy movements formed around resisting 'machine bias' and promoting 'ethical AI' and 'fair ML' have discursive implications for what constitutes harm, what resistance to algorithmic influence itself means, and are deeply connected to which actors make epistemic claims about harm and resistance. We present a loose categorization of kinds of resistance to algorithmic systems: the dominant mode, 'filtering up', in which resistance is translated into design fixes by Big Tech; scholarship that brings a critical frame...

10.1177/1329878x221076288 article EN cc-by-nc Media International Australia 2022-02-07

While natural language processing affords researchers an opportunity to automatically scan millions of social media posts, there is growing concern that automated computational tools lack the ability to understand context and nuance in human communication and language. This article introduces a critical, systematic approach for extracting culture and context from social media data. The Contextual Analysis of Social Media (CASM) approach considers critiques of the gap between the inadequacies of existing tools and the differences in geographic, cultural, and age-related variance of social media use...

10.1145/3375627.3375841 article EN 2020-02-04

In The Order of Things (1966), Michel Foucault builds an opening meditation on Diego Velázquez's 17th-century painting Las Meninas into a sweeping exploration of the changing relationship between t...

10.1080/17530350.2021.1882539 article EN Journal of Cultural Economy 2021-02-18

This paper critically examines existing modes of participation in design practice and machine learning. Cautioning against 'participation-washing', it suggests that the ML community must become attuned to possibly exploitative and extractive forms of community involvement and shift away from the prerogatives of context-independent scalability.

10.48550/arxiv.2007.02423 preprint EN cc-by arXiv (Cornell University) 2020-01-01

The introduction of a new generation of AI systems has kicked off another wave of hype. Now that these systems have added the ability to produce content to their predictive capabilities, extreme excitement about their alleged capabilities and opportunities is matched only by long-held fears of job loss and machine control. We typically understand the dynamics of hype to be something that happens to us, but in this commentary, we propose to flip the script. We suggest hype is not a social fact, but a widely shared practice. We outline some negative implications...

10.1007/s43681-024-00481-y article EN cc-by AI and Ethics 2024-04-24

Frequent public uproar over forms of data science that rely on information about people demonstrates the challenges of defining and demonstrating trustworthy digital research practices. This paper reviews problems of trustworthiness in what we term pervasive data research: scholarship that relies on rich data generated through digital interaction. We highlight the entwined problems of participant unawareness of such research and its relationship to corporate datafication and surveillance. We suggest a way forward by drawing from the history of different methodological...

10.1177/20539517211040759 article EN Big Data & Society 2021-07-01

This paper introduces v0.5 of the AI Safety Benchmark, which has been created by the MLCommons AI Safety Working Group. The AI Safety Benchmark is designed to assess the safety risks of AI systems that use chat-tuned language models. We introduce a principled approach to specifying and constructing the benchmark, which for v0.5 covers only a single use case (an adult chatting to a general-purpose assistant in English) and a limited set of personas (i.e., typical users, malicious users, and vulnerable users). We created a new taxonomy of 13 hazard categories, of which 7 have tests in the v0.5 benchmark. We plan...

10.48550/arxiv.2404.12241 preprint EN arXiv (Cornell University) 2024-04-18

In widely used sociological descriptions of how accountability is structured through institutions, an "actor" (e.g., the developer) is accountable to a "forum" (e.g., regulatory agencies) empowered to pass judgements on and demand changes from the actor or enforce sanctions. However, questions about structuring accountability persist: why is a forum compelled to keep making demands of the actor when such demands are called for? To whom is a forum accountable in the performance of its responsibilities, and how can its practices and decisions be contested? In the context of algorithmic accountability, we...

10.1145/3593013.3594092 article EN cc-by-nc-nd 2023 ACM Conference on Fairness, Accountability, and Transparency 2023-06-12

Motion capture systems, used across various domains, make body representations concrete through technical processes. We argue that the measurement of bodies and the validation of measurements for motion capture systems can be understood as social practices. By analyzing the findings of a systematic literature review (N=278) through the lens of practice theory, we show how these practices, and their varying attention to errors, become ingrained in motion capture design and innovation over time. Moreover, contemporary motion capture systems perpetuate assumptions about human...

10.1145/3613904.3642004 article EN cc-by-nd 2024-05-11

Algorithmic decision-making and decision-support systems (ADS) are gaining influence over how society distributes resources, administers justice, and provides access to opportunities. Yet collectively we do not adequately study how these systems affect people or document the actual or potential harms resulting from their integration with important social functions. This is a significant challenge for computational justice efforts of measuring and governing AI systems. Impact assessments are often used as instruments...

10.1145/3461702.3462580 article EN 2021-07-21

Algorithmic impact assessments (AIA) are increasingly being proposed as a mechanism for algorithmic accountability. These assessments are seen as potentially useful for anticipating, avoiding, and mitigating the negative consequences of algorithmic decision-making systems (ADS). At the same time, what an AIA would entail remains under-specified. While promising, AIAs raise as many questions as they answer. Choices about the methods, scope, and purpose of an AIA structure its possible governance outcomes. Decisions about what type of effects count as an impact, and when impacts...

10.2139/ssrn.3584818 article EN SSRN Electronic Journal 2020-01-01