Pratyusha Kalluri

ORCID: 0000-0001-7202-8027
Research Areas
  • Ethics and Social Impacts of AI
  • Explainable Artificial Intelligence (XAI)
  • Adversarial Robustness in Machine Learning
  • Media Influence and Politics
  • Artificial Intelligence in Healthcare and Education
  • Topic Modeling
  • Domain Adaptation and Few-Shot Learning
  • Cognitive Science and Mapping
  • Multimodal Machine Learning Applications
  • Complex Systems and Decision Making
  • Online Learning and Analytics
  • Hate Speech and Cyberbullying Detection
  • Computational and Text Analysis Methods
  • Natural Language Processing Techniques
  • Big Data and Business Intelligence
  • Artificial Intelligence in Law
  • Social Media and Politics
  • Advanced Image and Video Retrieval Techniques
  • Privacy-Preserving Technologies in Data
  • Software Engineering Research

Stanford University
2018-2024

IIT@MIT
2017

AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks. We call these models foundation models to underscore their critically central yet incomplete character. This report provides a thorough account of the opportunities and risks of foundation models, ranging from their capabilities (e.g., language, vision, robotics, reasoning, human interaction) to technical principles (e.g., model architectures, training procedures, data, systems, ...

10.48550/arxiv.2108.07258 preprint EN cc-by arXiv (Cornell University) 2021-01-01

Machine learning currently exerts an outsized influence on the world, increasingly affecting institutional practices and impacted communities. It is therefore critical that we question vague conceptions of the field as value-neutral or universally beneficial, and investigate what specific values the field is advancing. In this paper, we first introduce a method and annotation scheme for studying the values encoded in documents such as research papers. Applying the scheme, we analyze 100 highly cited machine learning papers published at premier...

10.1145/3531146.3533083 article EN 2022 ACM Conference on Fairness, Accountability, and Transparency 2022-06-20

Machine learning models that convert user-written text descriptions into images are now widely available online and used by millions of users to generate millions of images a day. We investigate the potential for these models to amplify dangerous and complex stereotypes. We find that a broad range of ordinary prompts produce stereotypes, including prompts simply mentioning traits, descriptors, occupations, or objects. For example, we find cases of prompting for basic traits or social roles resulting in images reinforcing whiteness as ideal, and prompting for occupations resulting in amplification...

10.1145/3593013.3594095 article EN 2022 ACM Conference on Fairness, Accountability, and Transparency 2023-06-12

Hundreds of millions of people now interact with language models, with uses ranging from help with writing to informing hiring decisions. Yet these language models are known to perpetuate systematic racial prejudices, making their judgments biased in problematic ways about groups such as African Americans. While prior research has focused on overt racism in language models, social scientists have argued that racism with a more subtle character has developed over time. It is unknown whether this covert racism manifests in language models. Here, we demonstrate...

10.1038/s41586-024-07856-5 article EN cc-by Nature 2024-08-28


10.48550/arxiv.2403.00742 preprint EN arXiv (Cornell University) 2024-03-01

A rapidly growing number of voices argue that AI research, and computer vision in particular, is powering mass surveillance. Yet the direct path from research to surveillance has remained obscured and difficult to assess. Here, we reveal the Surveillance AI pipeline by analyzing three decades of computer vision papers and downstream patents, more than 40,000 documents. We find that the large majority of annotated papers and patents self-report that their technology enables extracting data about humans. Moreover, these technologies specifically enable...

10.48550/arxiv.2309.15084 preprint EN cc-by arXiv (Cornell University) 2023-01-01

Despite the success of large vision and language models (VLMs) in many downstream applications, it is unclear how well they encode compositional information. Here, we create the Attribution, Relation, and Order (ARO) benchmark to systematically evaluate the ability of VLMs to understand different types of relationships, attributes, and order. ARO consists of Visual Genome Attribution, to test the understanding of objects' properties; Visual Genome Relation, to test relational understanding; and COCO & Flickr30k-Order, to test order sensitivity. ARO is orders of magnitude larger than...

10.48550/arxiv.2210.01936 preprint EN cc-by arXiv (Cornell University) 2022-01-01

Learning data representations that are transferable and fair with respect to certain protected attributes is crucial to reducing unfair decisions while preserving the utility of the data. We propose an information-theoretically motivated objective for learning maximally expressive representations subject to fairness constraints. We demonstrate that a range of existing approaches optimize approximations to the Lagrangian dual of our objective. In contrast to these approaches, our objective allows the user to control the fairness of the representation by specifying limits on unfairness. Exploiting...

10.48550/arxiv.1812.04218 preprint EN other-oa arXiv (Cornell University) 2018-01-01
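The constrained objective described in the abstract above can be sketched as a mutual-information problem; note the notation here is an assumption, not taken from the profile (x is the data, z the learned representation, u the protected attribute, and ε the user-specified limit on unfairness):

```latex
% Learn a maximally expressive representation z of x, given u,
% subject to a user-controlled fairness constraint:
\max_{q(z \mid x, u)} \; I(x; z \mid u)
\quad \text{subject to} \quad I(z; u) \le \epsilon
% The Lagrangian dual, approximations of which prior approaches optimize:
\mathcal{L}(q, \lambda) \;=\; I(x; z \mid u) \;-\; \lambda \bigl( I(z; u) - \epsilon \bigr),
\qquad \lambda \ge 0
```

Specifying ε directly is what gives the user control over unfairness, in contrast to approaches that fix the multiplier λ a priori.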

Are members of marginalized communities silenced on social media when they share personal experiences of racism? Here, we investigate the role of algorithms, humans, and platform guidelines in suppressing disclosures of racial discrimination. In a field study of actual posts from a neighborhood-based social media platform, we find that when users talk about their experiences as targets of racism, their posts are disproportionately flagged for removal as toxic by five widely used moderation algorithms from major online platforms, including the most recent large...

10.1073/pnas.2322764121 article EN cc-by-nc-nd Proceedings of the National Academy of Sciences 2024-09-09


10.48550/arxiv.2211.03759 preprint EN other-oa arXiv (Cornell University) 2022-01-01


10.48550/arxiv.2106.15590 preprint EN cc-by-nc-sa arXiv (Cornell University) 2021-01-01

10.5220/0006205506400647 article EN cc-by-nc-nd Proceedings of the 14th International Conference on Agents and Artificial Intelligence 2017-01-01