- Literature and Cultural Memory
- Ethics and Social Impacts of AI
- Memory and Neural Mechanisms
- German Literature and Culture Studies
- Neural dynamics and brain function
- Topic Modeling
- Natural Language Processing Techniques
- Functional Brain Connectivity Studies
- Explainable Artificial Intelligence (XAI)
- Neural and Behavioral Psychology Studies
- Adversarial Robustness in Machine Learning
- Memory Processes and Influences
- Face Recognition and Perception
- Speech and dialogue systems
- Linguistic research and analysis
- German History and Society
- Translation Studies and Practices
- European Cultural and National Identity
- Multimodal Machine Learning Applications
- Hate Speech and Cyberbullying Detection
- Digital Economy and Work Transformation
- Visual Attention and Saliency Detection
- Modernist Literature and Criticism
- Poetry Analysis and Criticism
- Generative Adversarial Networks and Image Synthesis
University of Oregon
2018-2025
University of Kent
2008-2023
Google (United States)
2007-2021
Northeastern University
2018
Princeton University
2012-2016
Stanford University
2009-2015
Stratford University
2009
University of Edinburgh
2003-2005
St. John's College
2005
Large language models have been shown to achieve remarkable performance across a variety of natural language tasks using few-shot learning, which drastically reduces the number of task-specific training examples needed to adapt the model to a particular application. To further our understanding of the impact of scale on few-shot learning, we trained a 540-billion parameter, densely activated, Transformer language model, which we call Pathways Language Model (PaLM). We trained PaLM on 6144 TPU v4 chips using Pathways, a new ML system which enables highly efficient training across multiple TPU Pods. We demonstrate...
Trained machine learning models are increasingly used to perform high-impact tasks in areas such as law enforcement, medicine, education, and employment. In order to clarify the intended use cases of machine learning models and minimize their usage in contexts for which they are not well suited, we recommend that released models be accompanied by documentation detailing their performance characteristics. In this paper, we propose a framework that we call model cards, to encourage such transparent model reporting. Model cards are short documents accompanying trained machine learning models that provide...
We present LaMDA: Language Models for Dialog Applications. LaMDA is a family of Transformer-based neural language models specialized for dialog, which have up to 137B parameters and are pre-trained on 1.56T words of public dialog data and web text. While model scaling alone can improve quality, it shows less improvement on safety and factual grounding. We demonstrate that fine-tuning with annotated data and enabling the model to consult external knowledge sources can lead to significant improvements towards these two key challenges. The first challenge,...
We present the Pathways Autoregressive Text-to-Image (Parti) model, which generates high-fidelity photorealistic images and supports content-rich synthesis involving complex compositions and world knowledge. Parti treats text-to-image generation as a sequence-to-sequence modeling problem, akin to machine translation, with sequences of image tokens as the target outputs rather than text tokens in another language. This strategy can naturally tap into the rich body of prior work on large language models, which have seen...
Building equitable and inclusive NLP technologies demands consideration of whether and how social attitudes are represented in ML models. In particular, representations encoded in models often inadvertently perpetuate undesirable social biases from the data on which they are trained. In this paper, we present evidence of such undesirable biases towards mentions of disability in two different English language models: toxicity prediction and sentiment analysis. Next, we demonstrate that the neural embeddings that are a critical first step in most NLP pipelines similarly...
Quantitative definitions of what is unfair and what is fair have been introduced in multiple disciplines for well over 50 years, including in education, hiring, and machine learning. We trace how the notion of fairness has been defined within the testing communities of education and hiring over the past half century, exploring the cultural and social context in which different fairness definitions have emerged. In some cases, earlier definitions of fairness are similar or identical to definitions of fairness in current machine learning research, and foreshadow current formal work. In other cases, insights into what fairness means and how to measure it have largely gone...
Datasets that power machine learning are often used, shared, and reused with little visibility into the processes of deliberation that led to their creation. As artificial intelligence systems are increasingly used in high-stakes tasks, system development and deployment practices must be adapted to address the very real consequences of how model development data is constructed and used in practice. This includes greater transparency about data, and accountability for decisions made when developing it. In this paper, we introduce a rigorous...
The last two decades of neuroscience research have produced a growing number of studies that suggest that various psychological phenomena are produced by predictive processes in the brain. When considered together, these studies form a coherent, neurobiologically inspired program for guiding research about the mind and behavior. In this paper, we briefly consider the common assumptions and hypotheses that unify an emerging framework and discuss its ramifications, both for improving the replicability and robustness of psychological research and for innovating theory by suggesting alternative...
Conventional algorithmic fairness is West-centric, as seen in its subgroups, values, and methods. In this paper, we de-center algorithmic fairness and analyse AI power in India. Based on 36 qualitative interviews and a discourse analysis of algorithmic deployments in India, we find that several assumptions of algorithmic fairness are challenged. We find that data is not always reliable due to socio-economic factors, ML makers appear to follow double standards, and AI evokes unquestioning aspiration. We contend that localising model fairness alone can be window dressing, where the distance between models...
While attention is critical for event memory, debate has arisen regarding the extent to which posterior parietal cortex (PPC) activation during episodic retrieval reflects the engagement of PPC-mediated mechanisms of attention. Here, we directly examined the relationship between attention and retrieval within and across subjects, using functional magnetic resonance imaging and attention-mapping paradigms. During retrieval, 4 functionally dissociable PPC regions were identified. Specifically, 2 regions positively tracked retrieval outcomes: lateral...
It is well established that the formation of memories for life's experiences (episodic memory) is influenced by how we attend to those experiences, yet the neural mechanisms by which attention shapes episodic encoding are still unclear. We investigated how top-down and bottom-up attention contribute to memory encoding of visual objects in humans by manipulating both types of attention during fMRI of memory formation. We show that dorsal parietal cortex, specifically the intraparietal sulcus (IPS), was engaged by top-down attention and was also recruited during the formation of successful memories. By contrast,...
We have designed, implemented and evaluated an end-to-end system for spellchecking and autocorrection that does not require any manually annotated training data. The World Wide Web is used as a large noisy corpus from which we infer knowledge about misspellings and word usage. This is used to build an error model and an n-gram language model. A small secondary set of news texts with artificially inserted misspellings is used to tune confidence classifiers. Because no manual annotation is required, our system can easily be instantiated for new...
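The corpus-driven approach described in this abstract can be sketched in miniature as a noisy-channel corrector: rank edit-distance candidates by a language model learned from corpus counts. This is a toy sketch with an invented mini-corpus and a uniform error model, not the paper's web-scale system; `edits1`, `correct`, and `CORPUS` are illustrative names.

```python
# Minimal noisy-channel spellchecker sketch (toy data; assumptions noted above).
from collections import Counter

# Toy "corpus" standing in for web-derived word-usage counts (assumption).
CORPUS = "the quick brown fox jumps over the lazy dog the fox".split()
COUNTS = Counter(CORPUS)
TOTAL = sum(COUNTS.values())

def edits1(word):
    """All strings at edit distance 1 (deletes, transposes, replaces, inserts)."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [l + r[1:] for l, r in splits if r]
    transposes = [l + r[1] + r[0] + r[2:] for l, r in splits if len(r) > 1]
    replaces = [l + c + r[1:] for l, r in splits if r for c in letters]
    inserts = [l + c + r for l, r in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

def correct(word):
    """Return the most probable in-vocabulary correction for `word`."""
    if word in COUNTS:
        return word
    candidates = [w for w in edits1(word) if w in COUNTS]
    if not candidates:
        return word
    # Uniform error model over edit-distance-1 candidates; rank by language model.
    return max(candidates, key=lambda w: COUNTS[w] / TOTAL)

print(correct("teh"))   # -> the
print(correct("foxx"))  # -> fox
```

The paper's system replaces both toy components: the language model comes from web n-gram counts and the error model is itself inferred from observed web misspellings, with confidence classifiers tuned on synthetic errors.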
The essential role of the medial temporal lobe (MTL) in long-term memory for individual events is well established, yet important questions remain regarding the mnemonic functions of the component structures that constitute the region. Within the hippocampus, recent functional neuroimaging findings suggest that the formation of new memories depends on the dentate gyrus and the CA3 field, whereas the contribution of the subiculum may be limited to retrieval. During encoding, it has been further hypothesized that structures within the MTL cortex contribute...
Vinodkumar Prabhakaran, Ben Hutchinson, Margaret Mitchell. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). 2019.
Rising concern for the societal implications of artificial intelligence systems has inspired a wave of academic and journalistic literature in which deployed systems are audited for harm by investigators from outside the organizations deploying the algorithms. However, it remains challenging for practitioners to identify the harmful repercussions of their own systems prior to deployment, and, once deployed, emergent issues can become difficult or impossible to trace back to their source. In this paper, we introduce a framework for algorithmic auditing...
The ethical concept of fairness has recently been applied in machine learning (ML) settings to describe a wide range of constraints and objectives. When considering the relevance of ethical concepts to subset selection problems, the concepts of diversity and inclusion are additionally applicable in order to create outputs that account for social power and access differentials. We introduce metrics based on these concepts, which can be applied together, separately, or in tandem with additional fairness constraints. Results from human subject experiments lend...
Gradient mapping is an important technique to summarize high-dimensional biological features as low-dimensional manifold representations in exploring brain structure-function relationships at various levels of the cerebral cortex. While recent studies have characterized the major gradients of functional connectivity in several brain structures using this technique, very few have systematically examined the correspondence of such gradients across structures under a common systems-level framework. Using resting-state functional magnetic resonance imaging, here we...
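The gradient-mapping idea in this abstract can be illustrated with a minimal spectral-embedding sketch: build an affinity matrix from connectivity profiles, symmetrically normalize it, and take its leading non-trivial eigenvectors as "gradients". The synthetic data, sizes, and variable names below are assumptions for illustration only, not the study's actual pipeline (which uses real fMRI connectivity and more elaborate diffusion embedding).

```python
# Minimal connectivity-gradient sketch on synthetic data (assumptions noted above).
import numpy as np

rng = np.random.default_rng(0)
# Synthetic connectivity profiles: 50 "vertices" x 200 "targets" (assumption).
profiles = rng.normal(size=(50, 200))

# Affinity: cosine similarity between vertex connectivity profiles, clipped to >= 0.
unit = profiles / np.linalg.norm(profiles, axis=1, keepdims=True)
affinity = np.clip(unit @ unit.T, 0, None)

# Symmetric normalization W_norm = D^{-1/2} W D^{-1/2}, as in spectral embedding.
d = affinity.sum(axis=1)
w_norm = affinity / np.sqrt(np.outer(d, d))

# Eigendecompose the (symmetric) normalized matrix; sort eigenvalues descending
# and drop the trivial leading component, keeping the next three as gradients.
vals, vecs = np.linalg.eigh(w_norm)
order = np.argsort(vals)[::-1]
gradients = vecs[:, order[1:4]]

print(gradients.shape)  # -> (50, 3)
```

Each column of `gradients` orders vertices along one axis of connectivity variation; comparing such axes across structures is the kind of correspondence analysis the abstract describes.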