Joshua B. Tenenbaum

ORCID: 0000-0002-1925-2035
Research Areas
  • Child and Animal Learning Development
  • Bayesian Modeling and Causal Inference
  • Multimodal Machine Learning Applications
  • Topic Modeling
  • Reinforcement Learning in Robotics
  • Human Pose and Action Recognition
  • Language and Cultural Evolution
  • AI-based Problem Solving and Planning
  • Natural Language Processing Techniques
  • Explainable Artificial Intelligence (XAI)
  • Generative Adversarial Networks and Image Synthesis
  • Domain Adaptation and Few-Shot Learning
  • Advanced Vision and Imaging
  • Decision-Making and Behavioral Economics
  • 3D Shape Modeling and Analysis
  • Neural Networks and Applications
  • Machine Learning and Algorithms
  • Robot Manipulation and Learning
  • Advanced Image and Video Retrieval Techniques
  • Psychology of Moral and Emotional Judgment
  • Evolutionary Algorithms and Applications
  • Advanced Text Analysis Techniques
  • Evolutionary Game Theory and Cooperation
  • Cognitive Science and Mapping
  • Action Observation and Synchronization

Massachusetts Institute of Technology
2016-2025

Institute of Cognitive and Brain Sciences
2016-2025

IIT@MIT
2008-2024

Harvard University
2021-2024

Mitsubishi Electric (Japan)
2023

Moscow Institute of Thermal Technology
2009-2023

Allen Institute for Artificial Intelligence
2022-2023

Johns Hopkins University
2019-2022

Stanford University
1999-2022

University College London
2017-2022

Scientists working with large volumes of high-dimensional data, such as global climate patterns, stellar spectra, or human gene distributions, regularly confront the problem of dimensionality reduction: finding meaningful low-dimensional structures hidden in their high-dimensional observations. The human brain confronts the same problem in everyday perception, extracting from its sensory inputs—30,000 auditory nerve fibers or 10⁶ optic nerve fibers—a manageably small number of perceptually relevant features. Here we describe an approach to...

10.1126/science.290.5500.2319 article EN Science 2000-12-22
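
The manifold-learning approach this abstract introduces (Isomap) is implemented in scikit-learn; the sketch below runs it on a synthetic swiss-roll dataset (the dataset and parameter choices are illustrative, not taken from the paper):

```python
# Isomap sketch with scikit-learn (illustrative parameters).
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

X, _ = make_swiss_roll(n_samples=1000, random_state=0)  # 3-D points on a 2-D manifold
embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
print(embedding.shape)  # (1000, 2)
```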

Handwritten characters drawn by a model. Not only do children learn effortlessly, they do so quickly and with a remarkable ability to use what they have learned as the raw material for creating new stuff. Lake et al. describe a computational model that learns in a similar fashion and does better than current deep learning algorithms. The model classifies, parses, and recreates handwritten characters, and can generate new letters of the alphabet that look "right" as judged by Turing-like tests of the model's output in comparison to what real humans produce. Science,...

10.1126/science.aab3050 article EN Science 2015-12-11

Recent progress in artificial intelligence has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats that of humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach...

10.1017/s0140525x16001837 article EN Behavioral and Brain Sciences 2016-11-24

We study the problem of 3D object generation. We propose a novel framework, namely the 3D Generative Adversarial Network (3D-GAN), which generates 3D objects from a probabilistic space by leveraging recent advances in volumetric convolutional networks and generative adversarial nets. The benefits of our model are three-fold: first, the use of an adversarial criterion, instead of traditional heuristic criteria, enables the generator to capture object structure implicitly and to synthesize high-quality 3D objects; second, the generator establishes a mapping...

10.48550/arxiv.1610.07584 preprint EN other-oa arXiv (Cornell University) 2016-01-01

We present statistical analyses of the large-scale structure of 3 types of semantic networks: word associations, WordNet, and Roget's Thesaurus. We show that they have a small-world structure, characterized by sparse connectivity, short average path lengths between words, and strong local clustering. In addition, the distributions of the number of connections follow power laws that indicate a scale-free pattern of connectivity, with most nodes having relatively few connections joined together through a small number of hubs with many connections. These regularities have also...

10.1207/s15516709cog2901_3 article EN Cognitive Science 2005-01-01
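
The small-world and scale-free statistics described above can be checked on a synthetic preferential-attachment graph with networkx (the graph size and parameters are illustrative assumptions, not the paper's data):

```python
# Small-world / scale-free statistics on a preferential-attachment graph (illustrative sizes).
import networkx as nx

G = nx.barabasi_albert_graph(n=1000, m=3, seed=0)  # preferential attachment yields power-law degrees
print(nx.average_shortest_path_length(G))  # short average path length
print(nx.average_clustering(G))            # local clustering
degrees = [d for _, d in G.degree()]
print(min(degrees), max(degrees))          # most nodes sparse, a few highly connected hubs
```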

Processing language requires the retrieval of concepts from memory in response to an ongoing stream of information. This retrieval is facilitated if one can infer the gist of a sentence, conversation, or document and use that gist to predict related concepts and disambiguate words. This article analyzes the abstract computational problem underlying the extraction and use of gist, formulating this problem as a rational statistical inference. This analysis leads to a novel approach to semantic representation in which word meanings are represented in terms of a set of probabilistic topics. The topic...

10.1037/0033-295x.114.2.211 article EN Psychological Review 2007-01-01
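
The topic-based representation sketched above is closely related to Latent Dirichlet Allocation; a minimal sketch with scikit-learn on a toy corpus (the corpus and topic count are illustrative assumptions):

```python
# Toy probabilistic-topics sketch with scikit-learn's LatentDirichletAllocation.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["the bank approved the loan",
        "the river bank flooded",
        "interest rates and loans",
        "fish swim near the river"]
X = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
doc_topics = lda.transform(X)  # each document becomes a distribution over 2 topics
print(doc_topics.shape)  # (4, 2)
```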

The authors present a Bayesian framework for understanding how adults and children learn the meanings of words. The theory explains how learners can generalize meaningfully from just one or a few positive examples of a novel word's referents, by making rational inductive inferences that integrate prior knowledge about plausible word meanings with the statistical structure of the observed examples. The theory addresses shortcomings of the two best-known approaches to modeling word learning, based on deductive hypothesis elimination and associative...

10.1037/0033-295x.114.2.245 article EN Psychological Review 2007-01-01

Perceptual events derive their significance to an animal from their meaning about the world, that is, from the information they carry about their causes. The brain should thus be able to efficiently infer the causes underlying our sensory events. Here we use multisensory cue combination to study causal inference in perception. We formulate an ideal-observer model that infers whether two sensory cues originate from the same location and that also estimates their location(s). This model accurately predicts the nonlinear integration of cues by human subjects in auditory-visual...

10.1371/journal.pone.0000943 article EN cc-by PLoS ONE 2007-09-26
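
A minimal numerical sketch of the ideal-observer computation described above, comparing a common-cause and an independent-causes account of two Gaussian cues (all variances and the prior are illustrative assumptions, not the paper's fitted values):

```python
# Ideal-observer sketch: did one common cause or two independent causes generate two cues?
# Gaussian generative model; all parameter values are illustrative assumptions.
import math

def gauss(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def p_common(xa, xv, sa2=1.0, sv2=1.0, sp2=100.0, prior=0.5):
    # C=1: a single source s ~ N(0, sp2) generates both cues (integrated out analytically).
    denom = sa2 * sv2 + sa2 * sp2 + sv2 * sp2
    like1 = math.exp(-0.5 * ((xa - xv) ** 2 * sp2 + xa ** 2 * sv2 + xv ** 2 * sa2) / denom) \
            / (2 * math.pi * math.sqrt(denom))
    # C=2: independent sources generate each cue.
    like2 = gauss(xa, 0.0, sa2 + sp2) * gauss(xv, 0.0, sv2 + sp2)
    return prior * like1 / (prior * like1 + (1 - prior) * like2)

print(p_common(0.2, 0.1))   # nearby cues: common cause likely
print(p_common(5.0, -5.0))  # distant cues: separate causes likely
```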

Perceptual systems routinely separate "content" from "style," classifying familiar words spoken in an unfamiliar accent, identifying a font or handwriting style across letters, or recognizing a familiar face or object seen under unfamiliar viewing conditions. Yet a general and tractable computational model of this ability to untangle the underlying factors of perceptual observations remains elusive (Hofstadter, 1985). Existing factor models (Mardia, Kent, & Bibby, 1979; Hinton & Zemel, 1994; Ghahramani, 1995; Bell & Sejnowski,...

10.1162/089976600300015349 article EN Neural Computation 2000-06-01

Shepard has argued that a universal law should govern generalization across different domains of perception and cognition, as well as across organisms from different species or even different planets. Starting with some basic assumptions about natural kinds, he derived an exponential decay function as the form of the generalization gradient, which accords strikingly well with a wide range of empirical data. However, his original formulation applied only to the ideal case of generalization from a single encountered stimulus to a single novel stimulus, and for stimuli that can be represented as points in...

10.1017/s0140525x01000061 article EN Behavioral and Brain Sciences 2001-08-01
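
The Bayesian account of generalization can be illustrated with a toy number-line version of the model: hypotheses are intervals, and the likelihood favors smaller hypotheses (the "size principle"). The hypothesis space and examples below are illustrative assumptions:

```python
# Toy Bayesian generalization on a number line: hypotheses are intervals [lo, hi],
# and a likelihood of 1/|h| per example (the "size principle") favors small hypotheses.
hypotheses = [(lo, hi) for lo in range(0, 10) for hi in range(lo + 1, 11)]
examples = [4.0, 5.0]

def likelihood(h, xs):
    lo, hi = h
    if any(not (lo <= x <= hi) for x in xs):
        return 0.0
    return (1.0 / (hi - lo)) ** len(xs)

posterior = {h: likelihood(h, examples) for h in hypotheses}
Z = sum(posterior.values())
posterior = {h: p / Z for h, p in posterior.items()}

def p_generalize(y):
    # Probability that a new point y falls under the concept, averaged over hypotheses.
    return sum(p for (lo, hi), p in posterior.items() if lo <= y <= hi)

print(p_generalize(4.5), p_generalize(9.5))  # generalization decays away from the examples
```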

Algorithms for finding structure in data have become increasingly important both as tools for scientific data analysis and as models of human learning, yet they suffer from a critical limitation. Scientists discover qualitatively new forms of structure in observed data: For instance, Linnaeus recognized the hierarchical organization of biological species, and Mendeleev recognized the periodic structure of the chemical elements. Analogous insights play a pivotal role in cognitive development: Children discover that object category labels can be organized into hierarchies,...

10.1073/pnas.0802631105 article EN Proceedings of the National Academy of Sciences 2008-08-01

In few-shot classification, we are interested in learning algorithms that train a classifier from only a handful of labeled examples. Recent progress in few-shot classification has featured meta-learning, in which a parameterized model for a learning algorithm is defined and trained on episodes representing different classification problems, each with a small labeled training set and its corresponding test set. In this work, we advance this few-shot classification paradigm towards a scenario where unlabeled examples are also available within each episode. We consider two situations: one where all...

10.48550/arxiv.1803.00676 preprint EN other-oa arXiv (Cornell University) 2018-01-01
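
The episodic few-shot setup described above can be illustrated with the simplest prototype-based classifier, here on raw synthetic features rather than learned embeddings (the data and episode shape are illustrative assumptions):

```python
# Prototype-based few-shot episode on synthetic features (3-way, 5-shot; illustrative data).
import numpy as np

rng = np.random.default_rng(0)
means = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])               # one mean per class
support = np.stack([m + rng.normal(0, 0.5, (5, 2)) for m in means])  # labeled support set (3, 5, 2)
prototypes = support.mean(axis=1)                                    # per-class prototypes (3, 2)

query = means[1] + rng.normal(0, 0.5, 2)            # query point drawn from class 1
dists = np.linalg.norm(prototypes - query, axis=1)
print(int(dists.argmin()))                          # nearest prototype gives the predicted class
```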

10.1016/j.cogpsych.2005.05.004 article EN Cognitive Psychology 2005-10-06

In a glance, we can perceive whether a stack of dishes will topple, a branch will support a child's weight, a grocery bag is poorly packed and liable to tear or crush its contents, or a tool is firmly attached to a table or free to be lifted. Such rapid physical inferences are central to how people interact with the world and with each other, yet their computational underpinnings are poorly understood. We propose a model based on an "intuitive physics engine," a cognitive mechanism similar to computer engines that simulate rich physics in video games and graphics,...

10.1073/pnas.1306572110 article EN Proceedings of the National Academy of Sciences 2013-10-21
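
The "noisy simulation" idea behind an intuitive physics engine can be illustrated with a toy Monte Carlo stability judgment; the geometry and noise model below are illustrative assumptions, not the paper's engine:

```python
# Monte Carlo stability judgment: probability that a block's noisily perceived center of
# mass lies past the table edge. Geometry and noise level are illustrative assumptions.
import random

def p_topple(com_offset, edge=0.0, noise=0.2, n=10000, seed=0):
    rng = random.Random(seed)
    falls = sum(1 for _ in range(n) if com_offset + rng.gauss(0, noise) > edge)
    return falls / n

print(p_topple(-0.3))  # center of mass well inside the table: low topple probability
print(p_topple(0.1))   # center of mass past the edge: high topple probability
```

Averaging many noisy simulations yields graded judgments, matching the intuition that stability is perceived probabilistically rather than as a binary fact.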

Human perception and memory are often explained as optimal statistical inferences that are informed by accurate prior probabilities. In contrast, cognitive judgments are usually viewed as following error-prone heuristics that are insensitive to priors. We examined the optimality of human cognition in a more realistic context than typical laboratory studies, asking people to make predictions about the duration or extent of everyday phenomena such as human life spans and the box-office take of movies. Our results suggest that everyday cognitive judgments follow the same...

10.1111/j.1467-9280.2006.01780.x article EN Psychological Science 2006-09-01
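
The kind of everyday prediction studied above can be sketched as Bayesian inference over total durations: given that a process has lasted t so far, combine a prior over totals with the likelihood of observing t, and report the posterior median. The power-law prior and grid below are illustrative assumptions:

```python
# Bayesian prediction of a total duration T from an observed elapsed time t:
# posterior p(T | t) is proportional to (1/T) * prior(T) for T >= t; report the posterior median.
def predicted_total(t, prior, totals):
    weights = [(T, prior(T) / T) for T in totals if T >= t]
    Z = sum(w for _, w in weights)
    acc = 0.0
    for T, w in weights:
        acc += w / Z
        if acc >= 0.5:
            return T  # posterior median on the grid
    return weights[-1][0]

totals = [0.5 * k for k in range(1, 401)]     # grid of candidate totals up to 200
powerlaw = lambda T: T ** -2.0                # heavy-tailed prior (e.g., movie grosses)
print(predicted_total(10, powerlaw, totals))  # predicted total is a bit above the elapsed time
```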