Chen Yu

ORCID: 0000-0002-4310-1923
Research Areas
  • Child and Animal Learning Development
  • Language Development and Disorders
  • Speech and dialogue systems
  • Hearing Impairment and Communication
  • Language and cultural evolution
  • Reading and Literacy Development
  • Action Observation and Synchronization
  • Gaze Tracking and Assistive Technology
  • Categorization, perception, and language
  • Social Robot Interaction and HRI
  • Spatial Cognition and Navigation
  • Tactile and Sensory Interactions
  • Child Development and Digital Technology
  • Language, Metaphor, and Cognition
  • EFL/ESL Teaching and Learning
  • Advanced Text Analysis Techniques
  • Multimodal Machine Learning Applications
  • Cognitive and developmental aspects of mathematical skills
  • Neural and Behavioral Psychology Studies
  • Autism Spectrum Disorder Research
  • Natural Language Processing Techniques
  • Time Series Analysis and Forecasting
  • Second Language Learning and Teaching
  • Data Visualization and Analytics
  • Face Recognition and Perception

The University of Texas at Austin
2020-2024

Lanzhou University of Technology
2024

University of Edinburgh
2024

Sichuan University
2023

Indiana University Bloomington
2013-2022

Max Planck Society
2020

Indiana University
2009-2019

Shenzhen University
2019

University of Groningen
2019

Macquarie University
2019

There are an infinite number of possible word-to-word pairings in naturalistic learning environments. Previous proposals to solve this mapping problem have focused on linguistic, social, representational, and attentional constraints at a single moment. This article discusses a cross-situational learning strategy based on computing distributional statistics across words, across referents, and, most important, across the co-occurrences of words and referents at multiple moments. We briefly exposed adults to a set of trials that each...

10.1111/j.1467-9280.2007.01915.x article EN Psychological Science 2007-05-01
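Read computationally, the strategy in this abstract amounts to accumulating word-referent co-occurrence counts across individually ambiguous trials. A minimal sketch of that idea, using hypothetical pseudo-words and toy trials rather than the paper's actual stimuli:

```python
from collections import defaultdict

def cross_situational_counts(trials):
    """Accumulate word-referent co-occurrence counts across trials.

    Each trial pairs a set of spoken words with a set of visible
    referents; no single trial reveals which word maps to which
    referent, but counts aggregated across trials do.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for words, referents in trials:
        for w in words:
            for r in referents:
                counts[w][r] += 1
    return counts

def best_referent(counts, word):
    """Guess the referent that co-occurred most often with `word`."""
    return max(counts[word], key=counts[word].get)

# Hypothetical trials: each word reliably co-occurs with its referent,
# while spurious pairings vary from trial to trial.
trials = [
    ({"bosa", "gasser"}, {"DOG", "BALL"}),
    ({"bosa", "manu"},   {"DOG", "CUP"}),
    ({"gasser", "manu"}, {"BALL", "CUP"}),
]
counts = cross_situational_counts(trials)
print(best_referent(counts, "bosa"))  # -> DOG
```

No single trial above disambiguates "bosa", yet the aggregated counts do, which is the core of the cross-situational account.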

10.1016/j.cognition.2012.06.016 article EN Cognition 2012-08-09

The coordination of visual attention among social partners is central to many components of human behavior and development. Previous research has focused on one pathway to the coordination of looking behavior by social partners: gaze following. The extant evidence shows that even very young infants follow the direction of another's gaze, but they do so only in highly constrained spatial contexts, because gaze direction is not a spatially precise cue to the attended target and thus is not easily used in complex interactions. Our findings, derived from moment-to-moment tracking of the eye gaze of one-year-olds...

10.1371/journal.pone.0079659 article EN cc-by PLoS ONE 2013-11-13

We offer a new solution to the unsolved problem of how infants break into word learning, based on the visual statistics of everyday infant-perspective scenes. Images from head camera video captured by 8 1/2 to 10 month-old infants at 147 at-home mealtime events were analysed for the objects in view. The images were found to be highly cluttered, with many different objects in view. However, the frequency distribution of object categories was extremely right skewed, such that a very small set of objects was pervasively present, a fact that may substantially reduce...

10.1098/rstb.2016.0055 article EN Philosophical Transactions of the Royal Society B Biological Sciences 2016-11-22

10.3758/s13423-013-0466-4 article EN Psychonomic Bulletin & Review 2013-06-27

10.1016/j.cub.2016.03.026 article EN publisher-specific-oa Current Biology 2016-04-30

Vocabulary differences early in development are highly predictive of later language learning as well as achievement in school. Early word learning emerges in the context of tightly coupled social interactions between a learner and a mature partner. In the present study, we develop and apply a novel paradigm, dual head-mounted eye tracking, to record momentary gaze data from both parents and infants during free-flowing toy-play contexts. With fine-grained sequential patterns extracted from the continuous gaze streams, we objectively measure joint...

10.1111/desc.12735 article EN Developmental Science 2018-09-26

Parents support and scaffold more mature behaviors in their infants. Recent research suggests that parent-infant joint visual attention may support the development of sustained attention by extending the duration of an infant's attention to an object. The open question concerns which parent behaviors occur within joint-attention episodes and extend infant attention. In the present study, dyads played with objects on a tabletop while eye gaze was recorded with head-mounted eye-trackers. Parent hand contact with objects as well as parent speech were coded and analyzed to identify the presence of touch and talk during bouts...

10.1037/dev0000628 article EN Developmental Psychology 2018-11-29

Human toddlers learn about objects through second-by-second, minute-by-minute sensory-motor interactions. In an effort to understand how toddlers' bodily actions structure the visual learning environment, mini-video cameras were placed low on the foreheads of toddlers, and for comparison also on the foreheads of their parents, as they jointly played with toys. Analyses of the head camera views indicate that the two partners' visual experiences have profoundly different dynamic structures. The toddler view often consists of a single dominating object that is...

10.1111/j.1467-7687.2009.00947.x article EN Developmental Science 2010-01-28

We examine the influence of inferring interlocutors' referential intentions from their body movements at an early stage of lexical acquisition. By testing human participants and comparing their performances in different learning conditions, we find that those embodied intentions facilitate both word discovery and word-meaning association. In light of the empirical findings, the main part of this article presents a computational model that can identify the sound patterns of individual words from continuous speech, using nonlinguistic contextual...

10.1207/s15516709cog0000_40 article EN Cognitive Science 2005-11-12

A key question in early word learning is how children cope with the uncertainty of natural naming events. One potential mechanism for uncertainty reduction is cross-situational learning: tracking word/object co-occurrence statistics across naming events. But empirical and computational analyses of cross-situational learning have made strong assumptions about the nature of naming event ambiguity, assumptions that have been challenged by recent work. This paper shows that learning from ambiguous naming events depends on the learner's perspective. Natural parent–child interactions were recorded from both a third-person...

10.1111/desc.12036 article EN Developmental Science 2013-03-19

The present article shows that infant and dyad differences in hand-eye coordination predict joint attention (JA). In the study reported here, 51 toddlers ranging in age from 11 to 24 months and their parents wore head-mounted eye trackers as they played with objects together. We found that physically active toddlers aligned their looking behavior with that of the parent and achieved a substantial proportion of time spent jointly attending to the same object. However, JA did not arise through gaze following but rather through manual actions on objects by both...

10.1111/cdev.12730 article EN Child Development 2017-02-10

Both adults and young children possess powerful statistical computation capabilities: they can infer the referent of a word from highly ambiguous contexts involving many words and referents by aggregating cross-situational information across contexts. This ability has been explained by models based on hypothesis testing and on associative learning. This article describes a series of simulation studies and analyses designed to understand the different learning mechanisms posited by the 2 classes of models and their relation to each other. Variants...

10.1037/a0026182 article EN Psychological Review 2012-01-01
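The two classes of models contrasted in this article can be caricatured in a few lines. Below is a minimal, illustrative sketch, not the article's actual simulations: an associative learner keeps graded strengths for every co-occurring pair, while a hypothesis-testing learner (in the propose-but-verify spirit) carries a single conjecture per word and replaces it only when it is disconfirmed.

```python
import random

def associative_learner(trials):
    """Associative caricature: maintain graded strengths for all
    word-referent pairs that have ever co-occurred."""
    strength = {}
    for words, referents in trials:
        for w in words:
            for r in referents:
                key = (w, r)
                strength[key] = strength.get(key, 0.0) + 1.0
    return strength

def hypothesis_testing_learner(trials, rng=random.Random(0)):
    """Hypothesis-testing caricature: keep one conjectured referent
    per word, picking a new one only when the current conjecture
    fails to reappear alongside the word."""
    hypothesis = {}
    for words, referents in trials:
        for w in words:
            if w not in hypothesis or hypothesis[w] not in referents:
                hypothesis[w] = rng.choice(sorted(referents))
    return hypothesis
```

The contrast the article probes is visible even here: the associative learner retains partial information about every pairing it has seen, while the hypothesis tester stores a single all-or-none guess per word.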

Recent studies show that both adults and young children possess powerful statistical learning capabilities to solve the word-to-world mapping problem. However, the underlying mechanisms that make such learning possible are not yet known. With the goal of providing new insights into this issue, the research reported in this paper used an eye tracker to record the moment-by-moment eye movement data of 14-month-old babies in statistical learning tasks. Various measures were applied to such fine-grained temporal data, such as looking duration and shift rate (the number of shifts...

10.1111/j.1467-7687.2010.00958.x article EN Developmental Science 2010-03-30
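The measures named in this abstract, looking duration and shift rate, are simple to compute once gaze has been coded into fixations. A minimal sketch assuming a hypothetical (duration, target) fixation format, not the paper's actual coding pipeline:

```python
def gaze_measures(fixations):
    """Compute simple gaze measures from a fixation sequence.

    `fixations` is a list of (duration_sec, target) tuples in
    temporal order, e.g. [(0.4, "obj_a"), (0.2, "obj_b"), ...].
    Returns total looking duration per target and the shift rate
    (shifts between targets per second of looking).
    """
    looking = {}
    shifts = 0
    prev = None
    total = 0.0
    for duration, target in fixations:
        looking[target] = looking.get(target, 0.0) + duration
        total += duration
        if prev is not None and target != prev:
            shifts += 1
        prev = target
    shift_rate = shifts / total if total else 0.0
    return looking, shift_rate
```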

Head-mounted video cameras (with and without an eye camera to track gaze direction) are being increasingly used to study infants' and young children's visual environments and provide new and often unexpected insights about the world from a child's point of view. The challenge in using head cameras is principally conceptual and concerns the match between what these cameras measure and the research question. Head cameras record the scene in front of the wearer's face and thus answer questions about those head-centered scenes. In this "Tools of the Trade" article, we consider the unique...

10.1080/15248372.2014.933430 article EN Journal of Cognition and Development 2014-09-02

Joint attention has been extensively studied in the developmental literature because of the overwhelming evidence that the ability to socially coordinate visual attention to an object is essential to healthy developmental outcomes, including language learning. The goal of this study was to understand the complex system of sensory-motor behaviors that may underlie the establishment of joint attention between parents and toddlers. In an experimental task, parents and toddlers played together with multiple toys. We objectively measured joint attention, and the behaviors underlying it, using a dual...

10.1111/cogs.12366 article EN publisher-specific-oa Cognitive Science 2016-03-25

There are an infinite number of possible word-to-world pairings. One way children could learn words at an early stage is by computing statistical regularities across different modalities: pairing spoken words with referents in the co-occurring extralinguistic environment, collecting a set of such pairs, and then figuring out the common elements. This paper provides computational evidence that this mechanism works for object name learning. Moreover, young children learn words much more effectively and efficiently at later stages. Could...

10.1080/15475440701739353 article EN Language Learning and Development 2008-01-07

Object names are a major component of early vocabularies, and learning object names depends on being able to visually recognize objects in the world. However, the fundamental visual challenge of the moment-to-moment variations in object appearances that learners must resolve has received little attention in word learning research. Here we provide the first evidence that image-level variability matters and may be the link that connects infant object manipulation to vocabulary development. Using head-mounted eye tracking, the present study objectively measured...

10.1111/desc.12816 article EN publisher-specific-oa Developmental Science 2019-02-16