- Multisensory Perception and Integration
- Neuroscience and Music Perception
- Hearing Loss and Rehabilitation
- Phonetics and Phonology Research
- Color Perception and Design
- Speech and Audio Processing
- Animal Vocal Communication and Behavior
- Categorization, Perception, and Language
- Neural Dynamics and Brain Function
- Reading and Literacy Development
- Neurobiology of Language and Bilingualism
- Subtitles and Audiovisual Media
- Neural and Behavioral Psychology Studies
- Tactile and Sensory Interactions
- Olfactory and Sensory Function Studies
- Hearing Impairment and Communication
- Child and Animal Learning Development
- Advanced MRI Techniques and Applications
- Noise Effects and Management
- EEG and Brain-Computer Interfaces
- Face Recognition and Perception
- Child and Adolescent Psychosocial and Emotional Development
- Sensory Analysis and Statistical Methods
- Neuroendocrine Regulation and Behavior
- Language Development and Disorders
Tilburg University
2013-2025
Basque Center on Cognition, Brain and Language
2013-2022
Lip-reading is crucial for understanding speech in challenging conditions. But how the brain extracts meaning from silent, visual speech is still under debate. Lip-reading in silence activates the auditory cortices, but it is not known whether such activation reflects immediate synthesis of the corresponding speech stimulus or imagery of unrelated sounds. To disentangle these possibilities, we used magnetoencephalography to evaluate how cortical activity in 28 healthy adult humans (17 females) entrained to the speech envelope and lip movements (mouth...
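Entrainment of this kind is often quantified as spectral coherence between a stimulus feature (e.g., the speech envelope) and the neural signal. As a minimal illustration with synthetic signals (a toy sketch, not the study's actual MEG pipeline; the 4 Hz "syllabic" rate and noise levels are assumptions):

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
fs = 200                       # sampling rate in Hz (assumed)
t = np.arange(0, 60, 1 / fs)   # 60 s of synthetic data

# A quasi-syllabic 4 Hz "speech envelope" and a noisy cortical
# signal that partially follows it.
envelope = 1 + np.sin(2 * np.pi * 4 * t)
cortical = 0.5 * envelope + rng.normal(size=t.size)

# Magnitude-squared coherence peaks at frequencies where the two
# signals share a consistent phase relation.
f, coh = coherence(envelope, cortical, fs=fs, nperseg=2 * fs)
peak_freq = f[np.argmax(coh)]
print(peak_freq)  # ~4.0 Hz, the shared modulation rate
```

A coherence spectrum like this, computed per cortical source, is one standard way to map which regions track the stimulus and at which timescale.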
Hyperscanning refers to obtaining simultaneous neural recordings from more than one person (Montague et al., 2002), and can be used to study interactive situations. In particular, hyperscanning with electroencephalography (EEG) is becoming increasingly popular, since it allows researchers...
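A common dependent measure in EEG hyperscanning is inter-brain phase synchrony, e.g., the phase-locking value (PLV) between band-limited signals from two people. A toy sketch with synthetic signals (an assumption for illustration, not the pipeline of any cited study):

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(1)
fs = 250
t = np.arange(0, 10, 1 / fs)   # 10 s of synthetic "EEG"

# Two interacting "brains" sharing a 10 Hz rhythm, plus an
# unrelated control recording.
shared = np.sin(2 * np.pi * 10 * t)
brain_a = shared + 0.3 * rng.normal(size=t.size)
brain_b = shared + 0.3 * rng.normal(size=t.size)
independent = rng.normal(size=t.size)

def plv(x, y):
    """Phase-locking value: 1 = perfectly constant phase lag, 0 = none."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * dphi)))

print(plv(brain_a, brain_b), plv(brain_a, independent))
```

The interacting pair yields a PLV near 1, the unrelated pair a PLV near 0; in practice the signals would first be band-pass filtered and the statistic compared against surrogate data.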
One potentially relevant neurophysiological marker of internalizing problems (anxiety/depressive symptoms) is the late positive potential (LPP), as it is related to the processing of emotional stimuli. For the first time, to our knowledge, we investigated the value of the LPP as a marker for general and specific anxiety and depressive symptoms at preschool age. At age 4 years, children (N = 84) passively viewed a series of neutral, pleasant, and unpleasant pictures selected from the International Affective Pictures System. The LPP to each picture was measured via electroencephalography (EEG...
Humans' extraordinary ability to understand speech in noise relies on multiple processes that develop with age. Using magnetoencephalography (MEG), we characterize the underlying neuromaturational basis by quantifying how cortical oscillations in 144 participants (aged 5-27 years) track phrasal and syllabic structures in connected speech mixed with different types of noise. While the extraction of prosodic cues from clear speech was stable during development, its maintenance in a multi-talker background matured rapidly up to age...
Listeners hearing an ambiguous speech sound flexibly adjust their phonetic categories in accordance with lipread information telling what the phoneme should be (recalibration). Here, we tested the stability of lipread-induced recalibration over time. Listeners were exposed to a sound halfway between /t/ and /p/ that was dubbed onto a face articulating either /t/ or /p/. When tested immediately, listeners were more likely to categorize the ambiguous sound in line with the lipread phoneme. This aftereffect dissipated quickly with prolonged testing and did not reappear after a 24-hour...
Listeners use lipread information to adjust the phonetic boundary between two speech categories (phonetic recalibration; Bertelson et al., 2003). Here, we examined recalibration while listeners were engaged in a visuospatial or verbal working memory task under different load conditions. Phonetic recalibration was, like selective adaptation, not affected by the concurrent task. This result indicates that recalibration is a low-level process that does not critically depend on the processes used in verbal or visuospatial working memory.
Our percept of the world is not solely determined by what we perceive and process at a given moment in time, but also depends on what we processed recently. In the present study, we investigate whether the perceived emotion of a spoken sentence is contingent upon the auditory stimulus of the preceding trial (i.e., serial dependence). Thereto, participants were exposed to spoken sentences whose emotional affect varied by changing the prosody, which ranged from ‘happy’ to ‘fearful’. Participants were instructed to rate the emotion. We found a positive serial dependence for...
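A positive serial dependence of this kind is typically estimated by regressing the current response error on the difference between the previous and current stimulus: a positive slope means responses are attracted toward the preceding trial. A self-contained simulation (illustrative only; the attraction strength and noise level are assumed, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
# Stimulus on a continuous morph axis, e.g. fearful (-1) .. happy (+1).
stimulus = rng.uniform(-1, 1, n)

pull = 0.2  # assumed attraction toward the previous stimulus
response = stimulus.copy()
response[1:] += pull * (stimulus[:-1] - stimulus[1:])
response += 0.05 * rng.normal(size=n)  # response noise

# Regress current error on (previous - current) stimulus difference.
error = response[1:] - stimulus[1:]
slope = np.polyfit(stimulus[:-1] - stimulus[1:], error, 1)[0]
print(round(slope, 2))  # ~0.2: recovers the simulated attraction
```

A slope reliably above zero in participants' data is the signature of positive serial dependence; a negative slope would instead indicate a repulsive (adaptation-like) effect.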
Perception of vocal affect is influenced by the concurrent sight of an emotional face. We demonstrate that an emotional face can also induce recalibration of vocal affect. Participants were exposed to videos of a ‘happy’ or ‘fearful’ face in combination with a slightly incongruous sentence with ambiguous prosody. After this exposure, test sentences were rated as more ‘happy’ when the exposure phase contained ‘happy’ instead of ‘fearful’ faces. This auditory shift likely reflects recalibration induced by error minimization of the inter-sensory discrepancy. In line with this view, prosody was...
Although infant speech perception is often studied in isolated modalities, infants' experience with speech is largely multimodal (i.e., the sounds they hear are accompanied by articulating faces). Across two experiments, we tested infants' sensitivity to the relationship between the auditory and visual components of audiovisual speech in their native (English) and a non-native (Spanish) language. In Experiment 1, looking times were measured during a preferential looking task in which infants saw two simultaneous streams of a story, one in English and the other in Spanish,...
Incongruent audiovisual speech stimuli can lead to perceptual illusions such as fusions or combinations. Here, we investigated the underlying integration process by measuring ERPs. We observed that the visual speech-induced suppression of the P2 amplitude (which is generally taken as a measure of integration) for fusions was similar to that obtained with fully congruent stimuli, whereas for combinations it was larger. We argue that these effects arise because phonetic incongruency is solved differently for both types of stimuli.
The current study investigates how second language auditory word recognition, in early and highly proficient Spanish–Basque (L1-L2) bilinguals, is influenced by crosslinguistic phonological-lexical interactions and semantic priming. Phonological overlap between a word and its translation equivalent (phonological cognate status) and the semantic relatedness of a preceding prime were manipulated. Experiment 1 examined recognition performance in noisy listening conditions that introduce a high degree of uncertainty, whereas...
Trust is an aspect critical to human social interaction, and research has identified many cues that help in the assimilation of this trait. Two of these cues are the pitch of the voice and the width-to-height ratio of the face (fWHR). Additionally, research has indicated that the content of a spoken sentence itself has an effect on trustworthiness, a finding that has not yet been brought into multisensory research. The current study aims to investigate previously developed theories of trust in relation to vocal pitch, fWHR, and sentence content in a multimodal setting. Twenty-six female participants...
Speech perception is influenced by vision through a process of audiovisual integration. This is demonstrated by the McGurk illusion, where visual speech (for example /ga/) dubbed with incongruent auditory speech (such as /ba/) leads to a modified percept (/da/). Recent studies have indicated that the processing of the incongruent stimuli used in McGurk paradigms involves mechanisms of both general and specific mismatch processing and modulates induced theta-band (4-8 Hz) oscillations. Here, we investigated whether this theta modulation merely reflects mismatch processing or,...