Martijn Baart

ORCID: 0000-0002-5015-4265
Research Areas
  • Multisensory perception and integration
  • Neuroscience and Music Perception
  • Hearing Loss and Rehabilitation
  • Phonetics and Phonology Research
  • Color perception and design
  • Speech and Audio Processing
  • Animal Vocal Communication and Behavior
  • Categorization, perception, and language
  • Neural dynamics and brain function
  • Reading and Literacy Development
  • Neurobiology of Language and Bilingualism
  • Subtitles and Audiovisual Media
  • Neural and Behavioral Psychology Studies
  • Tactile and Sensory Interactions
  • Olfactory and Sensory Function Studies
  • Hearing Impairment and Communication
  • Child and Animal Learning Development
  • Advanced MRI Techniques and Applications
  • Noise Effects and Management
  • EEG and Brain-Computer Interfaces
  • Face Recognition and Perception
  • Child and Adolescent Psychosocial and Emotional Development
  • Sensory Analysis and Statistical Methods
  • Neuroendocrine regulation and behavior
  • Language Development and Disorders

Tilburg University
2013-2025

Basque Center on Cognition, Brain and Language
2013-2022

Lip-reading is crucial for understanding speech in challenging conditions. But how the brain extracts meaning from silent, visual speech is still under debate. Lip-reading in silence activates auditory cortices, but it is not known whether such activation reflects immediate synthesis of the corresponding auditory stimulus or imagery of unrelated sounds. To disentangle these possibilities, we used magnetoencephalography to evaluate how the cortical activity of 28 healthy adult humans (17 females) entrained to the speech envelope and lip movements (mouth...

10.1523/jneurosci.1101-19.2019 article EN cc-by-nc-sa Journal of Neuroscience 2019-12-30

Hyperscanning refers to obtaining simultaneous neural recordings from more than one person (Montague et al., 2002), and can be used to study interactive situations. In particular, hyperscanning with electroencephalography (EEG) is becoming increasingly popular, since it allows researchers...

10.1016/j.mex.2019.02.021 article EN cc-by MethodsX 2019-01-01

One potentially relevant neurophysiological marker of internalizing problems (anxiety/depressive symptoms) is the late positive potential (LPP), as it is related to the processing of emotional stimuli. For the first time, to our knowledge, we investigated the value of the LPP as a marker for general and specific anxiety and depressive symptoms at preschool age. At age 4 years, children (N = 84) passively viewed a series of neutral, pleasant, and unpleasant pictures selected from the International Affective Picture System. Brain activity to each picture was measured via electroencephalography (EEG...

10.1016/j.ijpsycho.2020.06.005 article EN cc-by International Journal of Psychophysiology 2020-06-16

Humans' extraordinary ability to understand speech in noise relies on multiple processes that develop with age. Using magnetoencephalography (MEG), we characterize the underlying neuromaturational basis by quantifying how cortical oscillations in 144 participants (aged 5-27 years) track phrasal and syllabic structures in connected speech mixed with different types of noise. While the extraction of prosodic cues from clear speech was stable during development, its maintenance in a multi-talker background matured rapidly up to age...

10.1016/j.dcn.2022.101181 article EN cc-by-nc-nd Developmental Cognitive Neuroscience 2022-11-26

Listeners hearing an ambiguous speech sound flexibly adjust their phonetic categories in accordance with lipread information telling what the phoneme should be (recalibration). Here, we tested the stability of lipread-induced recalibration over time. Listeners were exposed to an ambiguous sound halfway between /t/ and /p/ that was dubbed onto a face articulating either /t/ or /p/. When tested immediately, listeners were more likely to categorize the ambiguous sound in accordance with the lipread phoneme. This aftereffect dissipated quickly with prolonged testing and did not reappear after a 24-hour...

10.1177/0023830909103178 article EN Language and Speech 2009-06-01

Listeners use lipread information to adjust the phonetic boundary between two speech categories (phonetic recalibration; Bertelson et al., 2003). Here, we examined recalibration while listeners were engaged in a visuospatial or verbal working memory task under different load conditions. Phonetic recalibration was, like selective adaptation, not affected by the concurrent task. This result indicates that recalibration is a low-level process that does not critically depend on processes used in verbal or visuospatial memory.

10.1007/s00221-010-2264-9 article EN cc-by-nc Experimental Brain Research 2010-04-30

Our percept of the world is not solely determined by what we perceive and process at a given moment in time, but also depends on what we processed recently. In the present study, we investigate whether the perceived emotion of a spoken sentence is contingent upon the auditory stimulus of the preceding trial (i.e., serial dependence). Thereto, participants were exposed to spoken sentences that varied in emotional affect by changing the prosody, which ranged from ‘happy’ to ‘fearful’. Participants were instructed to rate the emotion. We found a positive serial dependence for...

10.1177/03010066241235562 article EN cc-by Perception 2024-03-14

Perception of vocal affect is influenced by the concurrent sight of an emotional face. We demonstrate that the face can also induce recalibration of vocal affect. Participants were exposed to videos of a 'happy' or 'fearful' face in combination with a slightly incongruous sentence with ambiguous prosody. After this exposure, test sentences were rated as more 'happy' when the exposure phase contained 'happy' instead of 'fearful' faces. This auditory shift likely reflects recalibration induced by minimization of the inter-sensory discrepancy. In line with this view, prosody was...

10.1007/s00221-018-5270-y article EN cc-by Experimental Brain Research 2018-04-25

Although infant speech perception is often studied in isolated modalities, infants' experience with speech is largely multimodal (i.e., the sounds they hear are accompanied by articulating faces). Across two experiments, we tested infants' sensitivity to the relationship between the auditory and visual components of audiovisual speech in their native (English) and non-native (Spanish) language. In Experiment 1, looking times were measured during a preferential looking task in which infants saw simultaneous visual streams of a story, one in English and the other in Spanish,...

10.1371/journal.pone.0126059 article EN cc-by PLoS ONE 2015-04-30

Incongruent audiovisual speech stimuli can lead to perceptual illusions such as fusions or combinations. Here, we investigated the underlying integration process by measuring ERPs. We observed that the visual speech-induced suppression of the P2 amplitude (which is generally taken as a measure of integration) for fusions was similar to that obtained with fully congruent stimuli, whereas for combinations it was larger. We argue that these effects arise because the phonetic incongruency is solved differently for both types of stimuli.

10.1111/ejn.13734 article EN cc-by European Journal of Neuroscience 2017-10-04

The current study investigates how second language auditory word recognition, in early and highly proficient Spanish–Basque (L1-L2) bilinguals, is influenced by crosslinguistic phonological-lexical interactions and semantic priming. Phonological overlap between a word and its translation equivalent (phonological cognate status), and the semantic relatedness of a preceding prime, were manipulated. Experiment 1 examined recognition performance in noisy listening conditions that introduce a high degree of uncertainty, whereas...

10.1017/s1366728920000164 article EN Bilingualism Language and Cognition 2020-02-24

Trust is an aspect critical to human social interaction, and research has identified many cues that help in the assimilation of this trait. Two of these are the pitch of the voice and the width-to-height ratio of the face (fWHR). Additionally, research has indicated that the content of a spoken sentence itself has an effect on trustworthiness; a finding that has not yet been brought into multisensory research. The current study aims to investigate previously developed theories of trust in relation to vocal pitch and fWHR in a multimodal setting. Twenty-six female participants...

10.1163/22134808-bja10119 article EN cc-by Multisensory Research 2024-04-03

Speech perception is influenced by vision through a process of audiovisual integration. This is demonstrated by the McGurk illusion, where visual speech (for example /ga/) dubbed with incongruent auditory speech (such as /ba/) leads to a modified percept (/da/). Recent studies have indicated that processing the type of stimuli used in McGurk paradigms involves mechanisms of both general and speech-specific mismatch processing, and modulates induced theta-band (4-8 Hz) oscillations. Here, we investigated whether the theta modulation merely reflects mismatch processing or,...

10.1371/journal.pone.0219744 article EN cc-by PLoS ONE 2019-07-16