- Neuroscience and Music Perception
- Neural dynamics and brain function
- Neurobiology of Language and Bilingualism
- Hearing Loss and Rehabilitation
- Stuttering Research and Treatment
- Phonetics and Phonology Research
- Animal Vocal Communication and Behavior
- Action Observation and Synchronization
- Music Technology and Sound Studies
- Human-Animal Interaction Studies
- Diverse Music Education Insights
- Music and Audio Processing
- Hearing Impairment and Communication
- Multisensory perception and integration
- Animal Behavior and Welfare Studies
- Cognitive Science and Education Research
University of York
2025
University College London
2014-2022
Newcastle University
2020-2022
National Hospital for Neurology and Neurosurgery
2016
Humans can generate mental auditory images of voices or songs, sometimes perceiving them almost as vividly as perceptual experiences. The functional networks supporting imagery have been described, but less is known about the systems associated with interindividual differences in imagery. Combining voxel-based morphometry and fMRI, we examined the structural basis of how auditory images are subjectively perceived, and explored associations between imagery, sensory-based processing, and visual imagery. Vividness correlated with gray matter...
Purpose: Talking in unison with a partner, otherwise known as choral speech, reliably induces fluency in people who stutter (PWS). This effect may arise because choral speech addresses a hypothesized motor timing deficit by giving PWS an external rhythm to align with and scaffold their utterances onto. The present study tested this theory by comparing the choral speech of the two groups, since existing accounts do not assess whether both groups change in similar ways when talking chorally. Method: Twenty adults who stutter and 20 neurotypical controls read a passage on their own and then a second passage chorally...
When talkers speak in masking sounds, their speech undergoes a variety of acoustic and phonetic changes. These changes are known collectively as the Lombard effect. Most behavioural and neuroimaging research in this area has concentrated on the effect of energetic maskers, such as white noise, on speech. Previous fMRI studies have argued that neural responses to speaking in noise are driven by the quality of auditory feedback—that is, the audibility of the speaker's voice over the masker. However, we also frequently produce speech in the presence of informational...
Lind, Hall, Breidegard, Balkenius, and Johansson (2014a, 2014b) recently published articles tackling a core question concerning speech production: At which stage of processing are communicative intentions specified? Taking a position contrary to dominant models of production (e.g., Levelt, 2001), they suggested that utterances are “often semantically underspecified” (Lind, Hall, Breidegard, Balkenius, & Johansson, 2014a, p. 8) before articulation, and that “auditory feedback” (Lind et al.) is an important mechanism for specifying the...
General Commentary, Front. Hum. Neurosci., Sec. Cognitive Neuroscience, Volume 8, 16 December 2014. https://doi.org/10.3389/fnhum.2014.00964
This study tested the idea that stuttering is caused by over-reliance on auditory feedback. The theory is motivated by the observation that many fluency-inducing situations, such as synchronised speech and masked speech, alter or obscure the talker’s auditory feedback. Typical speakers show ‘speaking-induced suppression’ of neural activation in the superior temporal gyrus (STG) during self-produced vocalisation, compared to listening to recorded speech. If people who stutter over-attend to auditory feedback, they may lack this suppression...
When we have spoken conversations, it is usually in the context of competing sounds within our environment. Speech can be masked by many different kinds of sounds, for example, machinery noise and the speech of others, and these place differing demands on cognitive resources. In this talk, I will present data from a series of functional magnetic resonance imaging (fMRI) studies in which the informational properties of background sounds have been manipulated to make them more or less similar to speech. I will demonstrate neural effects...
Imagine you are at a party with loud music playing. What would it be like trying to speak to your friend in all that noise? Scientists call background noise like this a “masking sound” because it covers up other sounds, in two ways. The sound might be so loud that it blocks out other noises, or it might contain information that is distracting. Maybe it is your favorite song and you cannot help singing along! Which of these do you think affects you most when you talk? We decided to find out by putting people in a brain scanner and asking them to talk while we played various noises over the...
Previous research has shown that human adults can easily discriminate two individual zebra finches (Taeniopygia guttata) by their signature songs, struggle to discriminate rhesus monkeys (Macaca mulatta) by their calls, and are unable to discriminate dogs (Canis familiaris) by their barks. The purpose of the present experiment was to examine whether acoustic discrimination of non-primate heterospecifics is limited to species producing stereotyped songs, or whether it is possible with the vocalizations of other species as well. This was tested here with calls of large-billed crows (Corvus...