- Phonetics and Phonology Research
- Hearing Loss and Rehabilitation
- Language Development and Disorders
- Neurobiology of Language and Bilingualism
- Linguistic Variation and Morphology
- Speech and dialogue systems
- Multisensory perception and integration
- Reading and Literacy Development
- Neuroscience and Music Perception
- Noise Effects and Management
- Speech Recognition and Synthesis
- Speech and Audio Processing
- Syntax, Semantics, Linguistic Variation
- Hearing Impairment and Communication
- Hearing, Cochlea, Tinnitus, Genetics
- Natural Language Processing Techniques
- Categorization, perception, and language
- Color perception and design
- Language, Discourse, Communication Strategies
- Anxiety, Depression, Psychometrics, Treatment, Cognitive Processes
- Language, Metaphor, and Cognition
- Voice and Speech Disorders
- Language, Communication, and Linguistic Studies
- Stuttering Research and Treatment
- Text Readability and Simplification
University of York
2016-2025
Google (United States)
2009-2023
University of Bristol
2004-2015
University of Plymouth
2015
Hanyang University
2006
University of California, Los Angeles
2006
Stanford University
2006
House Clinic
2000-2002
Johns Hopkins University
1999-2001
At Bristol
2001
This research examines the issue of speech segmentation in 9-month-old infants. Two cues known to carry probabilistic information about word boundaries were investigated: phonotactic regularity and prosodic pattern. The stimuli used in four head-turn preference experiments were bisyllabic CVC·CVC nonwords bearing primary stress on either the first or the second syllable (strong/weak vs. weak/strong). Stimuli also differed with respect to the phonotactic nature of their cross-syllabic C·C cluster. Clusters had a low...
A central question in psycholinguistic research is how listeners isolate words from connected speech despite the paucity of clear word-boundary cues in the signal. A large body of empirical evidence indicates that word segmentation is promoted by both lexical (knowledge-derived) and sublexical (signal-derived) cues. However, an account of how these cues operate in combination or in conflict is lacking. The present study fills this gap by assessing speech segmentation when cues are systematically pitted against each other. The results demonstrate that listeners do not...
In this study, the authors examined whether rhythm metrics capable of distinguishing languages with high and low temporal stress contrast can also distinguish among control and dysarthric speakers of American English with perceptually distinct rhythm patterns. Method: Acoustic measures of vocalic and consonantal segment durations were obtained for speech samples from 55 speakers across 5 groups (hypokinetic, hyperkinetic, flaccid-spastic, and ataxic dysarthrias, and controls). Segment durations were used to calculate standard and new rhythm metrics....
Acoustic metrics of contrastive speech rhythm, based on vocalic and intervocalic interval durations, are intended to capture stable typological differences between languages. They should consequently be robust to variation across speakers, sentence materials, and measurers. This paper assesses the impact of these sources of variation on %V (the proportion of the utterance comprised of vocalic intervals), VarcoV (the rate-normalized standard deviation of vocalic interval duration), and nPVI-V (a measure of durational variability between successive pairs of vocalic intervals). Five measurers...
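The abstracts above only name the three rhythm metrics, so the following is a minimal sketch of how they are commonly computed, assuming vocalic and intervocalic interval durations (in ms) have already been segmented; the formulas follow the standard definitions in the rhythm-metrics literature rather than anything stated in these abstracts.

```python
# Sketch of %V, VarcoV, and nPVI-V from pre-segmented interval durations (ms).
# The example durations below are made up for illustration.
from statistics import mean, pstdev


def percent_v(vocalic, intervocalic):
    """%V: percentage of total utterance duration made up of vocalic intervals."""
    total = sum(vocalic) + sum(intervocalic)
    return 100 * sum(vocalic) / total


def varco_v(vocalic):
    """VarcoV: standard deviation of vocalic interval durations,
    divided by the mean duration (x 100) to normalize for speech rate."""
    return 100 * pstdev(vocalic) / mean(vocalic)


def npvi_v(vocalic):
    """nPVI-V: mean normalized durational difference between
    successive pairs of vocalic intervals (x 100)."""
    diffs = [abs(a - b) / ((a + b) / 2) for a, b in zip(vocalic, vocalic[1:])]
    return 100 * mean(diffs)


vocalic = [80, 120, 60, 150, 90]        # hypothetical vocalic intervals
intervocalic = [70, 110, 95, 60, 85]    # hypothetical intervocalic intervals
print(round(percent_v(vocalic, intervocalic), 1),
      round(varco_v(vocalic), 1),
      round(npvi_v(vocalic), 1))
```

Because VarcoV and nPVI-V are both normalized by local or global mean duration, they are less sensitive to overall speech rate than raw standard deviations, which is why studies like the one above can compare them across speakers and measurers.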
In a study of optical cues to the visual perception of stress, three American English talkers spoke words that differed in lexical stress and sentences that differed in phrasal stress, while video and movements of the face were recorded. The production of stressed and unstressed syllables from these utterances was analyzed along many measures of facial movement, which were generally larger and faster in the stressed condition. In a perception experiment, 16 perceivers identified the location of stress in forced-choice judgments of video clips (without audio). Phrasal stress was better perceived than lexical stress. The relation...
Verbal communication in noisy backgrounds is challenging. Understanding speech in background noise that fluctuates in intensity over time is particularly difficult for hearing-impaired listeners with a sensorineural hearing loss (SNHL). The reduction of fast-acting cochlear compression associated with SNHL exaggerates the perceived fluctuations of amplitude-modulated sounds. These SNHL-induced changes in the coding of modulated sounds may have a detrimental effect on the ability to understand speech in the presence of modulated noise. To date, direct...
Eight experiments tested the hypothesis that infants' word segmentation abilities are reducible to the parsing of familiar sound patterns regardless of actual word boundaries. This was disconfirmed in experiments using the headturn preference procedure: 8.5-month-olds did not mis-segment a consonant-vowel-consonant (CVC) word (e.g., dice) from passages containing the corresponding phonemic pattern across a word boundary (C#VC#; "cold ice"), but they segmented it when the word was really present ("roll dice"). However, segmentation of a real vowel-consonant...
Previous research suggests that a language learned during early childhood is completely forgotten when contact with it is severed. In contrast with these findings, we report leftover traces of early language exposure in individuals in their adult years, despite a complete absence of explicit memory for the language. Specifically, native English speakers under the age of 40 selectively relearned subtle Hindi or Zulu sound contrasts that they once knew. However, those over 40 failed to show any relearning, and young control participants with no previous exposure showed...
Perceiving speech while performing another task is a common challenge in everyday life. How the brain controls resource allocation during speech perception remains poorly understood. Using functional magnetic resonance imaging (fMRI), we investigated the effect of cognitive load on speech perception by examining the responses of participants performing phoneme discrimination and a visual working memory task simultaneously. The memory task involved holding either a single meaningless image (low load) or four different images (high load). Performing the task under high...
Purpose: Studies of speech-in-speech listening show that intelligible maskers are more detrimental to target speech perception than unintelligible maskers, an effect we refer to as linguistic interference. Research also shows that performance improves over time through adaptation. The extent to which the speed of adaptation differs for intelligible and unintelligible maskers, and whether this pattern is reflected in changes in listening effort, are open questions. Method: In a preregistered study, native English listeners transcribed target sentences against a masker (time-forward...
Although word stress has been hailed as a powerful speech-segmentation cue, the results of 5 cross-modal fragment priming experiments revealed limitations to stress-based segmentation. Specifically, the stress pattern of auditory primes failed to have any effect on lexical decision latencies to related visual targets. A determining factor was whether the onset of the prime was coarticulated with the preceding speech fragment. Uncoarticulated (i.e., concatenated) primes facilitated priming; coarticulated ones did not. However, when the primes were...
It has been posited that the role of prosody in lexical segmentation is elevated when the speech signal is degraded or unreliable. Using predictions from Cutler and Norris' [J. Exp. Psychol. Hum. Percept. Perform. 14, 113–121 (1988)] metrical segmentation strategy hypothesis as a framework, this investigation examined how individual suprasegmental and segmental cues to syllabic stress contribute differentially to the recognition of strong and weak syllables for the purpose of lexical segmentation. Syllabic contrastivity was reduced...