- Phonetics and Phonology Research
- Language Development and Disorders
- Hearing Impairment and Communication
- Hearing Loss and Rehabilitation
- Neurobiology of Language and Bilingualism
- Voice and Speech Disorders
- Reading and Literacy Development
- Multisensory Perception and Integration
- Stuttering Research and Treatment
- Speech and Audio Processing
- Action Observation and Synchronization
- Hearing, Cochlea, Tinnitus, Genetics
- Knowledge Societies in the 21st Century
- Advanced Neuroimaging Techniques and Applications
- Cleft Lip and Palate Research
- Emotion and Mood Recognition
- Engineering and Information Technology
- Literacy and Educational Practices
- Advanced Algebra and Logic
- Neuroscience and Music Perception
- Linguistics and Terminology Studies
- EEG and Brain-Computer Interfaces
- Attention Deficit Hyperactivity Disorder
- Developmental and Educational Neuropsychology
- Spatial Neglect and Hemispheric Dysfunction
Universidad de Málaga
2015–2025
Instituto de Investigación Biomédica de Málaga
2017
Over the last 50 years, researchers have debated the lexical or grammatical nature of children's early multiword utterances. Due to methodological limitations, the issue remains controversial. This corpus study explores the effect of grammatical, lexical, and pragmatic categories on mean length of utterance (MLU). A total of 312 speech samples from high- and low-socioeconomic-status (SES) French-speaking children aged 2–4 years were annotated with a part-of-speech tagger. Multiple regression analyses show that...
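As a minimal illustration of the MLU measure used in this study (computed here over words rather than morphemes; the helper name and the toy utterances are hypothetical, not from the corpus):

```python
def mlu(utterances):
    """Mean length of utterance (in words) over a list of transcribed utterances."""
    lengths = [len(u.split()) for u in utterances]
    return sum(lengths) / len(lengths)

# Toy child-speech sample: (2 + 3 + 1) / 3 utterances = 2.0 words per utterance.
sample = ["mommy go", "want juice now", "no"]
print(mlu(sample))  # 2.0
```

In practice MLU is usually counted in morphemes over CHAT-transcribed samples, so a real pipeline would tokenize morphological annotations rather than split on whitespace.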
This paper describes early language development in a deaf Spanish child fitted with a cochlear implant (CI) when she was 1;6 years old. The girl had been exposed to Cued Speech (CS) since that age. The main aim of the research was to identify potential areas of slow development as well as areas of benefit from the CI and CS. At the beginning of the study she was 2;6 (she had been using the CI for 12 months). Adult–child 30-minute sessions were videotaped every week for 1 year (13–24 months of CI use) and transcribed according to CHAT norms. Measures included phonemic inventory, intelligibility,...
Donepezil (DP), a cognitive-enhancing drug targeting the cholinergic system, combined with massed sentence repetition training, augmented and speeded up recovery of speech production deficits in patients with chronic conduction aphasia and extensive left hemisphere infarctions (Berthier et al., 2014). Nevertheless, a still unsettled question is whether such improvements correlate with restorative structural changes in the gray matter and white matter pathways mediating speech production. In the present study, we used a pharmacological...
Lesion-symptom mapping studies reveal that selective damage to one or more components of the speech production network can be associated with foreign accent syndrome, changes in regional accent (e.g., from a Parisian to an Alsatian accent), a stronger accent, or the re-emergence of a previously learned and dormant accent. Here, we report accent loss after rapidly regressive Broca's aphasia in three Argentinean patients who had suffered unilateral or bilateral focal lesions of this network. All were monolingual speakers of different native...
Foreign accent syndrome (FAS) is a speech disorder defined by the emergence of a peculiar manner of articulation and intonation which is perceived as foreign. In most cases, acquired FAS (AFAS) is secondary to small focal lesions involving components of a bilaterally distributed neural network for speech production. In the past few years, FAS has also been described in different psychiatric conditions (conversion disorder, bipolar disorder, schizophrenia) as well as in developmental disorders (specific language impairment, apraxia...
This study aims to investigate the impact of cochlear implant (CI) use on phonological development. The main participants were a group of 14 deaf children who had received their CIs in the second year of life and had been wearing them for 24 months. A normally hearing (NH) group was also evaluated. Data were obtained from a non-word repetition (NWR) task, and various segmental and suprasegmental measures were computed from the NWR data. CI children scored significantly below controls on one feature (i.e. place of articulation) at the segment...
This paper studies the linguistic input attended to by a deaf child exposed to cued speech (CS) in the final part of her prelinguistic period (18–24 months). The subjects are the child, her mother, and her therapist. Analyses provided data about the quantity of child-directed input (oral input of more than 1,000 words per half-hour session; cueing ratios of 60% of oral input and 55% of all input), its quality (lexical variety, grammatical complexity, etc.), and other properties of the interaction (child attention, use of spontaneous gestures). Results show that...
It has been proposed that cochlear implant users may develop robust categorical perception skills but show limited precision in perception. This article explores whether a parallel contrast is observable in production and whether, despite acquiring typical linguistic representations, their early words are inconsistent. The participants were eight Spanish-learning deaf children implanted before their second birthday. Two studies examined the transition from babbling to words and the one-word period. Study 1...
Knowledge of the patterns of repetition among individuals who develop language deficits in association with right hemisphere lesions (crossed aphasia) is very limited. Available data indicate that some crossed aphasics experiencing phonological processing deficits are not heavily influenced by lexical-semantic variables (lexicality, imageability, and frequency), as is regularly reported in phonologically-impaired cases with left hemisphere damage. Moreover, in view of the fact that crossed aphasia is rare, information on the role of cortical areas and white matter...
The acquisition and evolution of speech production, discourse, and communication can be negatively impacted by brain malformations. We describe, for the first time, a case of developmental dynamic dysphasia (DDD) in a right-handed adolescent boy (subject D) with cortical malformations involving language-eloquent regions (inferior frontal gyrus) in both the left and right hemispheres. Language evaluation revealed markedly reduced verbal output affecting phonemic and semantic fluency and phrase and sentence generation...
This paper presents the results of a closed-set recognition task for 80 Spanish consonant-vowel sounds (16 C × 5 V, spoken by 2 talkers) in 8-talker babble at three signal-to-noise ratios (–6, –2, +2 dB). A ranking of resistance to noise was obtained using the signal detection d′ measure, and confusion patterns were analyzed with a graphical method (confusion graphs). The resulting ranking indicated the existence of three groups: (1) high resistance: /ʧ, s, ʝ/; (2) mid resistance: /r, l, m, n/; (3) low resistance: /t, θ, x, ɡ, b, d, k, f, p/. Confusions mostly involved place...
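A noise-resistance ranking of this kind rests on the standard signal-detection sensitivity index, d′ = z(hit rate) − z(false-alarm rate). A minimal sketch follows; the function name, the log-linear correction, and the example counts are illustrative assumptions, not values from the study:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(H) - z(FA) from response counts."""
    # Log-linear correction keeps rates strictly inside (0, 1),
    # avoiding infinite z-scores when a rate is 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# A consonant identified reliably in babble yields a higher d'
# (more noise-resistant) than one confused with others.
print(d_prime(90, 10, 5, 95))
print(d_prime(60, 40, 20, 80))
```

Ranking the per-consonant d′ values then yields the high/mid/low resistance groups described above.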
This is the first study to explore lexical and grammatical development in a deaf child diagnosed with Attention Deficit Hyperactivity Disorder, Inattentive subtype (ADHD/I). The child, whose family language was Spanish, was fitted with a cochlear implant (CI) when she was 18 months old. ADHD/I, for which medication was prescribed, was diagnosed 3;6 years later. Speech samples were videotaped over 4 years of CI use, in follow-up sessions 1 year apart. Samples were transcribed according to CHAT conventions and several measures of expressive language were obtained....
Emotive speech is a non-invasive and cost-effective biomarker in a wide spectrum of neurological disorders, with computational systems built to automate the diagnosis. In order to explore the possibilities for automating routine analysis in the presence of hard-to-learn pathology patterns, we propose a framework to assess the level of competence in paralinguistic communication. Initially, the assessment relies on a perceptual experiment completed by human listeners; a model called the Aggregated Ear has been proposed that draws a conclusion...
Nasalance is a valuable clinical biomarker for hypernasality. It is computed as the ratio of the acoustic energy emitted through the nose to the total emitted through mouth and nose (eNasalance). A new approach is proposed to compute nasalance using Convolutional Neural Networks (CNNs) trained with Mel-Frequency Cepstrum Coefficients (mfccNasalance). mfccNasalance is evaluated by examining its accuracy: 1) when training and test data are from the same or different dialects; 2) with speech that differs in dynamicity (e.g. rapidly produced diadochokinetic...
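The energy-based nasalance ratio described above can be sketched directly from the two acoustic channels. This is a toy illustration (the function name and sample values are assumptions); real nasometry frames the signals over time and typically band-filters them before computing energies:

```python
def e_nasalance(nasal_samples, oral_samples):
    """Energy-based nasalance (%): nasal energy / (nasal + oral energy) * 100."""
    e_nasal = sum(x * x for x in nasal_samples)  # acoustic energy, nose channel
    e_oral = sum(x * x for x in oral_samples)    # acoustic energy, mouth channel
    return 100.0 * e_nasal / (e_nasal + e_oral)

# Equal energy in both channels gives 50% nasalance.
print(e_nasalance([0.1, -0.1], [0.1, 0.1]))  # 50.0
```

The mfccNasalance approach replaces this fixed energy ratio with a learned mapping from MFCC features, which is what allows it to generalize across dialects and utterance dynamics.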
Automatic tools to detect hypernasality have traditionally been designed to analyze sustained vowels exclusively. This is in sharp contrast with clinical recommendations, which consider it necessary to use a variety of utterance types (e.g., repeated syllables, sustained sounds, sentences, etc.). This study explores the feasibility of detecting hypernasality automatically based on speech samples other than sustained vowels. The participants were 39 patients and healthy controls. Six utterance types were used: counting 1-to-10, repetition of syllables...
Automatic evaluation of hypernasality has traditionally been computed using monophonic signals (i.e., combining the nose and mouth signals). Here, this study aimed to examine whether separate signals serve to increase the accuracy of the evaluation. Using a conventional microphone and a Nasometer, we recorded monophonic, mouth, and nose signals. Three main analyses were performed: (1) comparing the spectral distance between oral/nasalized vowels in the nose, mouth, and monophonic signals; (2) assessing Deep Neural Network (DNN) models in classifying oral/nasal sounds...