Ignacio Moreno‐Torres

ORCID: 0000-0002-2649-7145
Research Areas
  • Phonetics and Phonology Research
  • Language Development and Disorders
  • Hearing Impairment and Communication
  • Hearing Loss and Rehabilitation
  • Neurobiology of Language and Bilingualism
  • Voice and Speech Disorders
  • Reading and Literacy Development
  • Multisensory perception and integration
  • Stuttering Research and Treatment
  • Speech and Audio Processing
  • Action Observation and Synchronization
  • Hearing, Cochlea, Tinnitus, Genetics
  • Knowledge Societies in the 21st Century
  • Advanced Neuroimaging Techniques and Applications
  • Cleft Lip and Palate Research
  • Emotion and Mood Recognition
  • Engineering and Information Technology
  • Literacy and Educational Practices
  • Advanced Algebra and Logic
  • Neuroscience and Music Perception
  • Linguistics and Terminology Studies
  • EEG and Brain-Computer Interfaces
  • Attention Deficit Hyperactivity Disorder
  • Developmental and Educational Neuropsychology
  • Spatial Neglect and Hemispheric Dysfunction

Universidad de Málaga
2015-2025

Instituto de Investigación Biomédica de Málaga
2017

In the last 50 years, researchers have debated the lexical or grammatical nature of children's early multiword utterances. Due to methodological limitations, the issue remains controversial. This corpus study explores the effect of grammatical, lexical, and pragmatic categories on mean length of utterance (MLU). A total of 312 speech samples from high- and low-socioeconomic-status (SES) French-speaking children aged 2–4 years were annotated with a part-of-speech tagger. Multiple regression analyses show that...

10.1111/j.1467-8624.2012.01873.x article EN Child Development 2012-10-17
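The MLU measure named in the abstract above can be sketched in a few lines. This is a minimal illustration, not the study's code: it assumes MLU is computed in words per utterance (the excerpt does not specify the tokenization unit), and the sample utterances are invented.

```python
# Minimal sketch of mean length of utterance (MLU), in words.
# Assumes whitespace tokenization; real corpora (e.g. CHAT transcripts)
# need proper morpheme- or word-level tokenization.
def mlu(utterances):
    """Mean length of utterance: average word count per utterance."""
    lengths = [len(u.split()) for u in utterances]
    return sum(lengths) / len(lengths)

sample = ["mommy go", "want the ball", "no"]  # invented child utterances
print(mlu(sample))  # (2 + 3 + 1) / 3 = 2.0
```

A regression analysis like the one described would then treat per-sample MLU as the dependent variable and counts of grammatical, lexical, and pragmatic categories as predictors.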

This paper describes early language development in a deaf Spanish child fitted with a cochlear implant (CI) when she was 1;6 years old. The girl had been exposed to Cued Speech (CS) since that age. The main aim of the research was to identify potential areas of slow development as well as areas of benefit from CI and CS. At the beginning of this study she was 2;6 (she had been using the CI for 12 months). Adult–child 30‐minute sessions were videotaped every week for 1 year (13–24 months of CI use) and transcribed according to CHAT norms. Measures of phonemic inventory, intelligibility,...

10.1080/02699200801899145 article EN Clinical Linguistics & Phonetics 2008-01-01

Donepezil (DP), a cognitive-enhancing drug targeting the cholinergic system, combined with massed sentence repetition training augmented and speeded up recovery of speech production deficits in patients with chronic conduction aphasia and extensive left hemisphere infarctions (Berthier et al., 2014). Nevertheless, a still unsettled question is whether such improvements correlate with restorative structural changes in gray matter and white matter pathways mediating speech production. In the present study, we used a pharmacological...

10.3389/fnhum.2017.00304 article EN cc-by Frontiers in Human Neuroscience 2017-06-14

Lesion-symptom mapping studies reveal that selective damage to one or more components of the speech production network can be associated with foreign accent syndrome, changes in regional accent (e.g., from a Parisian to an Alsatian accent), a stronger accent, or the re-emergence of a previously learned and dormant accent. Here, we report accent loss after rapidly regressive Broca's aphasia in three Argentinean patients who had suffered unilateral or bilateral focal lesions in the speech production network. All were monolingual speakers of different native...

10.3389/fnhum.2015.00610 article EN cc-by Frontiers in Human Neuroscience 2015-11-05

Foreign accent syndrome (FAS) is a speech disorder defined by the emergence of a peculiar manner of articulation and intonation which is perceived as foreign. In most cases of acquired FAS (AFAS), the new accent is secondary to small focal lesions involving components of a bilaterally distributed neural network for speech production. In the past few years FAS has also been described in different psychiatric conditions (conversion disorder, bipolar disorder, schizophrenia) as well as developmental disorders (specific language impairment, apraxia...

10.3389/fnhum.2016.00399 article EN cc-by Frontiers in Human Neuroscience 2016-08-09

This study aims at investigating the impact of cochlear implant (CI) use on phonological development. The main participants were a group of 14 deaf children who had received their CIs in the second year of life and had been wearing them for 24 months. A normally hearing (NH) group was also evaluated. Data were obtained from a non-word repetition (NWR) task. Various segmental and suprasegmental measures were computed from the NWR data. The CI group scored significantly below controls in one feature (i.e. place of articulation) at the segment...

10.1016/j.jneuroling.2014.04.002 article EN cc-by-nc-nd Journal of Neurolinguistics 2014-05-16

This paper studies the linguistic input attended to by a deaf child exposed to cued speech (CS) in the final part of her prelinguistic period (18–24 months). Subjects are the child, her mother, and a therapist. Analyses have provided data about the quantity of child-directed input (oral input, more than 1,000 words per half-an-hour session; cueing ratio, 60% of oral input; 55% of input), its quality (lexical variety, grammatical complexity, etc.), and other properties of the interaction (child attention, use of spontaneous gestures). Results show that...

10.1093/deafed/enl006 article EN The Journal of Deaf Studies and Deaf Education 2006-06-15

It has been proposed that cochlear implant users may develop robust categorical perception skills, but that they show limited precision in perception. This article explores whether a parallel contrast is observable in production, and whether, despite acquiring typical linguistic representations, their early words are inconsistent. The participants were eight Spanish-learning deaf children implanted before their second birthday. Two studies examined the transition from babbling to words, and the one-word period. Study 1...

10.1017/s0305000913000056 article EN Journal of Child Language 2013-03-22

Knowledge of the patterns of repetition amongst individuals who develop language deficits in association with right hemisphere lesions (crossed aphasia) is very limited. Available data indicate that some crossed aphasics experiencing phonological processing deficits are not heavily influenced by lexical-semantic variables (lexicality, imageability, and frequency), as is regularly reported in phonologically-impaired cases with left hemisphere damage. Moreover, in view of the fact that crossed aphasia is rare, information on the role of cortical areas and white matter...

10.3389/fnhum.2013.00675 article EN cc-by Frontiers in Human Neuroscience 2013-01-01

The acquisition and evolution of speech production, discourse and communication can be negatively impacted by brain malformations. We describe, for the first time, a case of developmental dynamic dysphasia (DDD) in a right-handed adolescent boy (subject D) with cortical malformations involving language-eloquent regions (inferior frontal gyrus) of both the left and right hemispheres. Language evaluation revealed a markedly reduced verbal output affecting phonemic and semantic fluency, and phrase and sentence generation...

10.3389/fnhum.2020.00073 article EN cc-by Frontiers in Human Neuroscience 2020-03-24

This paper presents the results of a closed-set recognition task for 80 Spanish consonant-vowel sounds (16 C × 5 V, spoken by 2 talkers) in 8-talker babble (–6, –2, +2 dB). A ranking of resistance to noise was obtained using the signal detection d′ measure, and confusion patterns were analyzed with a graphical method (confusion graphs). The resulting ranking indicated the existence of three groups: (1) high resistance: /ʧ, s, ʝ/; (2) mid resistance: /r, l, m, n/; (3) low resistance: /t, θ, x, ɡ, b, d, k, f, p/. Confusions involved mostly place...

10.1121/1.4982251 article EN The Journal of the Acoustical Society of America 2017-05-01
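The d′ measure used for the noise-resistance ranking above can be sketched as the difference between z-transformed hit and false-alarm rates. This is a minimal illustration, not the study's code, and the rates below are invented.

```python
# Minimal sketch of the signal-detection d' measure:
# d' = z(hit rate) - z(false-alarm rate), with z the inverse normal CDF.
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Sensitivity index from hit and false-alarm proportions (0 < p < 1)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# A consonant identified on 90% of its presentations and falsely
# reported on 10% of other trials (illustrative values):
print(round(d_prime(0.90, 0.10), 2))  # 2.56
```

Higher d′ indicates a consonant that remains more discriminable in babble; ranking consonants by d′ yields groupings like the high/mid/low resistance classes reported.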

This is the first study to explore lexical and grammatical development in a deaf child diagnosed with Attention Deficit Hyperactivity Disorder, Inattentive sub-type (ADHD/I). The child, whose family language was Spanish, was fitted with a cochlear implant (CI) when she was 18 months old. ADHD/I, for which she was prescribed medication, was diagnosed 3;6 years later. Speech samples were videotaped over 4 years of CI use, in one follow-up session per year. Samples were transcribed according to CHAT conventions and several measures of expressive language were obtained....

10.3109/02699206.2010.488782 article EN Clinical Linguistics & Phonetics 2010-07-20

Emotive speech is a non-invasive and cost-effective biomarker in a wide spectrum of neurological disorders, with computational systems built to automate the diagnosis. In order to explore the possibilities for automating routine analysis in the presence of hard-to-learn pathology patterns, we propose a framework to assess the level of competence in paralinguistic communication. Initially, the assessment relies on a perceptual experiment completed by human listeners, and a model called the Aggregated Ear is proposed that draws a conclusion...

10.1109/taffc.2019.2908365 article EN IEEE Transactions on Affective Computing 2019-04-01

10.1016/s0214-4603(09)70024-5 article ES Revista de Logopedia Foniatría y Audiología 2009-01-01

Nasalance is a valuable clinical biomarker for hypernasality. It is computed as the ratio of acoustic energy emitted through the nose to the total energy emitted through mouth and nose (eNasalance). A new approach is proposed to compute nasalance using Convolutional Neural Networks (CNNs) trained with Mel-Frequency Cepstrum Coefficients (mfccNasalance). mfccNasalance is evaluated by examining its accuracy: 1) when train and test data are from the same or different dialects; 2) with speech that differs in dynamicity (e.g. rapidly produced diadochokinetic...

10.1371/journal.pone.0315452 article EN cc-by PLoS ONE 2024-12-31
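The energy-based nasalance (eNasalance) defined in the abstract above reduces to a simple ratio. This is a minimal sketch under the stated definition, not the study's implementation; the two short "signals" are invented samples standing in for nose- and mouth-channel waveforms.

```python
# Minimal sketch of energy-based nasalance (eNasalance):
# nasal acoustic energy over total (nose + mouth) energy, as a percentage.
def e_nasalance(nose, mouth):
    """Nasalance (%) = nose energy / (nose energy + mouth energy) * 100."""
    e_nose = sum(x * x for x in nose)
    e_mouth = sum(x * x for x in mouth)
    return 100.0 * e_nose / (e_nose + e_mouth)

nose = [0.1, 0.2, 0.1]   # illustrative nose-channel samples
mouth = [0.3, 0.4, 0.2]  # illustrative mouth-channel samples
print(round(e_nasalance(nose, mouth), 1))  # 17.1
```

The mfccNasalance approach replaces this fixed energy ratio with a CNN over MFCC features, which is what allows it to adapt to dialect and utterance dynamicity.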

Automatic tools to detect hypernasality have traditionally been designed to analyze sustained vowels exclusively. This is in sharp contrast with clinical recommendations, which consider it necessary to use a variety of utterance types (e.g., repeated syllables, sustained sounds, sentences, etc.). This study explores the feasibility of detecting hypernasality automatically based on speech samples other than sustained vowels. The participants were 39 patients and healthy controls. Six utterance types were used: counting from 1 to 10, repetition of the syllable...

10.3390/app11198809 article EN cc-by Applied Sciences 2021-09-22

Automatic evaluation of hypernasality has traditionally been computed using monophonic signals (i.e., combining nose and mouth signals). Here, this study aimed to examine whether separate nose and mouth signals serve to increase the accuracy of the evaluation. Using a conventional microphone and a Nasometer, we recorded monophonic, mouth, and nose signals. Three main analyses were performed: (1) comparing the spectral distance between oral/nasalized vowels in nose, mouth, and monophonic signals; (2) assessing Deep Neural Network (DNN) models in classifying oral/nasal sounds...

10.3390/app132312606 article EN cc-by Applied Sciences 2023-11-23
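Analysis (1) above compares spectral distances between oral and nasalized vowels across channels. The excerpt does not specify the distance measure used, so the sketch below assumes a plain Euclidean distance between two (invented) spectral vectors, purely for illustration.

```python
# Minimal sketch of a spectral distance between an oral and a nasalized
# vowel: Euclidean distance between two equal-length spectral vectors.
# The vectors below are invented stand-ins, not data from the study.
import math

def spectral_distance(spec_a, spec_b):
    """Euclidean distance between two equal-length spectra."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(spec_a, spec_b)))

oral = [1.0, 0.5, 0.2]   # illustrative spectrum of an oral vowel
nasal = [0.8, 0.7, 0.4]  # illustrative spectrum of its nasalized version
print(round(spectral_distance(oral, nasal), 3))  # 0.346
```

A larger oral-vs-nasalized distance in a given channel (nose, mouth, or monophonic) would indicate that the channel carries more of the nasalization contrast.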