Sona Patel

ORCID: 0000-0002-0973-9528
Research Areas
  • Speech and Audio Processing
  • Neuroscience and Music Perception
  • Phonetics and Phonology Research
  • Voice and Speech Disorders
  • Speech Recognition and Synthesis
  • Stuttering Research and Treatment
  • Hearing Loss and Rehabilitation
  • Traumatic Brain Injury Research
  • Neurobiology of Language and Bilingualism
  • Neural and Behavioral Psychology Studies
  • Assistive Technology in Communication and Mobility
  • Music and Audio Processing
  • Interpreting and Communication in Healthcare
  • Ultrasonics and Acoustic Wave Propagation
  • Emergency and Acute Care Studies
  • EEG and Brain-Computer Interfaces
  • Phonocardiography and Auscultation Techniques
  • Motor Control and Adaptation
  • Emotion and Mood Recognition
  • Animal Vocal Communication and Behavior
  • Free Will and Agency
  • Media, Gender, and Advertising
  • Advanced Adaptive Filtering Techniques
  • Lower Extremity Biomechanics and Pathologies
  • Neurological Disorders and Treatments

Seton Hall University
2013-2024

Northwestern University
2013-2023

Center for Discovery
2023

Pramukhswami Medical College
2021

Instituto do Sono
2018

New Jersey Institute of Technology
2016

University of Geneva
2010-2014

University of Florida
2006-2012

Google (United States)
2007-2012

Emotions have strong effects on the voice production mechanisms and, consequently, on voice characteristics. The magnitude of these effects, measured using voice source parameters, and the interdependencies among these parameters have not been examined. To better understand these relationships, voice source characteristics were analyzed in 10 actors' productions of a sustained /a/ vowel in five emotions. Twelve acoustic parameters were studied, grouped according to their physiological backgrounds: three related to subglottal pressure, and others to the transglottal airflow waveform derived...

10.1109/t-affc.2011.14 article EN IEEE Transactions on Affective Computing 2011-06-17
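
As a rough illustration of this kind of analysis, the sketch below computes coarse acoustic measures from a sustained /a/ vowel with librosa. It is a minimal sketch only: the study itself uses voice source parameters derived by inverse filtering, whereas this example computes simpler proxies (fo statistics and RMS level), and the file name is a hypothetical placeholder.

```python
# Illustrative sketch: coarse acoustic measures of a sustained /a/ vowel.
# The study uses inverse-filter-derived source parameters; these are proxies.
# "emotion_a_vowel.wav" is a hypothetical file name.
import librosa
import numpy as np

y, sr = librosa.load("emotion_a_vowel.wav", sr=None)

# Fundamental frequency contour via probabilistic YIN.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
)
f0_voiced = f0[voiced_flag]

# RMS energy as a rough intensity correlate.
rms = librosa.feature.rms(y=y)[0]

features = {
    "fo_mean_hz": float(np.nanmean(f0_voiced)),
    "fo_sd_hz": float(np.nanstd(f0_voiced)),
    "rms_mean_db": float(20 * np.log10(np.mean(rms) + 1e-10)),
}
print(features)
```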

Impaired expression of emotion through the pitch, loudness, rate, and rhythm of speech (affective prosody) is common and disabling after right hemisphere (RH) stroke. These deficits impede all social interactions. Previous studies have identified cortical areas associated with impairments in the expression, recognition, or repetition of affective prosody, but not the critical white matter tracts. We hypothesized that: 1) differences across patients in specific acoustic features correlate with listener judgments of prosody...

10.3389/fneur.2018.00224 article EN cc-by Frontiers in Neurology 2018-04-06

Although the neural basis for the perception of vocal emotions has been described extensively, that for expression is almost unknown. Here, we asked participants both to repeat and to express high-arousing angry vocalizations on command (i.e., evoked expressions). First, repeated expressions elicited activity in the left middle superior temporal gyrus (STG), pointing to a short auditory memory trace for the repetition of expressions. Evoked expressions activated the hippocampus, suggesting the retrieval of long-term stored scripts. Secondly, compared...

10.1093/cercor/bhu074 article EN Cerebral Cortex 2014-04-15

Previous research has shown that vocal errors can be simulated using a pitch perturbation technique. Two types of responses are observed when subjects are asked to ignore changes in pitch during steady vowel production: a compensatory response countering the direction of the perceived change, and a following response in the same direction as the perturbation. The present study investigated the nature of these responses by asking subjects to volitionally change their voice fundamental frequency either opposite to ("opposing" group) or in the same direction as ("following" group) the shifts (±100 cents, 1000 ms)...

10.1121/1.4870490 article EN The Journal of the Acoustical Society of America 2014-05-01
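
To make the perturbation parameters concrete, the sketch below applies a ±100-cent shift to a 1000-ms vowel segment offline with librosa. This is only an illustration of the cents-to-semitones arithmetic; actual experiments shift the talker's auditory feedback in real time, and the input/output file names here are hypothetical.

```python
# Offline illustration of a +/-100-cent pitch shift (1 semitone = 100 cents)
# applied to a 1000-ms vowel segment. Real perturbation studies shift the
# auditory feedback in real time; "vowel.wav" is a hypothetical file.
import librosa
import soundfile as sf

SHIFT_CENTS = 100          # perturbation magnitude from the study
SHIFT_DUR_S = 1.0          # perturbation duration (1000 ms)

y, sr = librosa.load("vowel.wav", sr=None)
segment = y[: int(SHIFT_DUR_S * sr)]

# librosa expresses shifts in semitones; 100 cents == 1 semitone.
shifted_up = librosa.effects.pitch_shift(segment, sr=sr, n_steps=SHIFT_CENTS / 100)
shifted_down = librosa.effects.pitch_shift(segment, sr=sr, n_steps=-SHIFT_CENTS / 100)

sf.write("vowel_up_100c.wav", shifted_up, sr)
sf.write("vowel_down_100c.wav", shifted_down, sr)
```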

Perception of Dysphonic Vocal Quality: Some Thoughts and Research Update. Sona Patel and Rahul Shrivastav, Department of Communication Sciences and Disorders, University of Florida, Gainesville, FL.

10.1044/vvd17.2.3 article EN Perspectives on Voice and Voice Disorders 2007-07-01

In this experiment, a single comparison stimulus was developed as a reference in a perceptual matching task for the quantification of breathy voice quality. Perceptual judgments of a set of synthetic samples were compared to previous data obtained using multiple comparison stimuli "customized" to different voices (Patel, Shrivastav, & Eddins, 2010). Five male and 5 female vowel /a/ samples were selected from the Kay Elemetrics Disordered Voice Database and resynthesized with a Klatt synthesizer. Eleven stimuli were created from each base by manipulating...

10.1044/1092-4388(2011/10-0337) article EN Journal of Speech Language and Hearing Research 2012-01-04

The perception of breathiness in vowels is cued by multiple acoustic cues, including changes in aspiration noise (AH) and the open quotient (OQ) [Klatt and Klatt, J. Acoust. Soc. Am. 87(2), 820–857 (1990)]. A loudness model can be used to determine the extent to which AH masks the harmonic components of the voice. The resulting "partial loudness" (PL) and "noise loudness" (NL) have been shown to be good predictors of perceived breathiness [Shrivastav and Sapienza, J. Acoust. Soc. Am. 114(1), 2217–2224 (2003)]. The levels of AH and OQ were systematically manipulated for ten synthetic...

10.1121/1.3543993 article EN The Journal of the Acoustical Society of America 2011-03-01

Purpose: Perceptual estimates of voice quality obtained using rating scales are subject to contextual biases that influence how individuals assign numbers to estimate the magnitude of vocal quality. Because rating scales are commonly used in clinical settings, these assessments share the limitations of such scales. Instead, a matching task can be used to obtain objective measures of voice quality, thereby facilitating model development and tools for clinical use. Method: Twenty-seven listeners participated in at least 1 of 3 tests (named after their modulation...

10.1044/1092-4388(2012/11-0160) article EN Journal of Speech Language and Hearing Research 2012-02-24

Acoustic models of emotions may benefit from considering the underlying voice production mechanism. This study sought to describe emotional expressions according to physiological variations measured from the inverse-filtered glottal waveform, in addition to standard parameter extraction. An acoustic analysis was performed on a subset of /a/ vowels within the GEMEP database (10 speakers, 5 emotions). Of the 12 features computed, a repeated measures ANOVA showed significant main effects for 11 parameters. Subsequent principal...

10.21437/speechprosody.2010-239 article EN Speech prosody 2010-05-10
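
The principal component step mentioned above could look something like the following minimal sketch: PCA over a tokens-by-features matrix of acoustic/glottal measures. The random matrix is only a stand-in for the 12 features per /a/ token; it is not the study's data or pipeline.

```python
# Minimal PCA sketch over a tokens x features matrix of acoustic measures.
# The random matrix stands in for 12 features per /a/ token (hypothetical).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 12))               # hypothetical: 50 tokens, 12 features

X_std = StandardScaler().fit_transform(X)   # z-score each feature
pca = PCA()
scores = pca.fit_transform(X_std)

print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))
print("loadings shape (features x PCs):", pca.components_.T.shape)
```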

Research has shown that people who are instructed to volitionally respond to pitch-shifted feedback either produce responses that follow the shift direction with a short latency of 100–200 ms or oppose it with longer latencies of 300–400 ms. This difference in response latency prompted a comparison of three groups of vocalists with differing abilities: non-trained English-speaking subjects, Mandarin-speaking subjects, and trained singers. All subjects produced following and long-latency opposing responses, which in most cases were preceded by a shorter-latency response...

10.1121/1.5134769 article EN The Journal of the Acoustical Society of America 2019-12-01
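
A simple way to picture the opposing/following distinction is shown below: classify a trial from its fo trace and estimate the response latency relative to the shift onset. The 2-SD baseline threshold and the synthetic trace are illustrative assumptions, not the published analysis.

```python
# Sketch: classify a pitch-shift trial as "opposing" or "following" and
# estimate response latency from an fo trace. Threshold choice (2 SD of the
# pre-shift baseline) and the synthetic trial are illustrative assumptions.
import numpy as np

def classify_response(f0_cents, t_ms, shift_onset_ms, shift_sign, n_sd=2.0):
    """f0_cents: fo trace relative to baseline, in cents; shift_sign: +1 or -1."""
    baseline = f0_cents[t_ms < shift_onset_ms]
    thresh = n_sd * baseline.std()
    post = t_ms >= shift_onset_ms
    deviated = np.abs(f0_cents - baseline.mean()) > thresh
    idx = np.where(post & deviated)[0]
    if idx.size == 0:
        return "no response", None
    latency_ms = t_ms[idx[0]] - shift_onset_ms
    direction = np.sign(f0_cents[idx[0]] - baseline.mean())
    label = "following" if direction == shift_sign else "opposing"
    return label, float(latency_ms)

# Hypothetical trial: +100-cent shift at 500 ms; talker opposes after ~300 ms.
t = np.arange(0, 1500, 5.0)
f0 = np.where(t > 800, -30.0, 0.0) + np.random.default_rng(1).normal(0, 3, t.size)
print(classify_response(f0, t, shift_onset_ms=500, shift_sign=+1))
```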

The pitch perturbation technique is a validated technique that has been used for over 30 years to understand how people control their voice. The technique involves altering a person's voice in real time while they produce a vowel (commonly, a prolonged /a/ sound). Although post-task changes in the voice have been observed in several studies (e.g., a change in mean fo across the duration of the experiment), the potential of using the technique as a training tool for voice regulation and/or modification has not been explored. The present study examined event-related potentials (ERPs) and...

10.1371/journal.pone.0269326 article EN cc-by PLoS ONE 2023-01-20
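
For readers unfamiliar with the ERP side of such a design, a minimal MNE-Python sketch of the epoching step is given below: segment continuous EEG around perturbation onsets and average to obtain an evoked response. The file name, stimulus channel, event code, filter band, and epoch window are assumptions for illustration, not the study's pipeline.

```python
# Minimal ERP sketch with MNE-Python: epoch EEG around perturbation onsets.
# File name, stim channel, event code, and analysis settings are assumptions.
import mne

raw = mne.io.read_raw_fif("perturbation_task_raw.fif", preload=True)
raw.filter(l_freq=1.0, h_freq=30.0)                 # basic band-pass

events = mne.find_events(raw, stim_channel="STI 014")
epochs = mne.Epochs(
    raw, events, event_id={"shift_onset": 1},
    tmin=-0.2, tmax=0.6, baseline=(None, 0), preload=True,
)
erp = epochs.average()                              # evoked response (ERP)
print(erp)
```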

Alterations in speech have long been identified as indicators of various neurologic conditions, including traumatic brain injury, neurodegenerative diseases, and stroke. The extent to which speech errors occur in milder injuries, such as sports-related concussions, is unknown. The present study examined speech error rates in student athletes after a concussion compared to pre-injury performance in order to determine the presence of relevant characteristics and changes in speech production in this less easily detected condition. A within-subjects...

10.3389/fpsyg.2023.1135441 article EN cc-by Frontiers in Psychology 2023-03-07
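
The within-subjects comparison implied by this design can be sketched as follows: compute an error rate per 100 syllables at baseline and post-injury, then compare the paired samples. The numbers below are fabricated placeholders, not study data, and the paired t-test is only one reasonable choice of test.

```python
# Sketch of a within-subjects error-rate comparison (baseline vs. post-injury).
# Counts below are fabricated placeholders, not study data.
import numpy as np
from scipy import stats

errors_pre = np.array([2, 1, 3, 0, 2, 1])       # hypothetical error counts
errors_post = np.array([4, 3, 5, 2, 3, 4])
syllables = np.array([400, 380, 420, 390, 410, 400])

rate_pre = 100 * errors_pre / syllables          # errors per 100 syllables
rate_post = 100 * errors_post / syllables

t_stat, p_val = stats.ttest_rel(rate_post, rate_pre)
print(f"mean change = {np.mean(rate_post - rate_pre):.2f} errors/100 syll, "
      f"t = {t_stat:.2f}, p = {p_val:.3f}")
```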

Models of emotional prosody based on perception have typically required listeners to rate expressions according to the psychological dimensions (arousal, valence, and power). We propose a perception-based model without assuming that these dimensions are those used by listeners to differentiate prosody. Instead, multidimensional scaling is used to identify three perceptual dimensions, which are then regressed onto a dynamic feature set that does not require training or normalization to the speaker's "neutral" expression. The predictions for...

10.21437/interspeech.2011-740 article EN Interspeech 2011 2011-08-27
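
The two modeling steps described above (perceptual space via multidimensional scaling, then regression onto acoustic features) can be sketched with scikit-learn as below. The dissimilarity matrix and feature matrix are random stand-ins, not the study's listener data or feature set.

```python
# Sketch: MDS on listener dissimilarities, then regress each perceptual
# dimension onto acoustic features. All matrices are random stand-ins.
import numpy as np
from sklearn.manifold import MDS
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_expr, n_feat = 30, 8

# Hypothetical listener-derived dissimilarities (symmetric, zero diagonal).
d = rng.random((n_expr, n_expr))
dissim = (d + d.T) / 2
np.fill_diagonal(dissim, 0.0)

# Three perceptual dimensions from MDS on the precomputed dissimilarities.
mds = MDS(n_components=3, dissimilarity="precomputed", random_state=0)
perceptual_dims = mds.fit_transform(dissim)

# Regress each perceptual dimension onto a dynamic acoustic feature set.
acoustic = rng.normal(size=(n_expr, n_feat))
for k in range(3):
    reg = LinearRegression().fit(acoustic, perceptual_dims[:, k])
    r2 = reg.score(acoustic, perceptual_dims[:, k])
    print(f"dimension {k + 1}: R^2 = {r2:.2f}")
```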

Parkinson’s disease often results in a hypokinetic dysarthria, causing well-known effects on speech and voice. Speech from individuals with Parkinson’s disease is typically studied through listening tasks (i.e., by family members or naïve listeners) known to affect intelligibility. However, the exact changes and degradations in quality have not been defined. Previous investigations have focused on standard measures of mean fundamental frequency, intensity, and rate. We hypothesize that changes are present at the prosodic level in addition to the acoustic...

10.1121/1.4971102 article EN The Journal of the Acoustical Society of America 2016-10-01

Individuals with Parkinson's disease (PD) exhibit a variety of impairments in nonmotor symptoms, including emotional processing and cognitive control, that have implications for speech production. The present study sought to investigate whether these impairments in individuals with PD impact sentence production as indicated by changes in speech rate. Thirty-six participants (20 PD, 16 healthy controls) completed subtests 8A and 8B of the Florida Emotional Expressive Battery (FEEB) to elicit samples in five different tones (happy, sad, angry,...

10.1055/s-0044-1788767 article EN Seminars in Speech and Language 2024-07-31