CLEESE: An open-source audio-transformation toolbox for data-driven experiments in speech and music cognition
Keywords: Melody, Utterance
DOI: 10.1371/journal.pone.0205943
Publication Date: 2019-04-04T17:29:37Z
AUTHORS (5)
ABSTRACT
Over the past few years, the field of visual social cognition and face processing has been dramatically impacted by a series of data-driven studies employing computer-graphics tools to synthesize arbitrary, meaningful facial expressions. In the auditory modality, by contrast, reverse correlation has traditionally been used to characterize sensory processing at the level of spectral or spectro-temporal stimulus properties, but not higher-level cognitive processing of e.g. words, sentences or music, for lack of tools able to manipulate the stimulus dimensions that are relevant for these processes. Here, we present an open-source audio-transformation toolbox, called CLEESE, which systematically randomizes the prosody/melody of existing speech and music recordings. CLEESE works by cutting recordings into small successive time segments (e.g. every 100 milliseconds in a spoken utterance) and applying a random parametric transformation to each segment’s pitch, duration or amplitude, using a new Python-language implementation of the phase-vocoder digital audio technique. We present here two applications of the tool to generate stimuli for studying the intonation of interrogative vs. declarative speech, and the rhythm of sung melodies.
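The per-segment randomization described in the abstract can be illustrated with a minimal sketch: draw one random pitch offset (in cents) for each successive time segment, yielding a breakpoint function that a phase vocoder could then apply to the recording. This is an illustration only, not CLEESE's actual API; the function name and parameters below are hypothetical, and real use would pass such a breakpoint function to the toolbox's pitch-shifting engine.

```python
import random

def random_pitch_bpf(duration_s, segment_s=0.1, sd_cents=100.0, seed=None):
    """Hypothetical sketch of CLEESE-style randomization: one Gaussian
    pitch offset (in cents) per time segment of the recording.

    Returns a list of (time_s, offset_cents) breakpoints, one per
    segment boundary, suitable in principle for driving a phase vocoder.
    """
    rng = random.Random(seed)  # seeded for reproducible stimuli
    n_segments = round(duration_s / segment_s)
    return [(i * segment_s, rng.gauss(0.0, sd_cents))
            for i in range(n_segments + 1)]

# Example: a 1-second utterance cut every 100 ms -> 11 breakpoints
bpf = random_pitch_bpf(1.0, segment_s=0.1, sd_cents=100.0, seed=42)
```

Each call with a different seed yields a new random prosodic contour for the same recording, which is how such data-driven (reverse-correlation) experiments generate large stimulus sets from a single base utterance.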
SUPPLEMENTAL MATERIAL
REFERENCES (63)
CITATIONS (22)