Encoding of Natural Sounds at Multiple Spectral and Temporal Resolutions in the Human Auditory Cortex
KEYWORDS
Spectrogram
Natural sounds
Tonotopy
Temporal resolution
Auditory system
Temporal cortex
Auditory imagery
Computational model
Auditory scene analysis
Human brain
DOI:
10.1371/journal.pcbi.1003412
Publication Date:
2014-01-02T16:21:05Z
AUTHORS (7)
ABSTRACT
Functional neuroimaging research provides detailed observations of the response patterns that natural sounds (e.g. human voices and speech, animal cries, environmental sounds) evoke in the human brain. The computational and representational mechanisms underlying these observations, however, remain largely unknown. Here we combine high spatial resolution (3 and 7 Tesla) functional magnetic resonance imaging (fMRI) with computational modeling to reveal how natural sounds are represented in the human brain. We compare competing models of sound representations and select the model that most accurately predicts the measured fMRI response patterns to natural sounds. Our results show that the cortical encoding of natural sounds entails the formation of multiple representations of sound spectrograms with different degrees of spectral and temporal resolution. The cortex derives these multi-resolution representations through frequency-specific neural processing channels and through the combined analysis of the spectral and temporal modulations of the spectrogram. Furthermore, our findings suggest that a spectral-temporal trade-off may govern the modulation tuning of neuronal populations throughout the auditory cortex. Specifically, posterior/dorsal auditory regions preferably encode coarse spectral information with high temporal precision. Vice-versa, anterior/ventral auditory regions preferably encode fine-grained spectral information with low temporal precision. We propose that such a multi-resolution analysis may be crucially relevant for flexible and behaviorally-relevant processing of natural sounds and may constitute one of the computational underpinnings of functional specialization in the auditory cortex.
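The abstract describes representing sounds through spectral and temporal modulations of the spectrogram and selecting, among competing models, the one that best predicts fMRI response patterns. The Python sketch below is a rough illustration of that general idea only, not the authors' actual pipeline: it builds a small bank of 2-D Gabor-like modulation filters over a log spectrogram, summarizes each sound by its modulation energies, and fits a ridge-regression encoding model to simulated voxel responses. The filter sizes, modulation values, ridge penalty, and the random "sounds" and "responses" are all assumptions made for the example.

import numpy as np
from scipy.signal import stft, fftconvolve

rng = np.random.default_rng(0)
fs = 16000                      # sampling rate (Hz), assumed
n_sounds, dur = 40, 1.0         # 40 synthetic 1-second "sounds"
sounds = rng.standard_normal((n_sounds, int(fs * dur)))

def log_spectrogram(x, fs, nperseg=512, noverlap=384):
    """Log-magnitude STFT spectrogram with shape (n_freq, n_time)."""
    _, _, Z = stft(x, fs=fs, nperseg=nperseg, noverlap=noverlap)
    return np.log(np.abs(Z) + 1e-6)

def gabor_mod_filter(omega, rate, n_freq=21, n_time=21):
    """2-D Gabor tuned to a spectral modulation `omega` (cycles per
    frequency bin) and a temporal modulation `rate` (cycles per frame)."""
    f = np.arange(n_freq) - n_freq // 2
    t = np.arange(n_time) - n_time // 2
    F, T = np.meshgrid(f, t, indexing="ij")
    envelope = np.exp(-(F**2) / (2 * (n_freq / 6) ** 2)
                      - (T**2) / (2 * (n_time / 6) ** 2))
    carrier = np.cos(2 * np.pi * (omega * F + rate * T))
    return envelope * carrier

# A small bank spanning coarse-to-fine spectral and slow-to-fast temporal
# modulations (values chosen purely for illustration).
spectral_mods = [0.02, 0.05, 0.12]      # cycles / frequency bin
temporal_mods = [0.02, 0.06, 0.15]      # cycles / time frame
bank = [gabor_mod_filter(o, r) for o in spectral_mods for r in temporal_mods]

def modulation_features(x):
    """Mean modulation energy per filter -> one feature vector per sound."""
    S = log_spectrogram(x, fs)
    return np.array([np.mean(np.abs(fftconvolve(S, h, mode="same")))
                     for h in bank])

X = np.stack([modulation_features(x) for x in sounds])   # (sounds, features)

# Simulated "voxel responses": a hidden linear readout of the features plus
# noise, standing in for measured fMRI activity patterns.
n_voxels = 10
W_true = rng.standard_normal((X.shape[1], n_voxels))
Y = X @ W_true + 0.1 * rng.standard_normal((n_sounds, n_voxels))

# Ridge-regression encoding model, trained on one half, tested on the other.
def ridge_fit(X, Y, lam=1.0):
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

train, test = slice(0, n_sounds // 2), slice(n_sounds // 2, None)
W = ridge_fit(X[train], Y[train])
pred = X[test] @ W
corr = [np.corrcoef(pred[:, v], Y[test][:, v])[0, 1] for v in range(n_voxels)]
print("median prediction accuracy (r):", round(float(np.median(corr)), 3))

In this kind of linearized encoding analysis, competing feature spaces (e.g. a plain frequency spectrum versus joint spectral-temporal modulation features) can be compared by how well each predicts held-out responses, which mirrors the model-selection logic summarized in the abstract.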