- Neuroscience and Music Perception
- Multisensory Perception and Integration
- Neural Dynamics and Brain Function
- Hearing Loss and Rehabilitation
- Functional Brain Connectivity Studies
- EEG and Brain-Computer Interfaces
- Neural and Behavioral Psychology Studies
- Action Observation and Synchronization
- Color Perception and Design
- Face Recognition and Perception
- Neural Networks and Applications
- Tactile and Sensory Interactions
- Olfactory and Sensory Function Studies
- Music and Audio Processing
- Neurobiology of Language and Bilingualism
- Advanced Neuroimaging Techniques and Applications
- Motor Control and Adaptation
- Phonetics and Phonology Research
- Visual Perception and Processing Mechanisms
- Technology and Human Factors in Education and Health
- Psychology of Moral and Emotional Judgment
- Autism Spectrum Disorder Research
- Cultural Differences and Values
- Face Recognition and Analysis
- Advanced MRI Techniques and Applications
- Aalto University, 2016-2025
- Espoo Music Institute, 1993-2020
- MIND Research Institute, 2020
- University of Helsinki, 1987-2019
- FishBase Information and Research Group, 2017
- Palo Alto University, 2017
- Turku PET Centre, 2014
- Helsinki Institute of Physics, 1997-2009
- Tampere University of Applied Sciences, 1989-2008
- Finland University, 2008
Musical training is known to modify cortical organization. Here, we show that such modifications extend to subcortical sensory structures and generalize to processing of speech. Musicians had earlier and larger brainstem responses than nonmusician controls to both speech and music stimuli presented in auditory and audiovisual conditions, evident as early as 10 ms after acoustic onset. Phase-locking to stimulus periodicity, which likely underlies perception of pitch, was enhanced in musicians and strongly correlated with length...
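The phase-locking measure mentioned in this abstract can be illustrated with a short offline computation: average the brainstem-response epochs (which cancels activity that is not phase-locked to the stimulus) and read off the spectral amplitude at the stimulus fundamental frequency. The sketch below is a generic illustration under these assumptions, not the authors' analysis pipeline; the function name, array shapes, and parameters are hypothetical.

```python
import numpy as np

def f0_phase_locking(epochs, fs, f0):
    """Crude index of phase-locking at the stimulus fundamental frequency.

    epochs: (n_trials, n_samples) brainstem-response epochs time-locked to
    stimulus onset; fs: sampling rate in Hz; f0: fundamental frequency in Hz.
    Averaging across trials cancels non-phase-locked activity, so the spectral
    amplitude of the averaged response at f0 reflects phase-locking strength.
    """
    avg = epochs.mean(axis=0)
    freqs = np.fft.rfftfreq(avg.size, d=1.0 / fs)
    amplitude = np.abs(np.fft.rfft(avg)) / avg.size
    return amplitude[np.argmin(np.abs(freqs - f0))]
```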
Sharing others’ emotional states may facilitate understanding their intentions and actions. Here we show that networks of brain areas “tick together” in participants who are viewing similar events in a movie. Participants’ brain activity was measured with functional MRI while they watched movies depicting unpleasant, neutral, and pleasant emotions. After scanning, the participants again continuously rated their experience of pleasantness–unpleasantness (i.e., valence) and arousal–calmness. Pearson’s correlation coefficient was used...
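A common way to quantify whether viewers' brains "tick together" is voxelwise inter-subject correlation based on Pearson's correlation coefficient. The sketch below is a generic leave-one-out variant, offered only to illustrate the idea; it is not the authors' exact analysis, and the variable names and array shapes are assumptions.

```python
import numpy as np

def leave_one_out_isc(data):
    """Voxelwise leave-one-out inter-subject correlation (ISC).

    data: array of shape (n_subjects, n_timepoints, n_voxels) holding
    preprocessed fMRI time courses (assumed non-constant over time).
    Each subject's voxel time courses are Pearson-correlated with the mean
    time courses of the remaining subjects; results are averaged over
    subjects to give one ISC value per voxel.
    """
    n_subj = data.shape[0]
    isc = np.zeros((n_subj, data.shape[2]))
    for s in range(n_subj):
        left_out = data[s]                                  # (time, voxels)
        others = data[np.arange(n_subj) != s].mean(axis=0)  # mean of the rest
        lo = (left_out - left_out.mean(0)) / left_out.std(0)
        ot = (others - others.mean(0)) / others.std(0)
        isc[s] = (lo * ot).mean(axis=0)                     # Pearson r per voxel
    return isc.mean(axis=0)
```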
Multichannel neuromagnetic recordings were used to differentiate signals from the human first (SI) and second (SII) somatosensory cortices and to define representations of the body surface in them. The responses of the contralateral SI, peaking at 20–40 ms, arose mainly from area 3b, where the representations of the leg, hand, fingers, lips and tongue agreed with earlier animal studies and neurosurgical stimulations on the convexial cortex of man. Representations of the five fingers were limited to a cortical strip of ∼2 cm in length. Responses of SII peaked at 100–140...
Categorical models of emotions posit neurally and physiologically distinct human basic emotions. We tested this assumption by using multivariate pattern analysis (MVPA) to classify brain activity patterns of 6 basic emotions (disgust, fear, happiness, sadness, anger, and surprise) in 3 experiments. Emotions were induced with short movies or mental imagery during functional magnetic resonance imaging. MVPA accurately classified emotions induced by both methods, and the classification generalized from one induction condition to another...
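Cross-condition MVPA of the kind described here can be illustrated in a few lines: train a linear classifier on activity patterns from one induction condition (movies) and test it on patterns from the other (imagery). The sketch below uses scikit-learn with synthetic data; it is a minimal illustration under assumed variable names and shapes, not the study's pipeline.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

# Hypothetical data: one activity pattern per trial, 6 emotion labels (0-5).
rng = np.random.default_rng(0)
X_movie = rng.normal(size=(60, 500))      # movie-induced patterns (trials x voxels)
y_movie = rng.integers(0, 6, size=60)
X_imagery = rng.normal(size=(60, 500))    # imagery-induced patterns
y_imagery = rng.integers(0, 6, size=60)

# Train on one induction condition, test on the other (cross-condition MVPA).
clf = make_pipeline(StandardScaler(), LinearSVC())
clf.fit(X_movie, y_movie)
pred = clf.predict(X_imagery)
print("cross-condition accuracy:", accuracy_score(y_imagery, pred))
```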
Neuromagnetic responses were recorded to frequent "standard" tones of 1000 Hz and infrequent 1100-Hz "deviant" tones with a 24-channel planar SQUID gradiometer. Stimuli were presented at constant interstimulus intervals (ISIs) ranging from 0.75 to 12 sec. The standards evoked a prominent 100-msec response, N100m, which increased in amplitude with increasing ISI. N100m could be dissociated into two subcomponents with different source areas. The posterior component, N100m(2), still increased when the ISI grew up to 6 sec, whereas...
Human neuroimaging studies suggest that localization and identification of relevant auditory objects are accomplished via parallel parietal-to-lateral-prefrontal “where” and anterior-temporal-to-inferior-frontal “what” pathways, respectively. Using combined hemodynamic (functional MRI) and electromagnetic (magnetoencephalography) measurements, we investigated whether such dual pathways exist already in the human nonprimary auditory cortex, as suggested by animal models, and whether selective attention facilitates sound...
Despite the abundant data on brain networks processing static social signals, such as pictures of faces, the neural systems supporting social perception in naturalistic conditions are still poorly understood. Here we delineated the networks subserving social perception in 19 healthy humans who watched, during 3-T functional magnetic resonance imaging (fMRI), a set of 137 short (approximately 16 s each, total 27 min) audiovisual movie clips depicting pre-selected social signals. Two independent raters estimated how well each clip...
Using functional magnetic resonance imaging in awake behaving monkeys, we investigated how species-specific vocalizations are represented in auditory and auditory-related regions of the macaque brain. We found clusters of active voxels along the ascending auditory pathway that responded to various types of complex sounds: inferior colliculus (IC), medial geniculate nucleus (MGN), auditory core, belt, and parabelt cortex, and other parts of the superior temporal gyrus (STG) and sulcus (STS). Regions sensitive to monkey calls were most prevalent...
The size of human social networks significantly exceeds the size of the network that can be maintained by grooming or touching in other primates. It has been proposed that endogenous opioid release after laughter would provide a neurochemical pathway supporting long-term relationships in humans (Dunbar, 2012), yet this hypothesis currently lacks direct neurophysiological support. We used PET and the μ-opioid-receptor (MOR)-specific ligand [11C]carfentanil to quantify laughter-induced opioid release in 12 healthy males. Before the scan,...
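In [11C]carfentanil PET studies, endogenous opioid release is typically inferred from a reduction in the radioligand's binding potential between a baseline scan and a post-manipulation scan, because released opioids compete with the tracer for mu-opioid receptors. The sketch below is only a generic illustration of that comparison, not the study's quantification model; the function and its inputs are hypothetical.

```python
def percent_bp_change(bp_baseline, bp_laughter):
    """Percent change in [11C]carfentanil binding potential (BP_ND) between a
    baseline scan and a scan following laughter (hypothetical regional BP_ND
    estimates). A decrease in BP_ND is conventionally interpreted as increased
    endogenous opioid release.
    """
    return 100.0 * (bp_baseline - bp_laughter) / bp_baseline
```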
Recent studies have yielded contradictory evidence on whether visual speech perception (watching articulatory gestures) can activate the human primary auditory cortex. To circumvent confounds due to inter-individual anatomical variation, we defined our subjects' Heschl's gyri and assessed blood oxygenation-dependent signal changes at 3 T within this confined region during observation of visual speech and of moving circles. Visual speech activated this region in nine subjects, with activation in seven of them extending to area ... Activation...
Standard tones of 1000 Hz and deviant tones of 1250 Hz were presented in random order, 1 stimulus/second. The probabilities of the standards and deviants were 90% and 10%, respectively. In one condition the subject counted the deviant stimuli; in the other he/she read a comic book. ERPs were separately averaged to 1) the standard preceding a deviant, 2) the “first deviant”, preceded by at least 4 standards, 3) the “second deviant” (an occasional deviant immediately following a “first deviant”), 4) the first ..., 5) the second deviant ..., 6) ..., 7) ... deviant. It was found that the mismatch negativity...
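The mismatch negativity referred to here is conventionally obtained as a deviant-minus-standard difference wave computed from the separately averaged ERPs. The sketch below illustrates that computation on epoched data; the function name, latency window, and array shapes are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def mismatch_negativity(standard_epochs, deviant_epochs, fs, window=(0.10, 0.25)):
    """Deviant-minus-standard difference wave and its mean amplitude.

    standard_epochs, deviant_epochs: (n_trials, n_samples) baseline-corrected
    ERP epochs time-locked to stimulus onset; fs: sampling rate in Hz;
    window: latency range in seconds over which the amplitude is averaged.
    Returns the full difference wave and its mean value within the window
    (expected to be negative when an MMN is present).
    """
    difference = deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)
    t = np.arange(difference.size) / fs
    mask = (t >= window[0]) & (t <= window[1])
    return difference, difference[mask].mean()
```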
Infrequent "deviant" auditory stimuli embedded in a homogeneous sequence of "standard" sounds evoke a neuromagnetic mismatch field (MMF), which is assumed to reflect automatic change detection in the brain. We investigated whether MMFs would reveal hemispheric differences in cortical processing. Seven healthy adults were studied with a whole-scalp neuromagnetometer. The sound sequence, delivered to one ear at a time, contained three infrequent deviants (differing from the standards in duration, frequency, or...