- EEG and Brain-Computer Interfaces
- Blind Source Separation Techniques
- Neural dynamics and brain function
- Hearing Loss and Rehabilitation
- Speech and Audio Processing
- Neural Networks and Applications
- CCD and CMOS Imaging Sensors
- Sparse and Compressive Sensing Techniques
- Distributed Sensor Networks and Detection Algorithms
- Analog and Mixed-Signal Circuit Design
- Neuroscience and Music Perception
- Multisensory perception and integration
- Advanced Neuroimaging Techniques and Applications
- Target Tracking and Data Fusion in Sensor Networks
- Gaze Tracking and Assistive Technology
- Functional Brain Connectivity Studies
- Advanced Memory and Neural Computing
- Tensor decomposition and applications
- Advanced Adaptive Filtering Techniques
- Tactile and Sensory Interactions
- Neural Networks and Reservoir Computing
- Indoor and Outdoor Localization Technologies
- Music and Audio Processing
- Neural and Behavioral Psychology Studies
KU Leuven
2018-2025
Dynamic Systems (United States)
2024-2025
Institute of Electrical and Electronics Engineers
2021
Signal Processing (United States)
2021
Université Grenoble Alpes
2021
Objective: Noise reduction algorithms in current hearing devices lack information about the sound source a user attends to when multiple sources are present. To resolve this issue, they can be complemented with auditory attention decoding (AAD) algorithms, which decode the attended source using electroencephalography (EEG) sensors. State-of-the-art AAD algorithms employ a stimulus reconstruction approach, in which the envelope of the attended source is reconstructed from the EEG and correlated with the envelopes of the individual sources. This approach, however, performs poorly...
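The stimulus reconstruction approach described above can be sketched in a few lines: reconstruct the envelope from the EEG with a linear decoder and pick the candidate envelope with the highest correlation. This is a minimal illustration on synthetic data; the function name and the simple per-sample (lag-free) decoder are assumptions, not the method of any specific paper.

```python
import numpy as np

def decode_attention(eeg, env_a, env_b, decoder):
    """Toy stimulus-reconstruction AAD step (illustrative helper).

    eeg     : (T, C) EEG segment, T samples, C channels
    env_a/b : (T,) candidate speech envelopes
    decoder : (C,) spatial decoder mapping EEG to an envelope estimate
    Returns 0 or 1: the index of the envelope that correlates best
    with the reconstruction.
    """
    recon = eeg @ decoder  # reconstruct the attended envelope
    corrs = [np.corrcoef(recon, e)[0, 1] for e in (env_a, env_b)]
    return int(np.argmax(corrs))

# Synthetic demo: EEG that linearly encodes envelope A plus noise.
rng = np.random.default_rng(0)
T, C = 1000, 8
env_a = rng.standard_normal(T)
env_b = rng.standard_normal(T)
mix = rng.standard_normal(C)  # assumed forward model
eeg = np.outer(env_a, mix) + 0.1 * rng.standard_normal((T, C))

# Supervised least-squares decoder, trained on the attended envelope.
decoder = np.linalg.lstsq(eeg, env_a, rcond=None)[0]
print(decode_attention(eeg, env_a, env_b, decoder))  # 0, i.e. speaker A
```

In practice such decoders use time-lagged EEG and are trained on labeled data, which is exactly the supervision requirement several of the abstracts below aim to relax.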
Abstract Objective. Spatial auditory attention decoding (Sp-AAD) refers to the task of identifying the direction of the speaker to which a person is attending in a multi-talker setting, based on the listener's neural recordings, e.g. electroencephalography (EEG). The goal of this study is to thoroughly investigate potential biases when training such Sp-AAD decoders on EEG data, particularly eye-gaze biases and latent trial-dependent confounds, which may result in models that decode eye-gaze or trial-specific fingerprints rather than spatial...
In a multi-speaker scenario, a hearing aid lacks information on which speaker the user intends to attend to, and therefore it often mistakenly treats the latter as noise while enhancing an interfering speaker. Recently, it has been shown that it is possible to decode the attended speaker from brain activity, e.g., recorded by electroencephalography sensors. While numerous of these auditory attention decoding (AAD) algorithms have appeared in the literature, their performance is generally evaluated in a non-uniform manner. Furthermore,...
Selective auditory attention decoding (AAD) algorithms process brain data such as electroencephalography to decode which of multiple competing sound sources a person attends to. Example use cases are neuro-steered hearing aids or communication via brain-computer interfaces (BCI). Recently, it has been shown that it is possible to train AAD decoders based on stimulus reconstruction in an unsupervised setting, where no ground truth is available regarding which source is attended to. In many practical scenarios,...
Many studies have demonstrated that auditory attention to natural speech can be decoded from EEG data. However, most focus on selective auditory attention decoding (sAAD) with competing speakers, while the dynamics of absolute auditory attention decoding (aAAD) to a single target remain underexplored. The goal of aAAD is to measure the degree of attention to a single speaker, which has applications for objective measurements in psychological and educational contexts. To investigate this paradigm, we designed an experiment where subjects listened to a video lecture under varying...
We present a wireless EEG sensor network consisting of two miniature, wireless, behind-the-ear nodes with a size of 2 cm x 3 cm, each containing a 4-channel amplifier and a radio. Each node operates independently, having its own sampling clock, radio, and local reference electrode, with full electrical isolation from the other. The absence of a wire between the nodes enhances discreetness and flexibility in deployment, improves the miniaturization potential, and reduces artifacts. A third identical node acts as a USB dongle, which receives...
When multiple speakers talk simultaneously, a hearing device cannot identify which of these speakers the listener intends to attend to. Auditory attention decoding (AAD) algorithms can provide this information by, for example, reconstructing the attended speech envelope from electroencephalography (EEG) signals. However, stimulus reconstruction decoders are traditionally trained in a supervised manner, requiring a dedicated training stage during which the attended speaker is known. Pre-trained subject-independent decoders alleviate...
Auditory attention decoding (AAD) algorithms decode the auditory attention from electroencephalography (EEG) signals that capture the listener's neural activity. Such AAD methods are believed to be an important ingredient towards so-called neuro-steered assistive hearing devices. For example, traditional AAD decoders allow detecting which of multiple speakers a listener is attending to by reconstructing the amplitude envelope of the attended speech signal from the EEG signals. Recently, an alternative paradigm to this stimulus...
The goal of auditory attention decoding (AAD) is to determine which speaker out of multiple competing speakers a listener is attending to, based on the brain signals recorded via, e.g., electroencephalography (EEG). AAD algorithms are a fundamental building block of so-called neuro-steered hearing devices that would allow identifying which speaker should be amplified based on brain activity. A common approach is to train a subject-specific stimulus decoder that reconstructs the amplitude envelope of the attended speech signal. However, training this decoder requires...
People suffering from hearing impairment often have difficulties participating in conversations in so-called `cocktail party' scenarios with multiple people talking simultaneously. Although advanced algorithms exist to suppress background noise in these situations, a hearing device also needs information on which of the speakers the user actually aims to attend to. The correct (attended) speaker can then be enhanced using this information, and all others treated as noise. Recent neuroscientific advances have shown...
In brain-computer interface or neuroscience applications, generalized canonical correlation analysis (GCCA) is often used to extract correlated signal components in the neural activity of different subjects attending to the same stimulus. This allows quantifying so-called inter-subject correlations, boosting the signal-to-noise ratio of stimulus-following brain responses with respect to other (non-)neural activity. GCCA is, however, stimulus-unaware: it does not take the stimulus information into account and can therefore not cope...
Atrial fibrillation (AF) is the most common cardiac arrhythmia, increasing the risk of a stroke substantially. Hence, early and accurate detection of AF is paramount. We present a matrix- and tensor-based method for AF detection in single- and multi-lead electrocardiogram (ECG) signals. First, the recordings are compressed into one representative heartbeat via the singular value decomposition (SVD). In the single-lead case, these representative heartbeats are collected in a matrix with modes time x recording. In the multi-lead case, we obtain a tensor with modes time x lead x recording. By modeling...
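The SVD compression step mentioned above can be illustrated on synthetic data: stack the segmented beats of one recording as rows of a matrix, and take the dominant right singular vector (scaled by its singular value) as the representative heartbeat. This is a sketch under assumed beat segmentation; the variable names and the idealized sinusoidal beat shape are illustrative only.

```python
import numpy as np

# Sketch: compress an ECG recording (beats x samples) into one
# representative heartbeat via a rank-1 SVD truncation.
rng = np.random.default_rng(1)
template = np.sin(np.linspace(0, 2 * np.pi, 100))  # idealized beat shape
amps = 1 + 0.05 * rng.standard_normal(20)          # per-beat amplitude jitter
beats = np.outer(amps, template)
beats += 0.02 * rng.standard_normal(beats.shape)   # 20 noisy beats

U, s, Vt = np.linalg.svd(beats, full_matrices=False)
representative = s[0] * Vt[0]  # dominant temporal component

# The representative beat is highly correlated with the true template
# (up to an arbitrary sign, which the SVD does not fix).
r = abs(np.corrcoef(representative, template)[0, 1])
print(round(r, 2))
```

For multi-lead recordings, the abstract collects such representative beats into a time x lead x recording tensor, which a tensor decomposition can then factorize.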
Electroencephalography (EEG) is a widely used technology for recording brain activity in brain-computer interface (BCI) research, where understanding the encoding-decoding relationship between stimuli and neural responses is a fundamental challenge. Recently, there is growing interest in a natural single-trial setting, as opposed to the traditional BCI literature, where multi-trial presentations of synthetic stimuli are commonplace. While EEG responses to speech have been extensively studied, such stimulus-following responses to video footage...
Various new brain-computer interface technologies or neuroscience applications require decoding stimulus-following neural responses to natural stimuli such as speech and video from, e.g., electroencephalography (EEG) signals. In this context, generalized canonical correlation analysis (GCCA) is often used as a group analysis technique, which allows the extraction of correlated signal components from the neural activity of multiple subjects attending to the same stimulus. GCCA can be used to improve the signal-to-noise ratio relative...
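The group analysis idea behind GCCA can be sketched with the MAXVAR formulation: find a unit-norm shared component s and per-subject weights w_k minimizing the sum of ||X_k w_k - s||^2, whose solution takes s as the top eigenvector of the sum of the per-view projection matrices. The function below is an illustrative implementation of that textbook formulation, not the code of any of the listed papers; the regularization and variable names are assumptions.

```python
import numpy as np

def gcca_maxvar(views, reg=1e-6):
    """MAXVAR GCCA sketch: shared component s (unit norm) and per-view
    weights w_k minimizing sum_k ||X_k w_k - s||^2.

    views : list of (T, d_k) data matrices, one per subject.
    s is the top eigenvector of the sum of projection matrices
    P_k = X_k (X_k^T X_k)^{-1} X_k^T (with a small ridge term reg).
    """
    T = views[0].shape[0]
    P = np.zeros((T, T))
    for X in views:
        G = X.T @ X + reg * np.eye(X.shape[1])
        P += X @ np.linalg.solve(G, X.T)
    vals, vecs = np.linalg.eigh(P)
    s = vecs[:, -1]  # eigenvector of the largest eigenvalue
    ws = [np.linalg.lstsq(X, s, rcond=None)[0] for X in views]
    return s, ws

# Synthetic group data: 3 "subjects" whose multichannel signals all
# contain the same stimulus-following component plus independent noise.
rng = np.random.default_rng(2)
T = 500
stim = rng.standard_normal(T)
views = [np.outer(stim, rng.standard_normal(5))
         + 0.5 * rng.standard_normal((T, 5)) for _ in range(3)]

s, ws = gcca_maxvar(views)
r = abs(np.corrcoef(s, stim)[0, 1])  # shared component tracks the stimulus
print(r > 0.9)
```

Because plain GCCA only looks at the neural data, the recovered component is stimulus-unaware, which is precisely the limitation the stimulus-informed extensions in these abstracts address.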
Abstract Objective. In this study, we use electroencephalography (EEG) recordings to determine whether a subject is actively listening to a presented speech stimulus. More precisely, we aim to discriminate between an active listening condition and a distractor condition where subjects focus on an unrelated task while being exposed to the stimulus. We refer to this as absolute auditory attention decoding. Approach. We re-use an existing EEG dataset where the subjects watch a silent movie and introduce a new dataset with two distractor conditions (silently reading a text and performing...
Auditory attention decoding (AAD) is the process of identifying the attended speech in a multi-talker environment using brain signals, typically recorded through electroencephalography (EEG). Over the past decade, AAD has undergone continuous development, driven by its promising application in neuro-steered hearing devices. Most algorithms rely on the increased neural entrainment to the envelope of the attended speech, as compared to the unattended speech, in a two-step approach. First, the algorithm predicts representations of the signal envelopes;...
Selective attention enables humans to efficiently process visual stimuli by enhancing important locations or objects and filtering out irrelevant information. Locating visual attention is a fundamental problem in neuroscience with potential applications in brain-computer interfaces. Conventional paradigms often use synthetic static images, but stimuli in real life contain smooth, highly irregular dynamics. In this study, we show that these dynamics in natural videos can be decoded from electroencephalography (EEG) signals...
In a recent paper, we presented the KU Leuven audiovisual, gaze-controlled auditory attention decoding (AV-GC-AAD) dataset, in which we recorded electroencephalography (EEG) signals of participants attending to one out of two competing speakers under various audiovisual conditions. The main goal of this dataset was to disentangle the direction of gaze from auditory attention, in order to reveal gaze-related shortcuts in existing spatial AAD algorithms that aim to decode the (direction of) attention directly from the EEG. Various methods based on ... do not...
The authors have withdrawn their manuscript because they discovered an error in the analysis code after publication of the preprint, which turns out to have a major impact on the main results of the paper. The results on the imagination data become non-significant after correcting for the mistake. Significant results on the perception data are preserved, although classification performance is worse than what is reported. Therefore, the authors do not wish this work to be cited as a reference. If you have any questions, please contact the last author.