- Face Recognition and Perception
- Neural Dynamics and Brain Function
- Visual Perception and Processing Mechanisms
- Face Recognition and Analysis
- Functional Brain Connectivity Studies
- Visual Attention and Saliency Detection
- Memory and Neural Mechanisms
- Neural and Behavioral Psychology Studies
- Neural Networks and Applications
- Face and Expression Recognition
- EEG and Brain-Computer Interfaces
- Aesthetic Perception and Analysis
- Multisensory Perception and Integration
- Evolutionary Psychology and Human Behavior
- Spatial Cognition and Navigation
- Body Image and Dysmorphia Studies
- Image Retrieval and Classification Techniques
- Neuroscience and Music Perception
- Robotics and Automated Systems
- Climate Change Communication and Perception
- Mental Health and Psychiatry
- Advanced Data Compression Techniques
- Mental Health Research Topics
- Neural Networks Stability and Synchronization
- Domain Adaptation and Few-Shot Learning
Edge Hill University
2020-2024
University of Glasgow
2008-2020
Cognitive Neuroimaging Lab
2017
Scuola Internazionale Superiore di Studi Avanzati
2006-2010
University of Edinburgh
2002
Identification of participants at clinical high risk (CHR) for the development of psychosis is an important objective of current preventive efforts in mental health research. However, the utility of web-based screening approaches for detecting CHR at the population level has not been investigated. We tested such an approach to identify CHR individuals. Potential participants were invited to a website via e-mail invitations, flyers, and invitation letters involving both general and clinical services. Two thousand two hundred seventy-nine completed...
In humans, the N170 event-related potential (ERP) is an integrated measure of cortical activity that varies in amplitude and latency across trials. Researchers often conjecture that these variations reflect the mechanisms of stimulus coding for recognition. Here, to settle this conjecture and understand the underlying information processing mechanisms, we unraveled their function in possibly the simplest socially important natural visual task: face detection. On each experimental trial, 16 observers saw noise pictures sparsely sampled with small Gaussian...
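The sampling procedure described above can be illustrated with a short sketch. This is a minimal, hypothetical version that reveals a grayscale face image through a few small Gaussian apertures placed at random locations; the parameter names and values (number of apertures, aperture width) are illustrative assumptions, not the study's actual settings.

```python
import numpy as np

def sample_with_gaussian_apertures(image, n_apertures=10, sigma=6.0, rng=None):
    """Reveal a grayscale image through a few small Gaussian apertures at random
    locations; everywhere else the image is replaced by its mean (background)."""
    rng = np.random.default_rng(rng)
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    mask = np.zeros((h, w))
    for _ in range(n_apertures):
        cy, cx = rng.integers(0, h), rng.integers(0, w)
        mask += np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
    mask = np.clip(mask, 0.0, 1.0)                  # per-pixel visibility in [0, 1]
    stimulus = mask * image + (1.0 - mask) * image.mean()
    return stimulus, mask

# Example: stimulus, mask = sample_with_gaussian_apertures(face_image, rng=0)
```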
The model of the brain as an information processing machine is a profound hypothesis in which neuroscience, psychology and the theory of computation are now deeply rooted. Modern neuroscience aims to model the brain as a network of densely interconnected functional nodes. However, to understand the dynamic mechanisms of perception and cognition, it is imperative to understand these networks at the algorithmic level, i.e. the information flow that the nodes code and communicate. Here, using innovative methods (Directed Feature Information), we reconstructed examples of possible...
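Directed Feature Information is the authors' measure, and its estimator is not spelled out in this abstract. As a rough, hedged illustration of the general idea of feature-specific information flow between two nodes, the sketch below computes the feature information in a receiving node's current response that is redundant with a sending node's earlier response, using crude histogram-based (conditional) mutual information estimates; the function names, formula and estimators are placeholder assumptions, not the published definition.

```python
import numpy as np
from itertools import product

def mutual_info(x, y, bins=4):
    """Crude histogram estimate of I(X;Y) in bits over single trials."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    digit = lambda v: np.digitize(v, np.quantile(v, np.linspace(0, 1, bins + 1)[1:-1]))
    joint = np.zeros((bins, bins))
    for xi, yi in zip(digit(x), digit(y)):
        joint[xi, yi] += 1
    joint /= joint.sum()
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    return sum(joint[i, j] * np.log2(joint[i, j] / (px[i] * py[j]))
               for i, j in product(range(bins), range(bins)) if joint[i, j] > 0)

def conditional_mutual_info(x, y, z, bins=4):
    """I(X;Y|Z), approximated by stratifying trials on a discretized Z."""
    x, y, z = (np.asarray(v, float) for v in (x, y, z))
    zd = np.digitize(z, np.quantile(z, np.linspace(0, 1, bins + 1)[1:-1]))
    return sum((zd == zv).mean() * mutual_info(x[zd == zv], y[zd == zv], bins)
               for zv in np.unique(zd) if (zd == zv).sum() > bins)

def feature_flow_sketch(feature, sender_past, receiver_now, bins=4):
    """Placeholder DFI-style quantity: feature information in the receiver's
    current response that is redundant with the sender's earlier response."""
    return (mutual_info(feature, receiver_now, bins)
            - conditional_mutual_info(feature, receiver_now, sender_past, bins))
```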
A key to understanding visual cognition is to determine "where", "when", and "how" brain responses reflect the processing of the specific features that modulate categorization behavior: the "what". The N170 is the earliest Event-Related Potential (ERP) that preferentially responds to faces. Here, we demonstrate that a paradigmatic shift is necessary to interpret the N170 as the product of an information network that dynamically codes and transfers face features across hemispheres, rather than as a local stimulus-driven event. Reverse-correlation methods coupled...
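For readers unfamiliar with reverse correlation, its core logic fits in a few lines. Assuming per-trial visibility masks (as in the Gaussian-aperture example above) and binary observer responses, a classification image highlights the pixels whose visibility covaried with the response; this is a generic sketch, not the study's exact pipeline.

```python
import numpy as np

def classification_image(masks, responses):
    """Reverse correlation: mean sampling mask on 'face' responses minus mean
    mask on 'noise' responses; large positive pixels drove face detections."""
    masks = np.asarray(masks, dtype=float)          # trials x height x width
    responses = np.asarray(responses, dtype=bool)   # trials, True = "face seen"
    return masks[responses].mean(axis=0) - masks[~responses].mean(axis=0)
```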
Over the past decade, extensive studies of the brain regions that support face, object, and scene recognition suggest that these regions have a hierarchically organized architecture that spans the occipital and temporal lobes [1-14], where visual categorizations unfold over the first 250 ms of processing [15-19]. This same architecture is flexibly involved in multiple tasks that require task-specific representations, e.g. categorizing the same object as "a car" or "a Porsche." While we partly understand where and when categorizations happen in the occipito-ventral pathway, the next challenge is to...
Background: New global crises are emerging while existing ones remain unabated. Coping with climate change, with the radioactive water released into the Pacific Ocean after the Fukushima nuclear accident in Japan, and with the wars in Ukraine and the Middle East (hereafter referred to as wars) can each negatively affect the psychological health of young people, but little is known about the compounded impact of multiple crises. We aimed to examine: (1) the emotional responses of young people towards each crisis, and (2) how aggregate levels...
Sensory information from the external world is inherently ambiguous, necessitating prior experience as a constraint on perception. Prolonged exposure (adaptation) induces the perception of ambiguous morph faces as belonging to a category different from the adapted category, suggesting a sensitivity of the underlying neural codes to differences between the current input and recent experience. Using magnetoencephalography, we investigated the dynamics of such experience-dependent visual coding by focusing on the timing of responses to morphs after facial expression...
To understand visual cognition, it is imperative to determine when, how and with what information the human brain categorizes the visual input. Visual categorization consistently involves at least an early and a late stage: the occipito-temporal N170 event-related potential associated with stimulus encoding, and the parietal P300 involved in perceptual decisions. Here we sought to understand how the brain globally transforms its representations of face categories from their encoding to the later decision stage, over a 400 ms time window encompassing both events. We applied...
People track facial expression dynamics with ease to accurately perceive distinct emotions. Although the superior temporal sulcus (STS) appears to possess mechanisms for perceiving changeable facial attributes such as expressions, the nature of the underlying neural computations is not known. Motivated by novel theoretical accounts, we hypothesized that visual and motor areas represent expressions as anticipated motion trajectories. Using magnetoencephalography, we show that predictable transitions between fearful...
Previous studies have shown reductions of the functional magnetic resonance imaging (fMRI) signal in response to the repetition of specific visual stimuli. We examined how adaptation affects the neural responses associated with categorization behavior, using face aftereffects. Adaptation to a given facial category biases categorization towards non-adapted categories upon presentation of ambiguous morphs. We explored the hypothesis, posed by recent psychophysical studies, that these adaptation-induced categorizations are mediated...
A key challenge in neuroimaging remains to understand where, when, and now particularly how human brain networks compute over sensory inputs to achieve behavior. To study such dynamic algorithms from mass neural signals, we recorded the magnetoencephalographic (MEG) activity of participants who resolved the classic XOR, OR, and AND functions as overt behavioral tasks (N = 10 participants/task, N-of-1 replications). Each function requires a different computation over the same inputs to produce task-specific outputs. In...
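For reference, the three functions map the same pair of binary inputs onto different task-specific outputs; the truth tables below (with inputs coded 0/1, standing in for the visual stimuli used in the experiment) make the computational difference explicit.

```python
# Same inputs, three different required computations.
TASKS = {
    "XOR": lambda a, b: a ^ b,
    "OR":  lambda a, b: a | b,
    "AND": lambda a, b: a & b,
}

for name, f in TASKS.items():
    table = {(a, b): f(a, b) for a in (0, 1) for b in (0, 1)}
    print(name, table)   # e.g. XOR {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
```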
Fast and accurate face processing is critical for everyday social interactions, but it declines and becomes delayed with age, as measured by both neural and behavioral responses. Here, we addressed the challenge of understanding how aging changes information processing mechanisms to delay behavior. Young (20-36 years) and older (60-86 years) adults performed a basic social interaction task, detecting a face versus noise, while we recorded their electroencephalogram (EEG). In each participant, using a new information theoretic framework we reconstructed...
In the sciences of cognition, an influential idea is that the brain makes predictions about incoming sensory information to reduce its inherent ambiguity. In the visual hierarchy, this implies that content originating in memory, such as the identity of a face, propagates down the hierarchy to disambiguate stimulus information. However, understanding this powerful prediction-for-recognition mechanism will remain elusive until we uncover the content propagating from memory. Here, we address this foundational limitation with a task ubiquitous to humans: familiar...
Current models propose that the brain uses a multi-layered architecture to reduce the high-dimensional visual input into lower-dimensional representations that support face, object and scene categorizations. However, understanding the mechanisms of such information reduction for behavior remains challenging. We addressed this challenge using a novel information theoretic framework that quantifies the relationships between three key variables: single-trial stimulus features randomly sampled from an ambiguous scene, source-space MEG responses, and perceptual decision...
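The abstract names the three variables but not the estimator that relates them. One standard way to quantify how much stimulus-feature information in an MEG response is shared with the behavioral decision is interaction information (co-information); the sketch below is a self-contained, hedged illustration with a crude histogram estimator and a binary decision, and the variable names are illustrative only.

```python
import numpy as np

def mi_bits(x, y, bins=4):
    """Crude histogram estimate of I(X;Y) in bits."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

def co_information(feature, meg_response, decision, bins=4):
    """I(feature; MEG; decision) = I(F;M) - I(F;M|D) with a binary decision.
    Positive values flag feature information in the MEG response that is
    redundant with (i.e. read out into) the perceptual decision."""
    feature = np.asarray(feature, float)
    meg_response = np.asarray(meg_response, float)
    decision = np.asarray(decision, dtype=bool)
    i_fm = mi_bits(feature, meg_response, bins)
    i_fm_given_d = sum(d.mean() * mi_bits(feature[d], meg_response[d], bins)
                       for d in (decision, ~decision) if d.any())
    return i_fm - i_fm_given_d
```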
Current theories of cognition are cast in terms of information processing mechanisms that use mental representations. For example, consider the mechanisms of face identification that use representations to identify familiar faces under various conditions of pose, illumination and ageing, or to draw a resemblance between family members. Providing an explanation of these mechanisms thus relies on showing how their actual representational contents are used. Yet, representational contents are rarely characterized, which in turn hinders knowledge of the mechanisms. Here, we address this...
Adaptation aftereffects are the tendency to perceive an ambiguous target stimulus, which follows an adaptor, as different from that adaptor. A duration dependence of face adaptation has been demonstrated for adaptor durations of at least 500 ms, for identity-related judgments. Here we describe aftereffects of very brief (11.7-500 ms), backwardly masked adaptor faces on both expression and identity category judgments of faces. We find significant aftereffects for minimum adaptor durations of 23.5 ms for emotional expression and 47 ms for identity, but these are abolished by backward masking with inverted...
Facial expressions are a rich information source from which observers infer the emotional states of others. Despite much understanding about the brain regions that represent facial expressions, we do not yet know how representations of these face movements are transformed into judgments of emotions in the brain. We addressed this question in 5 participants who judged the emotion of individual face movements called Action Units (AUs) while we concurrently measured their brain activity using magnetoencephalography (MEG). Stimuli were animations...
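The abstract is cut off at the description of the stimuli, but the general approach of animating randomly selected Action Units can be sketched as follows. This is a hypothetical illustration: the AU subset, time-course shape, and parameter ranges are assumptions, not the generative model actually used in the study.

```python
import numpy as np

AUS = ["AU1", "AU2", "AU4", "AU6", "AU12", "AU15", "AU20", "AU25"]  # illustrative subset

def sample_au_animation(n_frames=30, max_aus=4, rng=None):
    """Map a random subset of Action Units to smooth activation curves in [0, 1]
    with random onset, peak latency and peak amplitude."""
    rng = np.random.default_rng(rng)
    chosen = rng.choice(AUS, size=rng.integers(1, max_aus + 1), replace=False)
    t = np.linspace(0.0, 1.0, n_frames)
    animation = {}
    for au in chosen:
        onset = rng.uniform(0.0, 0.3)
        peak = rng.uniform(onset + 0.2, 0.9)
        amplitude = rng.uniform(0.3, 1.0)
        width = (peak - onset) / 2.0
        curve = amplitude * np.exp(-((t - peak) ** 2) / (2 * width ** 2))
        curve[t < onset] = 0.0                      # AU starts moving at its onset
        animation[au] = curve
    return animation

# Example: sample_au_animation(rng=1) -> {"AU12": array([...]), ...}
```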
When we learn to discriminate between new faces, we memorize the information that best identifies and discriminates them. To study these face memories, we first parameterized the faces of 97 identities using a recursive sinusoidal basis (12 orientations, 2 polarities and 5 spatial frequencies) and computed the principal components of this space (excluding 4 identities). We trained participants to name the faces, and participants trained until they reached 100% accuracy (< 40 trials). By construction, the memorized diagnostic information enabling identification must be...
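A compact sketch of this kind of parameterization follows. The exact recursive construction of the sinusoidal basis is not given in the abstract, so the code simply builds plain sinusoidal gratings at the stated 12 orientations, 2 polarities and 5 spatial frequencies (octave spacing assumed), projects vectorized face images onto them, and takes principal components of the coefficients; treat it as an illustrative approximation rather than the study's actual face space.

```python
import numpy as np

def sinusoidal_basis(size=128, n_orientations=12, n_frequencies=5):
    """Bank of sinusoidal gratings: n_orientations x n_frequencies x 2 polarities,
    each flattened into one row of the basis matrix."""
    ys, xs = np.mgrid[0:size, 0:size] / size
    rows = []
    for k in range(n_frequencies):
        freq = 2.0 ** (k + 1)                       # assumed octave spacing
        for o in range(n_orientations):
            theta = np.pi * o / n_orientations
            grating = np.sin(2 * np.pi * freq * (xs * np.cos(theta) + ys * np.sin(theta)))
            rows.append(grating.ravel())            # positive polarity
            rows.append(-grating.ravel())           # negative polarity
    return np.array(rows)

def face_space(face_images, basis):
    """Project vectorized face images onto the basis, then compute the principal
    components of the resulting coefficient space."""
    coeffs = np.array([basis @ np.asarray(img, float).ravel() for img in face_images])
    coeffs -= coeffs.mean(axis=0)
    _, _, components = np.linalg.svd(coeffs, full_matrices=False)
    return coeffs, components
```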