- Face recognition and analysis
- Face and Expression Recognition
- Advanced Image and Video Retrieval Techniques
- Autism Spectrum Disorder Research
- Emotion and Mood Recognition
- Advanced Vision and Imaging
- Robotics and Sensor-Based Localization
- Image Retrieval and Classification Techniques
- Social Robot Interaction and HRI
- Video Surveillance and Tracking Methods
- Music and Audio Processing
- 3D Shape Modeling and Analysis
- Speech and Audio Processing
- Visual Attention and Saliency Detection
- Child Development and Digital Technology
- Language Development and Disorders
- Digital Mental Health Interventions
- Child and Adolescent Psychosocial and Emotional Development
- Virology and Viral Diseases
- Mental Health Research Topics
- Child and Animal Learning Development
- Technology and Human Factors in Education and Health
- Fractal and DNA sequence analysis
- Obsessive-Compulsive Spectrum Disorders
- Gaze Tracking and Assistive Technology
Children's Hospital of Philadelphia
2018-2024
Center for Autism and Related Disorders
2017-2024
Queen Mary University of London
2014-2017
Istanbul Technical University
2012-2013
Automatic affect analysis has attracted great interest in various contexts, including the recognition of action units and basic or non-basic emotions. In spite of major efforts, there are several open questions on what the important cues to interpret facial expressions are and how to encode them. In this paper, we review the progress across a range of applications to shed light on these fundamental questions. We analyse state-of-the-art solutions by decomposing their pipelines into components, namely face registration,...
The extraction of descriptive features from sequences of faces is a fundamental problem in facial expression analysis. Facial expressions are represented by psychologists as a combination of elementary movements known as action units: each movement is localised and its intensity is specified with a score that is small when the movement is subtle and large when it is pronounced. Inspired by this approach, we propose a novel data-driven feature extraction framework that represents expression variations as a combination of linear basis functions, whose coefficients are proportional to movement intensity....
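The idea of coefficients that scale with movement intensity can be illustrated with a toy least-squares decomposition (a minimal sketch using a synthetic random basis, not the learned bases of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((200, 4))  # hypothetical basis functions, one per column

# Two instances of the same elementary movement (third basis), at different intensities:
subtle = 0.3 * B[:, 2]
pronounced = 1.8 * B[:, 2]

def coefficients(x, B):
    """Least-squares coefficients of x in the basis B."""
    c, *_ = np.linalg.lstsq(B, x, rcond=None)
    return c

c_subtle = coefficients(subtle, B)
c_pronounced = coefficients(pronounced, B)
# The recovered coefficient scales with the intensity of the movement:
print(c_pronounced[2] / c_subtle[2])  # ~6.0
```

Because both signals lie exactly in the span of the basis, the coefficient of the active basis function is recovered exactly and grows linearly with intensity, mirroring the small-score/large-score convention of action-unit coding.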
Standardized, granular measurement of social communication behaviors, such as gaze during natural interactions, is needed for a range of psychiatric applications, including diagnosis and detecting clinical change in conditions such as autism. Computational approaches show promise for automatically measuring such behaviors within naturalistic settings. This study aims to measure gaze features from videos of dyadic conversations, characterize autism-related differences, and capture individual-level differences. Participants were 46 autistic and 36...
Local representations became popular for facial affect recognition as they efficiently capture image discontinuities, which play an important role in interpreting facial actions. We propose to use Zernike Moments (ZMs) [4] due to their useful and compact description of discontinuities and texture. Their main advantage in comparison to well-established alternatives such as Local Binary Patterns (LBPs) [5] is their flexibility in terms of the size and level of detail of the local description. We introduce a ZM-based representation that involves...
In this paper, we propose a new image representation called Local Zernike Moments (LZM) for face recognition. In recent years, local representations such as Gabor and Local Binary Patterns (LBP) have attracted great interest due to their success in handling the difficulties of face recognition. In this study, we aim to develop an alternative representation to further improve the recognition performance. We achieve this by utilizing Zernike moments, which have been successfully used as shape descriptors in character recognition, and modify the global moments to obtain a local representation, computing the moments at every pixel considering its...
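The core idea — computing a Zernike moment over the neighbourhood of every pixel rather than over the whole image — can be sketched in NumPy. This minimal illustration uses only the first-order moment Z_{1,1}; the actual LZM representation combines several moment components and further processing:

```python
import numpy as np

def zernike_v11(k):
    """Zernike basis V_{1,1}(rho, theta) = rho * exp(i*theta) on a k x k grid."""
    ax = np.linspace(-1, 1, k)
    x, y = np.meshgrid(ax, ax)
    rho = np.hypot(x, y)
    theta = np.arctan2(y, x)
    v = rho * np.exp(1j * theta)
    v[rho > 1] = 0  # Zernike polynomials are defined on the unit disk
    return v

def local_zernike_map(img, k=7):
    """|Z_{1,1}| over a k x k neighbourhood of every pixel (valid region only)."""
    v = np.conj(zernike_v11(k))
    h, w = img.shape
    out = np.zeros((h - k + 1, w - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = img[i:i + k, j:j + k]
            out[i, j] = np.abs((patch * v).sum()) * 2 / np.pi  # (n+1)/pi factor
    return out

# First-order moments respond to discontinuities, not to flat regions:
edge = np.zeros((32, 32)); edge[:, 16:] = 1.0
flat = np.ones((32, 32))
print(local_zernike_map(edge).max() > local_zernike_map(flat).max())  # True
```

By orthogonality of V_{1,1} to constants, a flat patch yields a (numerically) zero moment, while patches straddling the step edge respond strongly — the sensitivity to discontinuities that motivates the representation.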
Although automatic personality analysis has been studied extensively in recent years, it has not yet been adopted for real-time applications and daily life practices. To the best of our knowledge, this demonstration is a first attempt at predicting the widely used Big Five personality dimensions and a number of social dimensions from nonverbal behavioural cues in real time. The proposed system aims to analyse the behaviour of a person that interacts with a small humanoid robot through a live streaming camera, and delivers the predicted dimensions on the fly.
Communication with humans is a multi-faceted phenomenon where emotions, personality and non-verbal behaviours, as well as verbal behaviours, play a significant role, and human–robot interaction (HRI) technologies should respect this complexity to achieve efficient and seamless communication. In this paper, we describe the design and execution of five public demonstrations made with two HRI systems that aimed at automatically sensing and analysing human participants' behaviour and predicting their facial action units and expressions in real...
The Audio/Visual Mapping Personality Challenge and Workshop (MAPTRAITS) is a competition event organised to facilitate the development of signal processing and machine learning techniques for the automatic analysis of personality traits and social dimensions. MAPTRAITS includes two sub-challenges: the continuous space-time sub-challenge and the quantised sub-challenge. The space-time sub-challenge evaluated how well systems predict the variation of perceived dimensions in time, whereas the quantised challenge evaluated the ability to predict overall impressions from shorter video clips. To analyse the effect...
Julia Parish-Morris, Evangelos Sariyanidi, Casey Zampella, G. Keith Bartley, Emily Ferguson, Ashley A. Pallathra, Leila Bateman, Samantha Plate, Meredith Cola, Juhi Pandey, Edward S. Brodkin, Robert T. Schultz, Birkan Tunç. Proceedings of the Fifth Workshop on Computational Linguistics and Clinical Psychology: From Keyboard to Clinic. 2018.
The Audio/Visual Mapping Personality Challenge and Workshop (MAPTRAITS) is a competition event aimed at the comparison of signal processing and machine learning methods for the automatic visual, vocal and/or audio-visual analysis of personality traits and social dimensions, namely extroversion, agreeableness, conscientiousness, neuroticism, openness, engagement, facial attractiveness and likability. MAPTRAITS aims to bring forth existing efforts and major accomplishments in modelling these dimensions in both quantised...
Michael Hauser, Evangelos Sariyanidi, Birkan Tunc, Casey Zampella, Edward Brodkin, Robert Schultz, Julia Parish-Morris. Proceedings of the Sixth Workshop on Computational Linguistics and Clinical Psychology. 2019.
Head movements play a crucial role in social interactions. The quantification of communicative head movements such as nodding, shaking, orienting, and backchanneling is significant for behavioral and mental health research. However, the automated localization of head movements within videos remains challenging in computer vision due to their arbitrary start and end times, durations, and frequencies. In this work, we introduce a novel and efficient coding system for head movements, grounded in Birdwhistell's kinesics theory, to automatically identify basic...
Accurate face registration is a key step for several image analysis applications. However, existing registration methods are prone to temporal drift errors or jitter among consecutive frames. In this paper, we propose an iterative rigid registration framework that estimates the misalignment with trained regressors. The input of the regressors is a robust motion representation that encodes the motion between the misaligned frame and the reference frame(s), and enables reliable performance under non-uniform illumination variations. Drift errors are reduced when...
3D morphable model (3DMM) fitting on 2D data is traditionally done via unconstrained optimization with regularization terms to ensure that the result is a plausible face shape and is consistent with a set of landmarks. This paper presents inequality-constrained 3DMM fitting as a first alternative in optimization-based fitting. Inequality constraints on the 3DMM's coefficients ensure face-like shapes without modifying the objective function for smoothness, thus allowing more flexibility to capture person-specific details. Moreover,...
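The contrast between regularization and inequality constraints can be sketched with a generic bounded least-squares problem (synthetic matrices stand in for the 3DMM basis and landmark data; `lsq_linear` is an off-the-shelf bounded solver, not the paper's algorithm):

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 10))      # stand-in for the model basis / Jacobian
x_true = rng.uniform(-2, 2, 10)        # stand-in for the true shape coefficients
b = A @ x_true + 0.01 * rng.standard_normal(60)

# Tikhonov-regularized (unconstrained) fit: shrinks every coefficient toward zero,
# trading data fidelity for smoothness.
lam = 5.0
x_reg = np.linalg.solve(A.T @ A + lam * np.eye(10), A.T @ b)

# Inequality-constrained fit: coefficients kept inside a plausible box,
# while the data term itself is left untouched.
x_box = lsq_linear(A, b, bounds=(-3.0, 3.0)).x

print(np.linalg.norm(A @ x_box - b), np.linalg.norm(A @ x_reg - b))
```

Because the box contains the unconstrained optimum here, the constrained fit matches the data at least as well as the regularized one while still ruling out implausible (out-of-box) coefficients — the flexibility-versus-plausibility trade-off the abstract describes.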
Robotic telepresence aims to create a physical presence for a remotely located human (teleoperator) by reproducing their verbal and nonverbal behaviours (e.g. speech, gestures, facial expressions) on a robotic platform. In this work, we propose a novel teleoperation system that combines the replication of facial expressions of emotions (neutral, disgust, happiness, surprise) and head movements on the fly on the humanoid robot Nao. Robots' expression of emotions is constrained by their behavioural capabilities. As Nao has a static face, we use LEDs...
Advances in computational behavior analysis have the potential to increase our understanding of behavioral patterns and developmental trajectories in neurotypical individuals, as well as in individuals with mental health conditions marked by motor, social, and emotional difficulties. This study focuses on investigating how head movement patterns during face-to-face conversations vary with age from childhood through adulthood. We rely on computer vision techniques due to their suitability for quantifying social behaviors in naturalistic...
Autism spectrum disorder (ASD) is a neurodevelopmental condition characterized in part by difficulties in verbal and nonverbal social communication. Evidence indicates that autistic people, compared to neurotypical peers, exhibit differences in head movements, a key form of nonverbal communication. Despite the crucial role of head movements in communication, research on this cue is relatively scarce compared to other forms such as facial expressions and gestures. There is a need for scalable, reliable, and accurate instruments for measuring head movements directly within the context...
In this demo session, a real-time automatic face detection and recognition system will be demonstrated. The system, which is implemented as a desktop application with a user interface, detects the faces in images grabbed from a web camera using a cascaded classifier built on Modified Census Transform features. Then, with the same method, it locates the eyes and mouth on each face and uses this information to align the faces. Finally, it recognizes these aligned faces with a novel method called local Zernike moments. In order to improve...
This study proposes a novel image representation and demonstrates its advantages when used for face recognition. The proposed representation is obtained by computing the global Zernike moments, which are popular tools in object recognition, especially character recognition, locally at each pixel, thus decomposing the image into a set of images corresponding to different moment components. Our experiments on the FERET database indicate the superiority of the proposed method over methods employing Gabor or LBP representations.
The growing use of cameras embedded in autonomous robotic platforms and worn by people is increasing the importance of accurate global motion estimation (GME). However, existing GME methods may degrade considerably under illumination variations. In this paper, we address this problem by proposing a biologically-inspired method that achieves high accuracy in the presence of illumination variations. We mimic the early layers of the human visual cortex with spatio-temporal Gabor energy filters, adopting the pioneering model of Adelson and Bergen, and provide a closed-form...
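The property that makes Gabor energy attractive here — its invariance to the phase of the underlying pattern — can be demonstrated in one dimension (a generic quadrature-pair sketch in the spirit of the Adelson–Bergen model, not the paper's full spatio-temporal filter bank):

```python
import numpy as np

def gabor_energy(signal, f=0.1, sigma=8.0):
    """Quadrature-pair Gabor energy: even^2 + odd^2 filter responses (1-D sketch)."""
    t = np.arange(-25, 26)
    env = np.exp(-t**2 / (2 * sigma**2))
    even = env * np.cos(2 * np.pi * f * t)
    odd = env * np.sin(2 * np.pi * f * t)
    e = np.convolve(signal, even, mode="same")
    o = np.convolve(signal, odd, mode="same")
    return e**2 + o**2  # energy is (nearly) independent of the input's phase

x = np.arange(200)
g1 = np.cos(2 * np.pi * 0.1 * x)         # a grating
g2 = np.cos(2 * np.pi * 0.1 * x + 1.3)   # the same grating, phase-shifted
E1, E2 = gabor_energy(g1), gabor_energy(g2)
c = 50  # compare away from the convolution borders
dev = np.abs(E1 - E2)[c:-c].max() / E1[c:-c].max()
print(f"max relative deviation across phases: {dev:.4f}")
```

Summing the squared responses of an even and an odd (quadrature) filter cancels the phase term, so the energy is essentially unchanged when the pattern shifts — the kind of robustness to appearance changes that motivates an energy-based GME front end.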
Separating facial pose and expression within images requires a camera model for 3D-to-2D mapping. The weak perspective (WP) camera has been the most popular choice; it is the default, if not the only option, in state-of-the-art facial analysis methods and software. WP is justified by the supposition that its errors are negligible when subjects are relatively far from the camera, yet this claim has never been tested despite nearly 20 years of research. This paper critically examines the suitability of WP for separating pose and expression. First, we theoretically...
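The scale of WP errors can be probed numerically with a toy point cloud (illustrative numbers only — a face-sized box at two distances, with an assumed focal length; the paper's analysis is far more thorough):

```python
import numpy as np

def perspective(P, f=1000.0):
    """Full perspective projection: x = f * X / Z, per point."""
    return f * P[:, :2] / P[:, 2:3]

def weak_perspective(P, f=1000.0):
    """Weak perspective: every point is projected with the mean depth Z_bar."""
    return f * P[:, :2] / P[:, 2].mean()

rng = np.random.default_rng(0)
# A face-sized cloud (~18 cm wide, ~12 cm deep), in millimetres:
face = rng.uniform([-90, -120, -60], [90, 120, 60], size=(500, 3))

errs = {}
for dist in (450.0, 3000.0):  # 45 cm (webcam/selfie range) vs 3 m
    P = face + np.array([0.0, 0.0, dist])
    errs[dist] = np.abs(perspective(P) - weak_perspective(P)).max()
    print(f"distance {dist/1000:.2f} m: max WP error = {errs[dist]:.2f} image units")
```

The WP error per point scales roughly as f·X·ΔZ/Z², so it shrinks quadratically with distance: at close range the two projections disagree by tens of image units, while at 3 m the discrepancy nearly vanishes — which is exactly why the "subjects are far away" supposition matters and deserves testing.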