Franziska Mueller

ORCID: 0000-0003-2036-9238
Research Areas
  • Human Pose and Action Recognition
  • Hand Gesture Recognition Systems
  • Advanced Vision and Imaging
  • Video Surveillance and Tracking Methods
  • Human Motion and Animation
  • Advanced Neural Network Applications
  • Robot Manipulation and Learning
  • Medical Imaging Techniques and Applications
  • Advanced Memory and Neural Computing
  • Face recognition and analysis
  • Advanced MRI Techniques and Applications
  • Virtual Reality Applications and Impacts
  • Ferroelectric and Negative Capacitance Devices
  • Gait Recognition and Analysis
  • Medical Imaging and Analysis
  • Scientific and Engineering Research Topics
  • 3D Shape Modeling and Analysis
  • Acute Lymphoblastic Leukemia research
  • Anatomy and Medical Technology
  • Digital Radiography and Breast Imaging
  • Interactive and Immersive Displays
  • Generative Adversarial Networks and Image Synthesis
  • Adversarial Robustness in Machine Learning
  • Gaze Tracking and Assistive Technology
  • Congenital heart defects research

Google (United States)
2021-2024

Imperial College London
2023

LMU Klinikum
2022

Ludwig-Maximilians-Universität München
2022

Google (Switzerland)
2022

Max Planck Institute for Informatics
2015-2021

Max Planck Society
2017-2021

Heart and Diabetes Center North Rhine-Westphalia
2021

University of British Columbia
2020

Saarland University
2015-2019

We address the highly challenging problem of real-time 3D hand tracking based on a monocular RGB-only sequence. Our method combines a convolutional neural network with a kinematic 3D hand model, such that it generalizes well to unseen data, is robust to occlusions and varying camera viewpoints, and leads to anatomically plausible as well as temporally smooth hand motions. For training our CNN we propose a novel approach for the synthetic generation of training data that is based on a geometrically consistent image-to-image translation network. To be more...

10.1109/cvpr.2018.00013 article EN 2018-06-01
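
As a loose illustration of the CNN-plus-kinematic-model idea sketched in this abstract (not the paper's actual pipeline), the snippet below fits a toy two-bone kinematic chain to CNN-style keypoint predictions by minimizing a squared-distance energy; the bone lengths, the numerical gradient, and the `fit_to_keypoints` helper are all hypothetical.

```python
# Minimal sketch (not the paper's code): fit a toy two-bone kinematic "finger"
# to predicted 3D keypoints by gradient descent on its joint angles.
import numpy as np

BONE_LENGTHS = np.array([0.04, 0.03])  # metres; toy values

def forward_kinematics(angles):
    """3D joint positions of a planar two-bone chain."""
    a1, a2 = angles
    p1 = BONE_LENGTHS[0] * np.array([np.cos(a1), np.sin(a1), 0.0])
    p2 = p1 + BONE_LENGTHS[1] * np.array([np.cos(a1 + a2), np.sin(a1 + a2), 0.0])
    return np.stack([p1, p2])

def fit_to_keypoints(target, steps=500, lr=50.0, eps=1e-5):
    """Least-squares fit of joint angles to predicted keypoints (numerical gradients)."""
    angles = np.zeros(2)
    for _ in range(steps):
        base = np.sum((forward_kinematics(angles) - target) ** 2)
        grad = np.zeros_like(angles)
        for i in range(len(angles)):
            pert = angles.copy()
            pert[i] += eps
            grad[i] = (np.sum((forward_kinematics(pert) - target) ** 2) - base) / eps
        angles -= lr * grad  # step size tuned for the toy bone lengths above
    return angles

rng = np.random.default_rng(0)
# hypothetical "CNN prediction": ground-truth pose plus a little noise
predicted = forward_kinematics(np.array([0.6, -0.3])) + 0.001 * rng.standard_normal((2, 3))
print(fit_to_keypoints(predicted))  # roughly recovers [0.6, -0.3]
```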

We propose a new single-shot method for multi-person 3D pose estimation in general scenes from a monocular RGB camera. Our approach uses novel occlusion-robust pose-maps (ORPM) which enable full body pose inference even under strong partial occlusions by other people and objects in the scene. ORPM outputs a fixed number of maps which encode the 3D joint locations of all people in the scene. Body part associations [8] allow us to infer 3D pose for an arbitrary number of people without explicit bounding box prediction. To train our approach we introduce MuCo-3DHP, the first large...

10.1109/3dv.2018.00024 article EN 2018 International Conference on 3D Vision (3DV) 2018-09-01
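
To make the pose-map idea above concrete, here is a small hedged sketch (array names and shapes are illustrative, not the ORPM reference code): per-joint 2D heatmaps pick a pixel, and 3D coordinates are read out of fixed-size location maps at that pixel.

```python
# Illustrative readout of 3D joints from heatmaps + location maps
# (hypothetical shapes; not the ORPM reference implementation).
import numpy as np

def read_pose(heatmaps, location_maps):
    """heatmaps: (J, H, W); location_maps: (3, J, H, W) storing x/y/z per joint."""
    J, H, W = heatmaps.shape
    pose_3d = np.zeros((J, 3))
    for j in range(J):
        y, x = np.unravel_index(np.argmax(heatmaps[j]), (H, W))
        pose_3d[j] = location_maps[:, j, y, x]  # 3D coords stored at the 2D maximum
    return pose_3d

# Toy example with random maps
J, H, W = 17, 64, 64
pose = read_pose(np.random.rand(J, H, W), np.random.randn(3, J, H, W))
print(pose.shape)  # (17, 3)
```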

We present a real-time approach for multi-person 3D motion capture at over 30 fps using a single RGB camera. It operates successfully in generic scenes which may contain occlusions by objects and by other people. Our method operates in subsequent stages. The first stage is a convolutional neural network (CNN) that estimates 2D and 3D pose features along with identity assignments for all visible joints of all individuals. We contribute a new architecture for this CNN, called SelecSLS Net, that uses novel selective long and short range skip...

10.1145/3386569.3392410 article EN ACM Transactions on Graphics 2020-08-12
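
The abstract mentions selective long and short range skip connections; the following is only a schematic PyTorch sketch of concatenating short-range and block-input features before a 1x1 fusion convolution, with made-up layer sizes, not the published SelecSLS Net definition.

```python
# Schematic sketch of mixing short- and longer-range skip features
# (hypothetical layer sizes; not the published SelecSLS Net).
import torch
import torch.nn as nn

class TinySkipBlock(nn.Module):
    def __init__(self, in_ch, mid_ch, out_ch):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, mid_ch, 3, padding=1)
        self.conv2 = nn.Conv2d(mid_ch, mid_ch, 3, padding=1)
        # fuse the block input (longer-range w.r.t. conv2) with both conv outputs
        self.fuse = nn.Conv2d(in_ch + 2 * mid_ch, out_ch, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        a = self.act(self.conv1(x))   # short-range feature
        b = self.act(self.conv2(a))   # deeper feature
        return self.act(self.fuse(torch.cat([x, a, b], dim=1)))  # selective concat

block = TinySkipBlock(32, 16, 64)
print(block(torch.randn(1, 32, 56, 56)).shape)  # torch.Size([1, 64, 56, 56])
```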

We present an approach for real-time, robust and accurate hand pose estimation from moving egocentric RGB-D cameras in cluttered real environments. Existing methods typically fail for hand-object interactions in cluttered scenes imaged from egocentric viewpoints, which are common in virtual or augmented reality applications. Our approach uses two subsequently applied Convolutional Neural Networks (CNNs) to localize the hand and regress 3D joint locations. Hand localization is achieved by using a CNN to estimate the 2D position of the hand center in the input, even in the presence...

10.1109/iccv.2017.131 preprint EN 2017-10-01
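
A minimal sketch of the two-stage localize-then-regress idea described above, assuming hypothetical `localize_center` and `regress_joints` networks; only the cropping and data flow are spelled out here.

```python
# Sketch of a two-stage pipeline: 2D hand-center localization, crop, 3D regression.
# `localize_center` and `regress_joints` stand in for trained CNNs (hypothetical).
import numpy as np

def crop_around(image, center_xy, size=128):
    """Crop a fixed-size window around the predicted hand center (clamped to bounds)."""
    h, w = image.shape[:2]
    x = int(np.clip(center_xy[0] - size // 2, 0, max(w - size, 0)))
    y = int(np.clip(center_xy[1] - size // 2, 0, max(h - size, 0)))
    return image[y:y + size, x:x + size], (x, y)

def estimate_hand_pose(image, localize_center, regress_joints):
    center = localize_center(image)            # stage 1: 2D hand center
    crop, offset = crop_around(image, center)  # normalized input for stage 2
    joints_3d = regress_joints(crop)           # stage 2: 3D joint locations
    return joints_3d, offset

# Toy stand-ins so the sketch runs end to end:
frame = np.zeros((480, 640, 4), dtype=np.float32)  # RGB-D frame
joints, offset = estimate_hand_pose(
    frame,
    localize_center=lambda im: (320, 240),
    regress_joints=lambda crop: np.zeros((21, 3)),
)
print(joints.shape, offset)
```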

Markerless tracking of hands and fingers is a promising enabler for human-computer interaction. However, adoption has been limited because of tracking inaccuracies, incomplete coverage of motions, low framerate, complex camera setups, and high computational requirements. In this paper, we present a fast method for accurately tracking rapid and complex articulations of the hand using a single depth camera. Our algorithm uses a novel detection-guided optimization strategy that increases the robustness and speed of pose estimation. In the detection step, a randomized...

10.1109/cvpr.2015.7298941 preprint EN 2015-06-01

We present a novel method for real-time pose and shape reconstruction of two strongly interacting hands. Our approach is the first two-hand tracking solution that combines an extensive list of favorable properties, namely it is marker-less, uses a single consumer-level depth camera, runs in real time, handles inter- and intra-hand collisions, and automatically adjusts to the user's hand shape. In order to achieve this, we embed a recent parametric hand pose and shape model and a dense correspondence predictor based on a deep neural network...

10.1145/3306346.3322958 article EN ACM Transactions on Graphics 2019-07-12

We present an approach for real-time, robust and accurate hand pose estimation from moving egocentric RGB-D cameras in cluttered real environments. Existing methods typically fail for hand-object interactions in cluttered scenes imaged from egocentric viewpoints, which are common in virtual or augmented reality applications. Our approach uses two subsequently applied Convolutional Neural Networks (CNNs) to localize the hand and regress 3D joint locations. Hand localization is achieved by using a CNN to estimate the 2D position of the hand center in the input, even in the presence...

10.1109/iccvw.2017.82 article EN 2017-10-01

Heterozygous loss of function mutations within the Filamin A gene in Xq28 are the most frequent cause of bilateral neuronal periventricular nodular heterotopia (PVNH). Most affected females are reported to initially present with difficult to treat seizures at a variable age of onset. Psychomotor development and cognition may be normal or mildly to moderately impaired. Distinct associated extracerebral findings have been observed and help to establish the diagnosis, including patent ductus arteriosus Botalli, progressive...

10.1186/s13023-015-0331-9 article EN cc-by Orphanet Journal of Rare Diseases 2015-10-15

Tracking and reconstructing the 3D pose and geometry of two hands in interaction is a challenging problem that has a high relevance for several human-computer interaction applications, including AR/VR, robotics, or sign language recognition. Existing works are either limited to simpler tracking settings (e.g., considering only a single hand or two spatially separated hands), or rely on less ubiquitous sensors, such as depth cameras. In contrast, in this work we present the first real-time method for motion capture of skeletal...

10.1145/3414685.3417852 article EN ACM Transactions on Graphics 2020-11-27

Single-hand thumb-to-finger microgestures have shown great promise for expressive, fast and direct interactions. However, pioneering gesture recognition systems each focused on a particular subset of gestures. We are still in lack of systems that can detect the set of possible gestures to a fuller extent. In this paper, we present a consolidated design space for microgestures. Based on this space, we present a system using depth sensing and convolutional neural networks. It is the first system that accurately detects touch points between fingers as well...

10.1145/3279778.3279799 article EN 2018-11-19

3D hand pose estimation from monocular videos is a long-standing and challenging problem, which is now seeing a strong upturn. In this work, we address it for the first time using a single event camera, i.e., an asynchronous vision sensor reacting on brightness changes. Our EventHands approach has characteristics previously not demonstrated with a single RGB or depth camera, such as high temporal resolution at low data throughputs and real-time performance at 1000 Hz. Due to the different data modality of event cameras compared to...

10.1109/iccv48922.2021.01216 article EN 2021 IEEE/CVF International Conference on Computer Vision (ICCV) 2021-10-01
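
Event cameras output sparse (x, y, t, polarity) tuples rather than frames; as a loose illustration only (not the exact event representation used by EventHands), the sketch below accumulates a time window of events into a two-channel image whose pixel values encode event recency.

```python
# Rough sketch: accumulate events (x, y, t, polarity) from a time window into a
# two-channel image where each pixel stores the recency of its latest event.
# Loosely inspired by windowed event surfaces; details here are illustrative.
import numpy as np

def events_to_frame(events, height, width, t_start, t_end):
    """events: array of (x, y, t, p) rows with p in {0, 1}; returns (2, H, W) image."""
    frame = np.zeros((2, height, width), dtype=np.float32)
    window = max(t_end - t_start, 1e-9)
    for x, y, t, p in events:
        if t_start <= t <= t_end:
            # later events overwrite earlier ones with a larger (more recent) value
            frame[int(p), int(y), int(x)] = (t - t_start) / window
    return frame

# Toy example: three events on a 4x4 sensor
ev = np.array([[1, 1, 0.002, 1], [2, 3, 0.005, 0], [1, 1, 0.009, 1]])
print(events_to_frame(ev, 4, 4, 0.0, 0.01)[1, 1, 1])  # ~0.9 (latest event wins)
```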

Tracking and reconstructing the 3D pose and geometry of two hands in interaction is a challenging problem that has a high relevance for several human-computer interaction applications, including AR/VR, robotics, or sign language recognition. Existing works are either limited to simpler tracking settings (e.g., considering only a single hand or two spatially separated hands), or rely on less ubiquitous sensors, such as depth cameras. In contrast, in this work we present the first real-time method for motion capture of skeletal pose and 3D surface...

10.48550/arxiv.2106.11725 preprint EN cc-by-nc-nd arXiv (Cornell University) 2021-01-01

The physical properties of an object, such as mass, significantly affect how we manipulate it with our hands. Surprisingly, this aspect has so far been neglected in prior work on 3D motion synthesis. To improve the naturalness of synthesized hand-object motions, this work proposes MACS, the first MAss Conditioned hand and object motion Synthesis approach. Our approach is based on cascaded diffusion models and generates interactions that plausibly adjust to the object's mass and interaction type. MACS also accepts a manually drawn...

10.1109/3dv62453.2024.00082 article EN 2024 International Conference on 3D Vision (3DV) 2024-03-18
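
As a hedged sketch of what conditioning motion synthesis on an object-mass scalar can look like (a toy MLP denoiser, not the cascaded diffusion models used by MACS), the mass and diffusion timestep are simply concatenated to the noisy motion input; all names and dimensions below are hypothetical.

```python
# Minimal sketch of conditioning a denoising network on an object-mass scalar.
# Illustrative only; not the MACS architecture.
import torch
import torch.nn as nn

class MassConditionedDenoiser(nn.Module):
    def __init__(self, motion_dim=63, hidden=256):
        super().__init__()
        # +2 inputs: diffusion timestep and object mass as conditioning scalars
        self.net = nn.Sequential(
            nn.Linear(motion_dim + 2, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, motion_dim),
        )

    def forward(self, noisy_motion, timestep, mass):
        cond = torch.stack([timestep, mass], dim=-1)               # (B, 2)
        return self.net(torch.cat([noisy_motion, cond], dim=-1))   # predicted noise

model = MassConditionedDenoiser()
x = torch.randn(4, 63)  # noisy motion vectors (hypothetical layout)
eps_hat = model(x, torch.rand(4), torch.tensor([0.2, 0.5, 1.0, 2.0]))
print(eps_hat.shape)  # torch.Size([4, 63])
```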

Abstract Aims Heart transplantation (HTx) represents the optimal care for advanced heart failure. Left ventricular assist devices (LVADs) are often needed as a bridge‐to‐transplant (BTT) therapy to support patients during the wait for a donor organ. Prolonged support increases the risk of LVAD complications that may affect the outcome after HTx. Methods and results A total of 342 patients undergoing HTx after BTT in a 10‐year period in two German high‐volume centres were retrospectively analysed. While 73 patients were transplanted without LVAD complications and with regular...

10.1002/ehf2.13188 article EN ESC Heart Failure 2021-01-21

A unique challenge in creating high-quality animatable and relightable 3D avatars of people is modeling human eyes. The challenge of synthesizing eyes is multifold as it requires 1) appropriate representations for the various components of the eye and the periocular region for coherent viewpoint synthesis, capable of representing diffuse, refractive and highly reflective surfaces, 2) disentangling skin and eye appearance from environmental illumination such that it may be rendered under novel lighting conditions, and 3) capturing eyeball motion...

10.1145/3528223.3530130 article EN ACM Transactions on Graphics 2022-07-01

Abstract Eye gaze and expressions are crucial non‐verbal signals in face‐to‐face communication. Visual effects and telepresence demand significant improvements in personalized tracking, animation, and synthesis of the eye region to achieve true immersion. Morphable face models, in combination with coordinate‐based neural volumetric representations, show promise in solving the difficult problem of reconstructing intricate geometry (eyelashes) and synthesizing photorealistic appearance variations (wrinkles and specularities)...

10.1111/cgf.15041 article EN cc-by Computer Graphics Forum 2024-04-24

This paper introduces the first differentiable simulator of event streams, i.e., streams of asynchronous brightness change signals recorded by event cameras. Our simulator enables non-rigid 3D tracking of deformable objects (such as human hands, isometric surfaces and general watertight meshes) from event streams by leveraging an analysis-by-synthesis principle. So far, event-based tracking and reconstruction of non-rigid objects in 3D, like hands and body, has been either tackled using explicit event trajectories or large-scale datasets. In contrast, our method does not...

10.1109/cvprw53098.2021.00143 article EN 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) 2021-06-01
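
A minimal sketch of the differentiable-events idea under stated assumptions: a soft sigmoid threshold on the log-brightness change between two rendered frames stands in for the hard event trigger, so gradients can flow back to the rendering in an analysis-by-synthesis loop. This is illustrative PyTorch, not the paper's simulator, and all names are hypothetical.

```python
# Sketch: differentiable per-pixel "event" response from two rendered frames.
# A soft (sigmoid) threshold replaces the hard event trigger so gradients flow.
import torch

def soft_events(frame_prev, frame_next, contrast_threshold=0.2, sharpness=50.0):
    """frames: (H, W) positive intensities; returns a signed soft event map in [-1, 1]."""
    delta = torch.log(frame_next + 1e-6) - torch.log(frame_prev + 1e-6)
    pos = torch.sigmoid(sharpness * (delta - contrast_threshold))
    neg = torch.sigmoid(sharpness * (-delta - contrast_threshold))
    return pos - neg

prev = torch.rand(64, 64) + 0.1
nxt = (prev * 1.5).requires_grad_(True)      # brighter frame -> positive events
loss = soft_events(prev, nxt).abs().mean()   # example analysis-by-synthesis loss
loss.backward()                              # gradients reach the rendered frame
print(nxt.grad.shape)
```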

We propose an automatic method for generating high-quality annotations for depth-based hand segmentation, and introduce a large-scale hand segmentation dataset. Existing datasets are typically limited to a single hand. By exploiting the visual cues given by an RGBD sensor and a pair of colored gloves, we automatically generate dense annotations for two-hand segmentation. This lowers the cost/complexity of creating high quality datasets, and makes it easy to expand the dataset in the future. We further show that existing datasets, even with data augmentation, are not...

10.1109/crv.2019.00028 article EN 2019-05-01
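
The colored-glove annotation idea can be illustrated with a simple HSV colour threshold per glove; the hue ranges below are hypothetical placeholders, and OpenCV (`cv2`) is assumed only for the colour conversion and thresholding.

```python
# Illustrative colour-threshold segmentation for two differently coloured gloves.
# Hue ranges are hypothetical placeholders; cv2 is assumed to be installed.
import cv2
import numpy as np

def glove_masks(bgr_image):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # e.g. a "blue" glove for the left hand and a "green" glove for the right hand
    left = cv2.inRange(hsv, (100, 80, 60), (130, 255, 255))
    right = cv2.inRange(hsv, (40, 80, 60), (80, 255, 255))
    return left > 0, right > 0

img = np.zeros((480, 640, 3), dtype=np.uint8)
left_mask, right_mask = glove_masks(img)
print(left_mask.shape, left_mask.dtype)  # (480, 640) bool
```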

We use our hands every day: to grasp a cup of coffee, write text on a keyboard, or signal that we are about to say something important. We interact with our environment, and our hands help us communicate with each other without thinking about it. Wouldn't it be great to be able to do the same in virtual reality? However, accurate hand motions are not trivial to capture. In this course, we present the current state of the art when it comes to hands. Starting with examples for controlling and depicting hands in virtual reality (VR), we dive into the latest methods and technologies to capture hand motions...

10.1145/3415263.3419155 article EN 2020-11-17