- Theoretical and Experimental Particle Physics Studies
- High-Energy Particle Collisions Research
- Quantum Chromodynamics and Particle Interactions
- Particle Detector Development and Performance
- Dark Matter and Cosmic Phenomena
- Computational Physics and Python Applications
- Neutrino Physics Research
- Cosmology and Gravitation Theories
- Distributed and Parallel Computing Systems
- Black Holes and Theoretical Physics
- Astrophysics and Cosmic Phenomena
- Parallel Computing and Optimization Techniques
- Advanced Data Storage Technologies
- Scientific Computing and Data Management
- Stochastic Processes and Financial Applications
- Nuclear Physics Research Studies
- Radiation Therapy and Dosimetry
- Noncommutative and Quantum Gravity Theories
- Nuclear Reactor Physics and Engineering
- Big Data Technologies and Applications
- Particle Accelerators and Free-Electron Lasers
- Atomic and Subatomic Physics Research
- Advanced Mathematical Theories
- Gamma-ray bursts and supernovae
- Quantum Mechanics and Applications
RWTH Aachen University
2019-2025
FH Aachen
2024-2025
Institute of High Energy Physics
2019-2024
A. Alikhanyan National Laboratory
2022-2024
University of Antwerp
2024
Ghent University Hospital
2022
Boeing (United States)
2003
Abstract Variable-dependent scale factors are commonly used in HEP to improve the shape agreement between data and simulation. The choice of the underlying model is of great importance, but often requires a lot of manual tuning, e.g. of bin sizes or fitted functions. This can be alleviated through the use of neural networks and their inherent powerful modeling capabilities. We present a novel, generalized method for producing scale factors using an adversarial neural network. It is investigated in the context of bottom-quark jet-tagging algorithms within CMS...
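The adversarial setup described in the abstract can be illustrated with a minimal sketch (not the paper's code): a weight network produces a positive, variable-dependent weight per simulated event, while a discriminator tries to separate data from the reweighted simulation; the weight network is trained to fool it. All architectures, names, and the toy inputs below are illustrative assumptions.

```python
import torch
import torch.nn as nn

n_features = 3  # assumed number of kinematic input variables

# weight network: produces a positive, variable-dependent weight per event
weight_net = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(),
                           nn.Linear(64, 1), nn.Softplus())
# discriminator: tries to separate data from reweighted simulation
disc = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, 1))

opt_w = torch.optim.Adam(weight_net.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss(reduction="none")

data = torch.randn(4096, n_features)        # toy stand-in for data events
sim = torch.randn(4096, n_features) + 0.3   # toy stand-in for simulation

for step in range(1000):
    # 1) discriminator step: data labeled 1, weighted simulation labeled 0
    w = weight_net(sim).squeeze(1).detach()
    d_loss = (bce(disc(data).squeeze(1), torch.ones(len(data))).mean()
              + (w * bce(disc(sim).squeeze(1), torch.zeros(len(sim)))).mean())
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) adversarial step: adjust the weights so simulation looks like data
    w = weight_net(sim).squeeze(1)
    a_loss = (w * bce(disc(sim).squeeze(1), torch.ones(len(sim)))).mean()
    opt_w.zero_grad(); a_loss.backward(); opt_w.step()

# weight_net now maps event variables to approximate scale factors
```

In practice the learned weights would additionally be normalized and validated in control regions; the sketch only shows the adversarial coupling itself.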
Abstract The development of an LHC physics analysis involves numerous investigations that require the repeated processing of terabytes of data. Thus, a rapid completion of each of these cycles is central to mastering the science project. We present a solution to efficiently handle and accelerate physics analyses on small-size institute clusters. Our solution uses three key concepts: vectorized processing of collision events, the “MapReduce” paradigm for scaling out on computing clusters, and utilizing SSD caching to reduce latencies in IO operations. This...
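A toy illustration of the three concepts (not the paper's framework): NumPy for vectorized per-chunk selections, a process pool for the map/reduce scale-out, and a naive SSD cache for preprocessed chunks. The file names, chunk format, and cache path are assumptions.

```python
import os
import numpy as np
from multiprocessing import Pool

CACHE_DIR = "/ssd/cache"  # assumed SSD mount point

def process_chunk(path):
    """Map step: load one chunk (through the SSD cache) and select events."""
    cached = os.path.join(CACHE_DIR, os.path.basename(path) + ".npy")
    if os.path.exists(cached):           # cache hit: fast local SSD read
        pt = np.load(cached)
    else:                                # cache miss: slow remote IO
        pt = np.load(path)["jet_pt"]     # assumed .npz chunk with a jet_pt array
        np.save(cached, pt)
    mask = pt > 30.0                     # vectorized cut over the whole chunk
    return int(mask.sum()), len(pt)

if __name__ == "__main__":
    chunks = [f"events_{i}.npz" for i in range(8)]  # hypothetical input files
    with Pool(processes=4) as pool:                 # scale out across workers
        parts = pool.map(process_chunk, chunks)
    passed, total = map(sum, zip(*parts))           # reduce step
    print("selection efficiency:", passed / total)
```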
Weapons System Open Architecture (WSOA) utilizes a distributed object computing foundation as a "bridge" to enable communication between disparate real-time systems in the redirection of strike assets. This open architecture connects legacy embedded mission computing with off-board C3I (command, control, communications, and intelligence) information sources and systems. WSOA will establish the potential gains in warfighting capability due to enabling technologies such as collaborative planning, information mining, and adaptive resource management. The paper focuses on...
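As a loose illustration of the distributed-object "bridge" idea (WSOA itself builds on CORBA-era real-time middleware, not Python), a stdlib RPC server can expose a legacy system's operations to remote clients; the class and its methods below are hypothetical.

```python
from xmlrpc.server import SimpleXMLRPCServer

class LegacyMissionSystem:
    """Hypothetical wrapper around an embedded mission computer."""
    def current_target(self):
        return {"id": "T-042", "priority": 1}
    def redirect(self, target_id):
        return f"strike asset redirected to {target_id}"

# the "bridge": expose legacy operations to off-board clients over the network
server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
server.register_instance(LegacyMissionSystem())
server.serve_forever()
```

A client would then call, e.g., `xmlrpc.client.ServerProxy("http://localhost:8000").redirect("T-042")`.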
VISPA (Visual Physics Analysis) is a web platform that enables users to work on any secure shell (SSH) reachable resource using just their web browser. It has been used successfully in research and education for HEP data analysis. The emerging JupyterLab is an ideal choice for a comprehensive, browser-based, and extensible work environment, and we seek to unify it with the efforts of the VISPA project. The primary objective is to provide the user with the freedom to access external resources at their disposal, while maintaining a smooth integration of preconfigured...
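The underlying access pattern can be sketched as follows: start JupyterLab on an SSH-reachable host and forward its port so the local browser can reach it. The host name and ports are examples; VISPA's contribution is automating and integrating such steps behind a web interface.

```python
import subprocess

host = "user@cluster.example.org"   # any SSH-reachable resource
remote_port = local_port = 8888     # example ports

# start JupyterLab on the remote host, bound to its loopback interface
subprocess.Popen(
    ["ssh", host, f"jupyter lab --no-browser --ip=127.0.0.1 --port={remote_port}"]
)

# forward the remote port so the local browser can open http://localhost:8888
subprocess.Popen(
    ["ssh", "-N", "-L", f"{local_port}:127.0.0.1:{remote_port}", host]
)
```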
Abstract Many particle physics analyses are adopting the concept of vectorised computing, often making them increasingly performant and resource-efficient. While a variety of computing steps can be vectorised directly, some calculations are challenging to implement. One of these is the analytical neutrino reconstruction, which involves a fitting procedure that naturally varies between events. We show an implementation using a graph model. It uses established deep learning software libraries and is natively portable to local and external hardware...
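As background for what an analytical neutrino reconstruction computes, here is a vectorized NumPy sketch of the classic W-mass-constraint solution for the neutrino longitudinal momentum; the paper itself expresses the event-dependent fit as a graph model on deep learning libraries. The input arrays below are hypothetical.

```python
import numpy as np

MW = 80.4  # W boson mass in GeV

def neutrino_pz(lep_px, lep_py, lep_pz, lep_e, met_px, met_py):
    """Solve m_W^2 = (p_lep + p_nu)^2 for the neutrino p_z, for all events at once."""
    pt2 = lep_px**2 + lep_py**2
    mu = 0.5 * MW**2 + lep_px * met_px + lep_py * met_py
    a = mu * lep_pz / pt2
    disc = a**2 - (lep_e**2 * (met_px**2 + met_py**2) - mu**2) / pt2
    sqrt_disc = np.sqrt(np.clip(disc, 0.0, None))  # complex case: keep real part
    sol1, sol2 = a + sqrt_disc, a - sqrt_disc
    # common convention: pick the solution with the smaller |p_z|
    return np.where(np.abs(sol1) < np.abs(sol2), sol1, sol2)

# toy usage on two hypothetical events (massless-lepton kinematics)
pz = neutrino_pz(
    lep_px=np.array([30.0, -12.0]), lep_py=np.array([5.0, 40.0]),
    lep_pz=np.array([10.0, -3.0]),  lep_e=np.array([32.0, 41.9]),
    met_px=np.array([-25.0, 18.0]), met_py=np.array([8.0, -30.0]),
)
print(pz)
```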
Abstract Fast turnaround times for LHC physics analyses are essential for scientific success. The ability to quickly perform optimizations and consolidation studies is critical. At the same time, computing demands and complexities are rising with the upcoming data-taking periods and new technologies, such as deep learning. We present a show-case of the HH→bbWW analysis at the CMS experiment, where we process 𝒪(1−10) TB of data on 100 threads within a few hours. This analysis is based on the columnar NanoAOD data format and makes use of the NumPy ecosystem and HEP...
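A minimal columnar-read sketch in the spirit of the described setup, using uproot and awkward from the NumPy/HEP ecosystem; the file name, branches, and cuts are examples, not the analysis' actual configuration.

```python
import uproot
import awkward as ak

events = uproot.open("nano.root")["Events"]           # hypothetical NanoAOD file
arrays = events.arrays(["Jet_pt", "Jet_eta"], library="ak")

# vectorized jet selection across all events at once (no Python event loop)
good = arrays[(arrays.Jet_pt > 30) & (abs(arrays.Jet_eta) < 2.4)]
print("mean good-jet multiplicity:", ak.mean(ak.num(good.Jet_pt)))
```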
Abstract Deep learning architectures in particle physics are often strongly dependent on the order of their input variables. We present a two-stage deep learning architecture consisting of a network for sorting input objects and a subsequent network for data analysis. The sorting network (agent) is trained through reinforcement learning using feedback from the analysis network (environment). The optimal order depends on the environment and is learned by the agent in an unsupervised approach. Thus, the system can choose a solution which is not known to the physicist in advance. We present the new approach and its application...
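A toy sketch of the two-stage idea under strong simplifications: the agent is a categorical policy over candidate orderings trained with REINFORCE, and the environment is a small classifier whose loss provides the reward. Architectures, data, and the reward definition are illustrative assumptions.

```python
import itertools
import torch
import torch.nn as nn

n_obj, n_feat = 3, 4
perms = list(itertools.permutations(range(n_obj)))      # candidate orderings

policy_logits = nn.Parameter(torch.zeros(len(perms)))   # agent: distribution over orderings
classifier = nn.Sequential(nn.Flatten(), nn.Linear(n_obj * n_feat, 32),
                           nn.ReLU(), nn.Linear(32, 1)) # environment
opt = torch.optim.Adam([policy_logits, *classifier.parameters()], lr=1e-2)
bce = nn.BCEWithLogitsLoss()

x = torch.randn(256, n_obj, n_feat)                     # toy events
y = (x[:, 0, 0] > 0).float().unsqueeze(1)               # toy labels
baseline = 0.0                                          # running reward baseline

for step in range(500):
    dist = torch.distributions.Categorical(logits=policy_logits)
    k = dist.sample()                                   # agent picks an ordering
    xp = x[:, list(perms[k.item()]), :]                 # apply it to every event
    loss = bce(classifier(xp), y)                       # environment feedback
    reward = -loss.item()
    # classifier gets an ordinary gradient step, the agent a REINFORCE step
    total = loss - (reward - baseline) * dist.log_prob(k)
    baseline = 0.95 * baseline + 0.05 * reward
    opt.zero_grad(); total.backward(); opt.step()
```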
The VISPA (VISual Physics Analysis) project provides a streamlined work environment for physics analyses and hands-on teaching experiences with a focus on deep learning. It has already been successfully used in HEP analyses and is now being further developed into an interactive learning platform. One specific example is to meet knowledge-sharing needs by combining paper, code, and data at a central place. Additionally, the possibility to run it directly from the web browser is a key feature of this development. Any SSH-reachable...