- Theoretical and Experimental Particle Physics Studies
- High-Energy Particle Collisions Research
- Quantum Chromodynamics and Particle Interactions
- Particle Detector Development and Performance
- Dark Matter and Cosmic Phenomena
- Computational Physics and Python Applications
- Cosmology and Gravitation Theories
- Distributed and Parallel Computing Systems
- Neutrino Physics Research
- Advanced Data Storage Technologies
- Scientific Computing and Data Management
- Parallel Computing and Optimization Techniques
- Medical Imaging Techniques and Applications
- Astrophysics and Cosmic Phenomena
- Black Holes and Theoretical Physics
- Cloud Computing and Resource Management
- Gamma-Ray Bursts and Supernovae
- Nuclear Reactor Physics and Engineering
- Atomic and Subatomic Physics Research
- International Science and Diplomacy
- Stochastic Processes and Financial Applications
- CCD and CMOS Imaging Sensors
- Particle Accelerators and Free-Electron Lasers
- Peer-to-Peer Network Technologies
- Noncommutative and Quantum Gravity Theories
Karlsruhe Institute of Technology
2016-2025
Institute of High Energy Physics
2012-2024
A. Alikhanyan National Laboratory
2022-2024
University of Antwerp
2024
National Centre of Scientific Research "Demokritos"
2023
Institute of Nuclear and Particle Physics
2023
Max Planck Institute for Nuclear Physics
2014-2019
Institute of Particle Physics
2019
European Organization for Nuclear Research
2012-2015
RWTH Aachen University
2006-2012
This chapter of the report of the "Flavour in the Era of the LHC" Workshop discusses theoretical, phenomenological and experimental issues related to flavour phenomena in the charged lepton sector and in CP-conserving and CP-violating processes. We review the current limits and the main theoretical models for the flavour structure of fundamental particles. We analyze the consequences of the available data, setting constraints on explicit models beyond the Standard Model, presenting benchmarks for the discovery potential of forthcoming measurements both at the LHC and at low energy, and exploring options...
The Worldwide LHC Computing Grid (WLCG) provides the robust computing infrastructure essential for the LHC experiments by integrating global resources into a cohesive entity. Simulations of different compute models present a feasible approach for evaluating future adaptations that are able to cope with increased demands. However, running these simulations incurs a trade-off between accuracy and scalability. For example, while the simulator DCSim can provide accurate results, it falls short on scaling with the size...
Increasing computing demands and concerns about energy efficiency in high-performance and high-throughput computing are driving forces in the search for more efficient ways to use available resources. Sharing the resources of an underutilised cluster with a cluster under high workload increases the utilisation of the underutilised cluster. The software COBalD/TARDIS can dynamically and transparently integrate and disintegrate such resources. However, sharing resources also requires accounting. AUDITOR (AccoUnting Data HandlIng Toolbox for Opportunistic Resources) is a modular...
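The accounting idea above can be illustrated with a minimal record type that weights wall time by the amount and a benchmark score of each consumed resource. The class and field names here are illustrative assumptions for a sketch, not AUDITOR's actual data model or API:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Component:
    # A consumable resource of a job, e.g. CPU cores or memory.
    name: str
    amount: int
    score: float = 1.0  # benchmark weight of the resource (assumed field)

@dataclass
class AccountingRecord:
    record_id: str
    site: str
    start: datetime
    stop: datetime
    components: list[Component] = field(default_factory=list)

    def scaled_usage(self, component_name: str) -> float:
        """Wall time in seconds multiplied by the benchmarked amount
        of the named component, summed over matching components."""
        wall = (self.stop - self.start).total_seconds()
        return sum(wall * c.amount * c.score
                   for c in self.components if c.name == component_name)

# Usage: a job occupying 8 cores for one hour with benchmark score 1.2.
rec = AccountingRecord(
    record_id="job-42", site="shared-cluster",
    start=datetime(2024, 1, 1, 12, 0),
    stop=datetime(2024, 1, 1, 13, 0),
    components=[Component("cores", 8, 1.2)],
)
print(rec.scaled_usage("cores"))  # 3600 s * 8 * 1.2 = 34560.0
```

A real accounting toolbox would additionally persist such records and expose plugins for collectors and consumers; this sketch only shows the core bookkeeping arithmetic.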
Scientific collaborations require a strong computing infrastructure to successfully process and analyze data. While large-scale collaborations have access to resources such as Analysis Facilities, small-scale collaborations often lack the means to establish and maintain such an infrastructure and instead operate with fragmented analysis environments, resulting in inefficiencies, hindering reproducibility and thus creating additional challenges for the collaboration that are not related to the experiment itself. We present a scalable, lightweight and maintainable Analysis Facility developed...
The data management elements in CMS are scalable, modular, and designed to work together. The main components are PhEDEx, the transfer and location system; the Data Bookkeeping Service (DBS), a metadata catalog; and the Data Aggregation Service (DAS), designed to aggregate views and provide them to users and services. Tens of thousands of samples have been cataloged and petabytes of data moved since the run began. The modular system has allowed optimal use of appropriate underlying technologies. In this contribution we will discuss both the Oracle and NoSQL databases used to implement these services as well...
Particle accelerators are an important tool to study the fundamental properties of elementary particles. Currently the highest-energy accelerator is the LHC at CERN in Geneva, Switzerland. Each of its four major detectors, such as the CMS detector, produces dozens of petabytes of data per year to be analyzed by a large international collaboration. The processing is carried out on the Worldwide LHC Computing Grid, which spans more than 170 compute centers around the world and is used by a number of particle physics experiments....
Predicting the performance of various infrastructure design options in complex federated infrastructures with computing sites distributed over a wide area network that support a plethora of users and workflows, such as the Worldwide LHC Computing Grid (WLCG), is not trivial. Due to the complexity and size of these infrastructures, it is not feasible to deploy experimental test-beds at large scales merely for the purpose of comparing and evaluating alternate designs. An alternative is to study the behaviours of these systems using simulation. This...
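The simulation approach described above can be illustrated with a toy list-scheduling model: jobs are assigned to the earliest-free core and the resulting makespan is the quantity one would compare across design options. This is a deliberately minimal sketch, not the DCSim simulator:

```python
import heapq

def simulate_makespan(job_durations, n_cores):
    """Toy discrete-event simulation: assign each job to the core that
    becomes free earliest and return the time when all jobs finish."""
    cores = [0.0] * n_cores  # next-free time per core
    heapq.heapify(cores)
    for duration in job_durations:
        free_at = heapq.heappop(cores)       # earliest-free core
        heapq.heappush(cores, free_at + duration)
    return max(cores)

# Usage: four jobs of decreasing length on two cores.
print(simulate_makespan([4, 3, 2, 1], 2))  # -> 5.0
```

A production-grade simulator would additionally model network transfers, storage, and scheduling policies; the trade-off mentioned in the abstract arises because each added detail increases accuracy but reduces the scale that can be simulated.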
Lepton-flavour violating tau decays are predicted in many extensions of the Standard Model at a rate observable at future collider experiments. In this article we focus on the decay tau → mu mu antimu, which is a promising channel to observe lepton-flavour violation at the Large Hadron Collider (LHC). We present analytic expressions for the differential decay width derived from a model-independent effective Lagrangian with general four-fermion operators, and estimate the experimental acceptance for detecting tau → mu mu antimu. Specific emphasis...
During the first run, CMS collected and processed more than 10 billion data events and simulated 15 billion events. Up to 100,000 processor cores were used simultaneously and 100 PB of storage was managed. Each month petabytes of data were moved and hundreds of users accessed data samples. In this document we discuss the operational experience from this first run. We present the workflows and data flows that were executed, the tools and services that were developed, and the operations and shift models used to sustain the system. Many of the techniques followed the original computing planning, but some were reactions to difficulties...
The ever-increasing amount of data handled by the CMS dataflow and workflow management tools poses new challenges for cross-validation among the different systems within the experiment at the LHC. To approach this problem we developed an integration test suite based on the LifeCycle agent, a tool originally conceived for stress-testing new releases of PhEDEx, the CMS data-placement tool. The agent provides a framework for customising the test workflow in arbitrary ways, and can scale to levels of activity well beyond those seen in normal running. This means we can run...
To satisfy the future computing demands of the Worldwide LHC Computing Grid (WLCG), opportunistic usage of third-party resources is a promising approach. While the means to make such resources compatible with WLCG requirements are largely satisfied by virtual machine and container technologies, strategies to acquire and disband many resources from many providers are still a focus of current research. Existing meta-schedulers that manage resources in the WLCG are hitting the limits of their design when tasked with heterogeneous resources from diverse resource providers. We provide as part...
The Data Bookkeeping Service 3 (DBS 3) provides a catalog of event metadata for Monte Carlo and recorded data of the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) at CERN, Geneva. It comprises all the necessary information for tracking datasets, their processing history and associations between runs, files and datasets, on a large scale of about 200,000 datasets and more than 40 million files, which adds up to around 700 GB of metadata. DBS is an essential part of the CMS Data Management and Workload Management (DMWM) systems [1], since all kinds...
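The dataset/file associations such a bookkeeping service tracks can be sketched as a tiny in-memory catalog. The class, method names, and the example dataset path below are illustrative assumptions in the spirit of DBS, not its real schema or API:

```python
from collections import defaultdict

class Catalog:
    """Toy event-metadata catalog: maps dataset names to their files
    and per-file event counts, mimicking the dataset/file association
    a bookkeeping service maintains (illustrative, not the DBS API)."""

    def __init__(self):
        self._files = defaultdict(list)  # dataset -> [(lfn, n_events), ...]

    def insert_file(self, dataset: str, lfn: str, n_events: int) -> None:
        self._files[dataset].append((lfn, n_events))

    def list_files(self, dataset: str) -> list:
        return [lfn for lfn, _ in self._files[dataset]]

    def total_events(self, dataset: str) -> int:
        return sum(n for _, n in self._files[dataset])

# Usage: register two files of a (hypothetical) dataset and query it.
cat = Catalog()
cat.insert_file("/DoubleMuon/Run2012A/AOD", "file1.root", 1200)
cat.insert_file("/DoubleMuon/Run2012A/AOD", "file2.root", 800)
print(cat.total_events("/DoubleMuon/Run2012A/AOD"))  # 2000
```

The real service additionally records processing history, run associations, and provenance, and persists everything in a database rather than in memory.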
Increased operational effectiveness and the dynamic integration of only temporarily available compute resources (opportunistic resources) become more important in the next decade, due to the scarcity of resources for future high energy physics experiments as well as the desired integration of cloud and high performance computing resources. This results in a heterogeneous environment, which gives rise to huge challenges for the operation teams of the experiments. At the Karlsruhe Institute of Technology (KIT) we design solutions to tackle these challenges. In order to ensure...
The Data Bookkeeping Service (DBS) provides an event data catalog for Monte Carlo and recorded data of the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) at CERN, Geneva. It contains all the necessary information used for tracking datasets, like their processing history and associations between runs, files and datasets, on a large scale of about 10^5 datasets and more than 10^7 files. The DBS is widely used within CMS, since all kinds of data processing, like Monte Carlo production, as well as physics analyses done by users, rely on data stored in DBS.
High throughput and short turnaround cycles are core requirements for the efficient processing of data-intensive end-user analyses in High Energy Physics (HEP). Together with the tremendously increasing amount of data to be processed, this leads to enormous challenges for HEP storage systems, networks and the distribution of computing resources for analyses. Bringing data close to the computing resource is a very promising approach to solve these limitations and improve the overall performance. However, achieving data locality by placing multiple conventional caches...
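The caching idea behind such data-locality approaches can be sketched with a byte-limited LRU file cache: repeated accesses to the same input files hit the cache instead of the wide-area network. This is a generic illustration of the technique, not the coordinated-cache software the abstract refers to:

```python
from collections import OrderedDict

class LRUCache:
    """Toy byte-limited LRU file cache: illustrates why running an
    analysis where its input files are already cached avoids repeated
    wide-area transfers (generic sketch, not the actual HEP software)."""

    def __init__(self, capacity_bytes: int):
        self.capacity = capacity_bytes
        self._store = OrderedDict()  # filename -> size in bytes

    def access(self, name: str, size: int) -> bool:
        """Return True on a cache hit; on a miss, cache the file and
        evict least-recently-used entries until within capacity."""
        if name in self._store:
            self._store.move_to_end(name)  # mark as most recently used
            return True
        self._store[name] = size
        used = sum(self._store.values())
        while used > self.capacity and len(self._store) > 1:
            _, evicted_size = self._store.popitem(last=False)
            used -= evicted_size
        return False

# Usage: three distinct unit-size files competing for two cache slots.
cache = LRUCache(capacity_bytes=2)
hits = [cache.access(f, 1) for f in ["a", "b", "a", "c", "b"]]
print(hits)  # [False, False, True, False, False]
```

The hit pattern shows the locality effect: the second access to "a" is served from cache, while inserting "c" evicts the least-recently-used file "b", turning the final access into a miss.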
The Computing Model of the CMS experiment [1] does not address the transfer of user-created data between different Grid sites. Due to the limited resources of a single site, the distribution of individual datasets to several sites is crucial to ensure their accessibility. In contrast to official datasets, there are no special requirements for user datasets (e.g. concerning data quality). The StoreResults service provides a mechanism to elevate user datasets into the central bookkeeping, ensuring the same quality as an official dataset. This is a prerequisite for further distribution within the dataset infrastructure.
Modern data processing increasingly relies on data locality for performance and scalability, whereas the common HEP approaches aim at uniform resource pools with minimal locality, recently even across site boundaries. To combine the advantages of both, the High-Performance Data Analysis (HPDA) Tier 3 concept opportunistically establishes data locality via coordinated caches.
This contribution reports on solutions, experiences and recent developments with the dynamic, on-demand provisioning of remote computing resources for analysis and simulation workflows. Local resources of a physics institute are extended by private and commercial cloud sites, ranging from the inclusion of desktop clusters to HPC centers.
Demand for computing resources in high energy physics (HEP) shows a highly dynamic behavior, while the capacity provided by the Worldwide LHC Computing Grid (WLCG) remains static. It has become evident that opportunistic resources such as High Performance Computing (HPC) centers and commercial clouds are well suited to cover peak loads. However, the utilization of these resources gives rise to new levels of complexity: e.g., the resources need to be managed dynamically and HEP applications require a very specific software environment usually not provided at these resources....
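The dynamic-management problem above can be reduced to a toy controller that decides how many opportunistic slots to acquire or release so that supply tracks demand. The function and its target-utilisation parameter are illustrative assumptions, not the actual COBalD/TARDIS algorithm:

```python
def scale_decision(demand: int, supply: int, target_utilisation: float = 0.9) -> int:
    """Toy scaling controller for opportunistic resources: size the pool
    so that current demand corresponds to the target utilisation.
    Returns the number of slots to acquire (positive) or release
    (negative). Illustrative sketch only."""
    desired = int(round(demand / target_utilisation))
    return desired - supply

# Usage: peak load calls for growth, slack calls for release.
print(scale_decision(demand=90, supply=80))  # 20: acquire slots
print(scale_decision(demand=45, supply=80))  # -30: release slots
```

Real pool managers additionally smooth such decisions over time and respect per-provider limits, so that short demand spikes do not cause constant acquisition and release of resources.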
PUNCH4NFDI, funded by the German Research Foundation initially for five years, is a diverse consortium of particle, astro-, astroparticle, hadron and nuclear physics communities embedded in the German National Research Data Infrastructure (NFDI) initiative. In order to provide seamless and federated access to the huge variety of compute and storage systems provided by the participating communities, covering their very diverse needs, the Compute4PUNCH and Storage4PUNCH concepts have been developed. Both comprise state-of-the-art technologies such as token-based AAI...
With the second run period of the LHC, high energy physics collaborations will have to face increasing computing infrastructural needs. Opportunistic resources are expected to absorb many computationally expensive tasks, such as Monte Carlo event simulation. This leaves the dedicated HEP infrastructure with an increased load of analysis tasks that in turn need to process an increased volume of data. In addition to storage capacities, a key factor for future infrastructure is therefore the input bandwidth available per core. Modern data processing relies on...