Christoph Wissing

ORCID: 0000-0002-5090-8004
Research Areas
  • Particle physics theoretical and experimental studies
  • High-Energy Particle Collisions Research
  • Quantum Chromodynamics and Particle Interactions
  • Particle Detector Development and Performance
  • Dark Matter and Cosmic Phenomena
  • Computational Physics and Python Applications
  • Cosmology and Gravitation Theories
  • Neutrino Physics Research
  • Distributed and Parallel Computing Systems
  • Medical Imaging Techniques and Applications
  • Advanced Data Storage Technologies
  • Black Holes and Theoretical Physics
  • Gamma-ray bursts and supernovae
  • Scientific Computing and Data Management
  • Astrophysics and Cosmic Phenomena
  • Particle Accelerators and Free-Electron Lasers
  • Parallel Computing and Optimization Techniques
  • Atomic and Subatomic Physics Research
  • Laser-Plasma Interactions and Diagnostics
  • Nuclear reactor physics and engineering
  • Stochastic processes and financial applications
  • Statistical Mechanics and Entropy
  • Aerodynamics and Acoustics in Jet Flows
  • Nuclear physics research studies
  • Big Data Technologies and Applications

Deutsches Elektronen-Synchrotron DESY
2016-2025

Institute of High Energy Physics
2012-2024

University of Antwerp
2024

A. Alikhanyan National Laboratory
2022-2024

European Organization for Nuclear Research
2012-2014

Université Claude Bernard Lyon 1
2011

TU Dortmund University
2000

CMS will require access to more than 125k processor cores at the beginning of Run 2 in 2015 to carry out its ambitious physics program with higher event rates and complexity. During Run 1 these resources were predominantly provided by a mix of grid sites and local batch resources. During the long shutdown, cloud infrastructures, diverse opportunistic resources and HPC supercomputing centres were made available to CMS, which further complicated the operation of the submission infrastructure. In this presentation we discuss the effort to adopt and deploy...

10.1088/1742-6596/664/6/062031 article EN Journal of Physics Conference Series 2015-12-23

During the first run, CMS collected and processed more than 10B data events and simulated 15B events. Up to 100k processor cores were used simultaneously and 100PB of storage was managed. Each month petabytes of data were moved and hundreds of users accessed data samples. In this document we discuss the operational experience from this first run. We present the workflows and data flows that were executed, the tools and services that were developed, and the operations and shift models used to sustain the system. Many techniques followed the original computing planning, but some were reactions to difficulties...

10.1088/1742-6596/513/3/032040 article EN Journal of Physics Conference Series 2014-06-11
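A quick back-of-the-envelope check on the Run 1 scales quoted above (a sketch using only the figures in the abstract; real per-event sizes vary considerably by data tier and are not stated in the text):

```python
# Illustrative averages derived from the Run 1 figures quoted above.
# These are rough estimates, not official CMS numbers.

PB = 10**15  # bytes per petabyte (decimal convention)

data_events = 10e9        # >10 billion collected data events
sim_events = 15e9         # ~15 billion simulated events
managed_storage_pb = 100  # ~100 PB of managed storage

total_events = data_events + sim_events
avg_bytes_per_event = managed_storage_pb * PB / total_events

print(f"average footprint per event: {avg_bytes_per_event / 1e6:.1f} MB")
```

With the quoted numbers this comes out to roughly 4 MB per event averaged over all copies and derived formats, which is why monthly data movement at the petabyte scale follows naturally.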

PUNCH4NFDI, funded by the German Research Foundation initially for five years, is a diverse consortium of particle, astro-, astroparticle, hadron and nuclear physics, embedded in the National Research Data Infrastructure initiative. In order to provide seamless and federated access to the huge variety of compute and storage systems provided by the participating communities, covering their very diverse needs, the Compute4PUNCH and Storage4PUNCH concepts have been developed. Both comprise state-of-the-art technologies such as token-based AAI...

10.1051/epjconf/202429507020 article EN cc-by EPJ Web of Conferences 2024-01-01
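A minimal sketch of the token-based AAI idea mentioned above: instead of an X.509 proxy, each request to a federated service carries a bearer token issued by the community identity provider. The endpoint URL and token string below are hypothetical placeholders, not actual PUNCH4NFDI services:

```python
# Sketch of token-based access to a federated storage endpoint:
# the credential travels as an HTTP Authorization header.
import urllib.request

def authorized_request(url: str, token: str) -> urllib.request.Request:
    """Build an HTTPS request carrying the token as a Bearer credential."""
    req = urllib.request.Request(url)
    req.add_header("Authorization", f"Bearer {token}")
    return req

# Hypothetical endpoint and token, for illustration only.
req = authorized_request("https://storage.example.org/punch/file.root",
                         "eyJ...demo-token")
print(req.get_header("Authorization"))  # Bearer eyJ...demo-token
```

The appeal of this scheme for a heterogeneous consortium is that tokens are short-lived, capability-scoped, and understood by standard HTTP stacks, so the same credential flow works across otherwise very different storage and compute systems.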

Abstract The prompt reconstruction of the data recorded by the Large Hadron Collider (LHC) detectors has always been addressed by dedicated resources at the CERN Tier-0. Such workloads come in spikes due to the nature of the operation of the accelerator, and for special high-load occasions the experiments have commissioned methods to distribute (spill over) a fraction of the load to sites outside CERN. The present work demonstrates a new way of supporting the Tier-0 environment by provisioning resources elastically for such spilled-over workflows onto Piz Daint...

10.1007/s41781-020-00052-w article EN cc-by Computing and Software for Big Science 2021-02-08

This perspective article details the program of sustainable computing workshops launched within the High Energy Physics department at Deutsches Elektronen-Synchrotron (DESY) in 2023. The workshop series targets scientific users of the National Analysis Facility, hosted at DESY, in order to promote sustainable software development and usage across a wide range of applications. Details of the structure of the workshops, as well as their reception amongst participants, will be presented. In addition, plans for expansion are outlined.

10.3389/fcomp.2024.1502784 article EN cc-by Frontiers in Computer Science 2024-12-12

The former CMS Run 2 High Level Trigger (HLT) farm is one of the largest contributors to CMS compute resources, providing about 25k job slots for offline computing. This CPU was initially employed as an opportunistic resource, exploited during inter-fill periods, during LHC Run 2. Since then, it has become a nearly transparent extension of the CMS capacity at CERN, being located on-site at interaction point 5 (P5), where the detector is installed. The resource has been configured to support the execution of critical tasks, such as prompt data...

10.1051/epjconf/202429503036 article EN cc-by EPJ Web of Conferences 2024-01-01

The CMS Experiment is taking high energy collision data at CERN. The computing infrastructure used to analyse the data is distributed around the world in a tiered structure. In order to use the 7 Tier-1 sites, about 50 Tier-2 sites and a still growing number of about 30 Tier-3 sites, the CMS software has to be available at those sites. Except for very few sites, deployment and removal are managed centrally. Since the deployment team has no local accounts at the remote sites, all installation jobs have to be sent via Grid jobs. Via a VOMS role, the installation job gains priority in the batch system and write privileges to the software area....

10.1088/1742-6596/331/7/072041 article EN Journal of Physics Conference Series 2011-12-23
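The authorization logic described above can be sketched as a toy model: only jobs whose proxy carries the privileged deployment role are allowed to write to the experiment software area, while ordinary analysis jobs are not. The FQAN strings below are illustrative, not a definitive reproduction of the CMS configuration:

```python
# Toy model of VOMS-role-based authorization for software deployment:
# write access to the software area is granted only to jobs presenting
# the privileged deployment role. Role names are illustrative.

DEPLOY_ROLE = "/cms/Role=lcgadmin"  # assumed software-deployment FQAN

def software_area_writable(voms_attributes: list[str]) -> bool:
    """Grant write access iff the proxy carries the deployment role."""
    return DEPLOY_ROLE in voms_attributes

print(software_area_writable(["/cms/Role=lcgadmin"]))  # True: deployment job
print(software_area_writable(["/cms/Role=NULL"]))      # False: ordinary job
```

The design point is that site admins never have to manage per-person accounts for the deployment team: the batch system and the storage ACLs trust the role attribute carried by the Grid job itself.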

The CMS experiment at the LHC relies on 7 Tier-1 centres of the WLCG to perform the majority of its bulk processing activity and to archive its data. During the first run of the LHC, these two functions were tightly coupled, as each Tier-1 was constrained to process only the data archived on its hierarchical storage. This lack of flexibility in the assignment of workflows occasionally resulted in uneven resource utilisation and an increased latency in the delivery of results to the physics community.

10.1088/1742-6596/664/4/042056 article EN Journal of Physics Conference Series 2015-12-23

The PUNCH4NFDI consortium brings together scientists from the German particle physics, hadron and nuclear physics, astronomy, and astro-particle physics communities to improve the management and (re-)use of scientific data in these interrelated communities. The PUNCH sciences have a long tradition of building large instruments that are planned, constructed and operated by international collaborations. While these collaborations typically employ advanced tools for data distribution, smaller-scale experiments often suffer from very limited...

10.52825/cordi.v1i.261 article EN cc-by Proceedings of the Conference on Research Data Infrastructure 2023-09-07

In order to cope with the challenges expected during LHC Run 2, CMS has put in place a number of enhancements of the main software packages and tools used for centrally managed processing. In this presentation we will highlight these improvements, which allow CMS to deal with the increased trigger output rate, the pileup evolution and changes in computing technology. The overall system aims at high flexibility, improved operational efficiency and largely automated procedures. The tight coupling of workflow classes to types of sites has been drastically relaxed....

10.1088/1742-6596/898/5/052012 article EN Journal of Physics Conference Series 2017-10-01
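The relaxation of the workflow-to-site coupling mentioned above can be illustrated with a small matchmaking sketch: rather than hard-wiring each workflow class to one tier, any site advertising the required capabilities becomes eligible. Site names and attributes below are made up for illustration:

```python
# Sketch of capability-based matchmaking instead of rigid tier mapping.
# Sites and attributes are hypothetical examples.

SITES = {
    "T1_DE_KIT":  {"tier": 1, "tape": True,  "cores": 8000},
    "T2_DE_DESY": {"tier": 2, "tape": False, "cores": 5000},
}

def eligible_sites(needs_tape: bool, min_cores: int) -> list[str]:
    """Match a workflow on capabilities rather than on the tier label."""
    return sorted(name for name, site in SITES.items()
                  if site["cores"] >= min_cores
                  and (site["tape"] or not needs_tape))

# A reconstruction workflow without tape needs can now also run at a Tier-2.
print(eligible_sites(needs_tape=False, min_cores=4000))
```

This is the essence of the relaxation: workflows that previously could run only at Tier-1s (or only at Tier-0) are dispatched wherever the advertised resources fit, which evens out utilisation across the distributed system.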

DESY is one of the world-wide leading centres for research with particle accelerators and synchrotron light. At the hadron-electron collider HERA, three experiments are currently taking data and will be operated until 2007. Since the end of August 2004, DESY has operated a Production Grid on the basis of a recent LCG-2 release. Its infrastructure is used by all activities, including national and international projects. The experiments are adapting their Monte Carlo production schemes to the Grid.

10.5170/cern-2005-002.1014 article EN 2004-01-01