M. Kretz

ORCID: 0000-0002-0867-243X
Research Areas
  • Particle Physics: Theoretical and Experimental Studies
  • High-Energy Particle Collisions Research
  • Particle Detector Development and Performance
  • Quantum Chromodynamics and Particle Interactions
  • Dark Matter and Cosmic Phenomena
  • Cosmology and Gravitation Theories
  • Parallel Computing and Optimization Techniques
  • Advanced Data Storage Technologies
  • Distributed and Parallel Computing Systems
  • Neutrino Physics Research
  • Black Holes and Theoretical Physics
  • Mathematics and Applications
  • Muon and Positron Interactions and Applications
  • Computational Physics and Python Applications
  • Scientific Computing and Data Management
  • Distributed Systems and Fault Tolerance
  • Interconnection Networks and Systems
  • Graph Theory and CDMA Systems
  • Particle Accelerators and Beam Dynamics
  • Nuclear Reactor Physics and Engineering

University of Liverpool
2014-2019

GSI Helmholtz Centre for Heavy Ion Research
2010-2019

Heidelberg University
1974-2017

A. Alikhanyan National Laboratory
2013-2016

The University of Adelaide
2014-2016

Goethe University Frankfurt
2011-2016

Frankfurt Institute for Advanced Studies
2011-2016

Universitatea Națională de Știință și Tehnologie Politehnica București
2016

University of Glasgow
2015

Ludwig-Maximilians-Universität München
2015

It is an established trend that CPU development takes advantage of Moore's Law to improve in parallelism much more than in scalar execution speed. This results in higher hardware thread counts (MIMD) and improved vector units (SIMD). Of these two developments, MIMD has received the focus of library research in recent years. To make use of the latest improvements, SIMD must receive a stronger focus of API research as well, because its computational power can no longer be neglected and auto-vectorizing compilers often cannot generate the necessary...

10.1002/spe.1149 article EN Software Practice and Experience 2011-12-08

10.1007/s00450-011-0161-5 article EN Computer Science - Research and Development 2011-04-11

The on-line event reconstruction in ALICE is performed by the High Level Trigger, which should process up to 2000 events per second for proton-proton collisions and 300 central events per second for heavy-ion collisions, corresponding to an input data stream of 30 GB/s. In order to fulfill the time requirements, a fast tracker has been developed. The algorithm combines the Cellular Automaton method, used for pattern recognition, with the Kalman Filter method for fitting the found trajectories and for the final track selection. It was adapted to run on Graphics Processing...

10.1109/tns.2011.2157702 article EN IEEE Transactions on Nuclear Science 2011-07-06

The online event reconstruction for the ALICE experiment at CERN requires processing capabilities sufficient to process central Pb-Pb collisions at a rate of more than 200 Hz, corresponding to an input data rate of about 25 GB/s. The reconstruction of particle trajectories in the Time Projection Chamber (TPC) is the most compute-intensive step. The TPC tracker implementation combines the principles of the cellular automaton and the Kalman filter. It has been accelerated by the usage of graphics cards (GPUs). A pipelined processing allows performing the tracking on the GPU, the data transfer,...

10.1088/1742-6596/396/1/012044 article EN Journal of Physics Conference Series 2012-12-13

We present a highly scalable demonstration of a portable asynchronous many-task programming model and runtime system applied to a grid-based adaptive-mesh-refinement hydrodynamic simulation of a double white dwarf merger with 14 levels of refinement that spans 17 orders of magnitude in astrophysical densities. The code uses the C++ parallel programming model that is embodied in the HPX library and is being incorporated into the ISO C++ standard. This model represents a significant shift from the existing bulk-synchronous models under consideration for exascale systems. Through...

10.1177/1094342018819744 article EN The International Journal of High Performance Computing Applications 2019-02-14

High Performance Linpack (HPL) can maximize requirements throughout a computer system. An efficient multi-GPU double-precision general matrix multiply (DGEMM), together with adjustments to HPL, is required to utilize a heterogeneous system to its full extent. The authors present the resulting energy-efficiency measurements and suggest a cluster design that employs multiple GPUs.

10.1109/mm.2011.66 article EN IEEE Micro 2011-07-26

ALICE (A Large Ion Collider Experiment) is one of the four major experiments at the Large Hadron Collider (LHC) at CERN, which is today the most powerful particle accelerator worldwide. The High Level Trigger (HLT), an online compute farm of about 200 nodes, reconstructs events measured by the detector in real-time. The HLT uses a custom data-transport framework to distribute the data and the workload among the nodes. ALICE employs several calibration-sensitive subdetectors, e.g. the TPC (Time Projection Chamber). For a precise reconstruction, the HLT has...

10.1088/1742-6596/664/8/082047 article EN Journal of Physics Conference Series 2015-12-23

The on-line event reconstruction in ALICE is performed by the High Level Trigger, which should process up to 2000 events per second for proton-proton collisions and 200 central events per second for heavy-ion collisions, corresponding to an input data stream of 30 GB/s. In order to fulfil the time requirements, a fast tracker has been developed that can optionally use GPU hardware accelerators. The algorithm combines the Cellular Automaton method, used for pattern recognition, with the Kalman Filter method to fit the found trajectories and for the final track selection....

10.1109/rtc.2010.5750344 article EN 2010-05-01

Computers are essential in research and industry, but they are also significant contributors to the worldwide power consumption. The LOEWE-CSC supercomputer addresses this problem by setting new standards in environmental compatibility as well as in energy and cooling efficiency for high-performance general-purpose computing. Designing a pervasively efficient compute center requires improvements in multiple fields. The hosting low-loss compute center operates at a cooling overhead below 8% of the computer power. General purpose...

10.1109/pdp.2013.55 article EN 2013-02-01

The ALFA framework is a joint development between the ALICE Online-Offline and FairRoot teams. It has a distributed architecture, i.e. it is a collection of highly maintainable, testable, loosely coupled, independently deployable processes. ALFA allows the developer to focus on building single-function modules with well-defined interfaces and operations. The communication between the independent processes is handled by the FairMQ transport layer. FairMQ offers multiple implementations of its abstract data transport interface, and it integrates some popular...

10.1051/epjconf/202024505021 article EN cc-by EPJ Web of Conferences 2020-01-01

The ALICE High Level Trigger comprises a large computing cluster, dedicated interfaces and software applications. It allows on-line event reconstruction of the full data stream of the experiment at rates of up to 25 GByte/s. The commissioning campaign has passed an important phase since the startup of the Large Hadron Collider in November 2009. The system has been transferred into continuous operation with a focus on first simple trigger applications. This paper reports for the first time the achieved reconstruction performance for the central barrel region.

10.1109/tns.2011.2160093 article EN IEEE Transactions on Nuclear Science 2011-07-26