M. J. Kortelainen

ORCID: 0000-0003-2675-1606
Research Areas
  • Particle physics theoretical and experimental studies
  • High-Energy Particle Collisions Research
  • Quantum Chromodynamics and Particle Interactions
  • Particle Detector Development and Performance
  • Dark Matter and Cosmic Phenomena
  • Cosmology and Gravitation Theories
  • Computational Physics and Python Applications
  • Distributed and Parallel Computing Systems
  • Parallel Computing and Optimization Techniques
  • Medical Imaging Techniques and Applications
  • Advanced Data Storage Technologies
  • Neutrino Physics Research
  • Astrophysics and Cosmic Phenomena
  • Radiation Detection and Scintillator Technologies
  • Superconducting Materials and Applications
  • CCD and CMOS Imaging Sensors
  • Radiation Therapy and Dosimetry
  • Nuclear reactor physics and engineering
  • Scientific Computing and Data Management
  • International Science and Diplomacy
  • Black Holes and Theoretical Physics
  • Particle Accelerators and Free-Electron Lasers
  • Gas Dynamics and Kinetic Theory
  • Gamma-ray bursts and supernovae
  • Radioactive contamination and transfer

Fermi National Accelerator Laboratory
2017-2024

Institute of High Energy Physics
2014-2023

A. Alikhanyan National Laboratory
2023

Purdue University West Lafayette
2017-2021

European Organization for Nuclear Research
2016-2019

University of Helsinki
2011-2015

Helsinki Institute of Physics
2009-2015

The High-Luminosity upgrade of the Large Hadron Collider (LHC) will see the accelerator reach an instantaneous luminosity of 7 × 10³⁴ cm⁻² s⁻¹ with an average pileup of 200 proton-proton collisions. These conditions pose an unprecedented challenge to the online and offline reconstruction software developed by the experiments. The computational complexity will far exceed the expected increase in processing power for conventional CPUs, demanding an alternative approach. Industry and High-Performance Computing (HPC) centers are...

10.3389/fdata.2020.601728 article EN cc-by Frontiers in Big Data 2020-12-21

One of the most computationally challenging problems expected for the High-Luminosity Large Hadron Collider (HL-LHC) is determining the trajectory of charged particles during event reconstruction. Algorithms used at the LHC today rely on Kalman filtering, which builds physical trajectories incrementally while incorporating material effects and error estimation. Recognizing the need for faster computational throughput, we have adapted Kalman-filter-based methods to highly parallel, many-core SIMD architectures that...

10.1088/1748-0221/15/09/p09030 article EN Journal of Instrumentation 2020-09-22
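
The entry above centers on Kalman-filter-based track building. As a minimal illustration of the measurement-update step that such algorithms repeat for every candidate hit, here is a hedged sketch using Eigen; the two-parameter toy state and matrix names are illustrative, not the actual mkFit or CMSSW data structures.

```cpp
// Toy Kalman measurement update: x' = x + K (m - H x), P' = (I - K H) P.
// Illustrative only; real track fits use 5-6 track parameters plus material effects.
#include <Eigen/Dense>
#include <iostream>

int main() {
  using Mat = Eigen::Matrix2d;
  using Vec = Eigen::Vector2d;

  Vec x(0.0, 1.0);                  // predicted state (e.g. position, slope)
  Mat P = Mat::Identity() * 0.5;    // predicted covariance
  Mat H = Mat::Identity();          // measurement model (toy: measure both parameters)
  Mat R = Mat::Identity() * 0.01;   // measurement noise covariance
  Vec m(0.1, 0.9);                  // measured hit

  Mat S = H * P * H.transpose() + R;        // innovation covariance
  Mat K = P * H.transpose() * S.inverse();  // Kalman gain
  x = x + K * (m - H * x);                  // updated state
  P = (Mat::Identity() - K * H) * P;        // updated covariance

  std::cout << "updated state: " << x.transpose() << "\n";
}
```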

10.1016/j.nima.2009.01.189 article EN Nuclear Instruments and Methods in Physics Research Section A Accelerators Spectrometers Detectors and Associated Equipment 2009-02-11

The advent of computing resources with co-processors, for example Graphics Processing Units (GPUs) or Field-Programmable Gate Arrays (FPGAs), and use cases like the CMS High-Level Trigger (HLT) or data processing at leadership-class supercomputers impose challenges on current data processing frameworks. These include developing a model for algorithms to offload their computations onto co-processors as well as keeping the traditional CPU busy doing other work. The CMS framework, CMSSW, implements multithreading using the Intel Threading Building Blocks (TBB)...

10.1051/epjconf/202024505009 article EN cc-by EPJ Web of Conferences 2020-01-01
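
The framework challenge described above, offloading work to a co-processor while keeping the CPU busy, can be illustrated with a generic sketch in standard C++; this is not the actual CMSSW/TBB interface, just the shape of a launch-then-consume pattern under assumed names (offloadToAccelerator, doOtherCpuWork).

```cpp
// Generic "launch asynchronous work, do other work, then consume the result" pattern.
#include <future>
#include <iostream>
#include <numeric>
#include <vector>

// Stand-in for a kernel running on a GPU/FPGA; here it just runs on another thread.
std::future<double> offloadToAccelerator(std::vector<double> data) {
  return std::async(std::launch::async, [d = std::move(data)]() {
    return std::accumulate(d.begin(), d.end(), 0.0);
  });
}

void doOtherCpuWork() {
  // In a real framework the scheduler would run other modules or events here.
  std::cout << "CPU processing other work while the offload is in flight\n";
}

int main() {
  std::vector<double> input(1000, 1.0);
  auto pending = offloadToAccelerator(input);  // launch and return immediately
  doOtherCpuWork();                            // CPU stays busy
  double result = pending.get();               // consume the result when needed
  std::cout << "offloaded sum = " << result << "\n";
}
```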

For CMS, heterogeneous computing is a powerful tool to face the computational challenges posed by the upgrades of the LHC, and will be used in production at the High Level Trigger during Run 3. In principle, to offload work onto non-CPU resources while retaining their performance, different implementations of the same code are required. This would introduce code duplication, which is not sustainable in terms of maintainability and testability of the software. Performance portability libraries allow writing code once and running it...

10.1088/1742-6596/2438/1/012058 article EN Journal of Physics Conference Series 2023-02-01
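
Performance portability libraries of the kind discussed above let the same kernel source target CPUs and GPUs. A minimal Kokkos sketch (a saxpy-like loop; array sizes and kernel names are arbitrary) looks like this:

```cpp
// Single-source kernels: the execution and memory spaces are chosen at build time.
#include <Kokkos_Core.hpp>
#include <cstdio>

int main(int argc, char* argv[]) {
  Kokkos::initialize(argc, argv);
  {
    const int n = 1 << 20;
    Kokkos::View<float*> x("x", n), y("y", n);
    const float a = 2.0f;

    // Fill inputs and compute y = a*x + y on whatever backend Kokkos was built for.
    Kokkos::parallel_for("init", n, KOKKOS_LAMBDA(const int i) {
      x(i) = 1.0f;
      y(i) = 2.0f;
    });
    Kokkos::parallel_for("saxpy", n, KOKKOS_LAMBDA(const int i) {
      y(i) = a * x(i) + y(i);
    });
    Kokkos::fence();
    std::printf("done\n");
  }
  Kokkos::finalize();
}
```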

Today's world of scientific software for High Energy Physics (HEP) is powered by x86 code, while the future will be much more reliant on accelerators like GPUs and FPGAs. The portable parallelization strategies (PPS) project of the HEP Center for Computational Excellence (HEP/CCE) is investigating solutions for portability techniques that allow coding an algorithm once, with the ability to execute it on a variety of hardware products from many vendors, especially including accelerators. We think that without these solutions, the success...

10.48550/arxiv.2203.09945 preprint EN cc-by arXiv (Cornell University) 2022-01-01
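
Another portability route covered by such studies is SYCL. As a hedged, minimal sketch using SYCL 2020 unified shared memory (the doubling kernel and array size are arbitrary):

```cpp
// Minimal SYCL 2020 example with unified shared memory (USM).
#include <sycl/sycl.hpp>
#include <iostream>

int main() {
  sycl::queue q;  // default-selected device (CPU or GPU, depending on the runtime)
  const size_t n = 1024;

  float* data = sycl::malloc_shared<float>(n, q);
  for (size_t i = 0; i < n; ++i) data[i] = 1.0f;

  // Launch the kernel on the device and wait for completion.
  q.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
    data[i] *= 2.0f;
  }).wait();

  std::cout << "data[0] = " << data[0] << "\n";
  sycl::free(data, q);
}
```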

The formation of clusters in the data analysis of position-sensitive detectors is traditionally based on signal-to-noise ratio thresholds. For detectors with a very low signal-to-noise ratio, e.g., as a result of radiation damage, the total collected charge obtained from such clusters is biased toward greater signal values, resulting in an overestimate. In this paper an unbiased method to measure the charge collection of a silicon strip detector in a test beam environment is presented, based on constructing clusters around the impact point of a reference track.

10.1109/tns.2010.2050905 article EN IEEE Transactions on Nuclear Science 2010-07-21
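
The unbiased measurement described above can be contrasted with threshold-based clustering in a small sketch: instead of seeding clusters on strips that pass a signal-to-noise cut, the charge is summed in a fixed window of strips around the impact point predicted by a reference track. The window size and data layout below are illustrative.

```cpp
// Two ways to estimate cluster charge in a strip detector (illustrative sketch).
#include <iostream>
#include <vector>

// Threshold-based: sum strips whose signal-to-noise exceeds a cut. With low S/N
// this preferentially selects upward noise fluctuations and biases the charge high.
double thresholdCharge(const std::vector<double>& adc, double noise, double snCut) {
  double sum = 0.0;
  for (double q : adc)
    if (q / noise > snCut) sum += q;
  return sum;
}

// Track-based: sum a fixed window of strips centered on the impact point of a
// reference track, independent of the signal size, giving an unbiased estimate.
double trackWindowCharge(const std::vector<double>& adc, int impactStrip, int halfWidth) {
  double sum = 0.0;
  for (int s = impactStrip - halfWidth; s <= impactStrip + halfWidth; ++s)
    if (s >= 0 && s < static_cast<int>(adc.size())) sum += adc[s];
  return sum;
}

int main() {
  std::vector<double> adc = {0.2, 0.5, 3.1, 4.0, 2.8, 0.4, -0.1};  // toy strip signals
  std::cout << "S/N-threshold charge: " << thresholdCharge(adc, 1.0, 3.0) << "\n";
  std::cout << "track-window charge:  " << trackWindowCharge(adc, 3, 2) << "\n";
}
```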

The High-Luminosity Large Hadron Collider at CERN will be characterized by greater pileup of events and higher occupancy, making track reconstruction even more computationally demanding. Existing algorithms at the LHC are based on Kalman filter techniques with proven excellent physics performance under a variety of conditions. Starting in 2014, we have been developing Kalman-filter-based methods for track finding and fitting adapted to the many-core SIMD processors that are becoming dominant in high-performance computing systems...

10.1051/epjconf/201921402002 article EN cc-by EPJ Web of Conferences 2019-01-01
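
A key ingredient of adapting the Kalman filter to SIMD processors, as in the entry above, is arranging many track candidates so the same operation can be applied across vector lanes. The sketch below shows the structure-of-arrays idea with a trivial propagation step; it is loosely inspired by the Matriplex layout used in mkFit, but the field names and the propagation formula are illustrative only.

```cpp
// Structure-of-arrays layout: the i-th element of each array belongs to track i,
// so a loop over i applies the same arithmetic to many tracks and can vectorize.
#include <cstddef>
#include <iostream>
#include <vector>

struct TrackSoA {
  std::vector<float> x, y, px, py;  // toy 2D state per track
  explicit TrackSoA(std::size_t n) : x(n), y(n), px(n, 1.f), py(n, 0.5f) {}
  std::size_t size() const { return x.size(); }
};

// Straight-line "propagation" of all tracks by a step dt; the compiler can
// auto-vectorize this loop because the iterations are independent.
void propagate(TrackSoA& t, float dt) {
  for (std::size_t i = 0; i < t.size(); ++i) {
    t.x[i] += t.px[i] * dt;
    t.y[i] += t.py[i] * dt;
  }
}

int main() {
  TrackSoA tracks(1024);
  propagate(tracks, 0.1f);
  std::cout << "track 0 position: (" << tracks.x[0] << ", " << tracks.y[0] << ")\n";
}
```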

The formation of clusters in the data analysis of position-sensitive detectors is traditionally based on signal-to-noise ratio thresholds. For detectors with a very low signal-to-noise ratio, e.g. as a result of radiation damage, the total collected charge obtained from such clusters is biased toward greater signal values, resulting in an overestimate. In this paper an unbiased method to measure the charge collection in a test beam environment is presented, based on constructing clusters around the impact point of a reference track.

10.1109/nssmic.2009.5402394 article EN 2009-10-01

The CMS experiment started to utilize Graphics Processing Units (GPUs) to accelerate the online reconstruction and event selection running on its High Level Trigger (HLT) farm in the 2022 data-taking period. The projections of the HLT farm for the High-Luminosity LHC foresee a significant use of compute accelerators from Run 4 onwards in order to keep the cost, size, and power budget under control. This direction of leveraging accelerators has synergies with the increasing use of HPC resources in HEP computing, as these machines are employing more and more accelerators that are predominantly GPUs...

10.1051/epjconf/202429511017 article EN cc-by EPJ Web of Conferences 2024-01-01

One of the most computationally challenging problems expected for the High-Luminosity Large Hadron Collider (HL-LHC) is finding and fitting particle tracks during event reconstruction. Algorithms used at the LHC today rely on Kalman filtering, which builds physical trajectories incrementally while incorporating material effects and error estimation. Recognizing the need for faster computational throughput, we have adapted Kalman-filter-based methods to highly parallel, many-core SIMD and SIMT architectures that are...

10.1051/epjconf/202024502013 article EN cc-by EPJ Web of Conferences 2020-01-01

High-energy physics (HEP) experiments have developed millions of lines of code over decades that are optimized to run on traditional x86 CPU systems. However, a rapidly increasing fraction of the floating point computing power in leadership-class facilities and data centers is coming from new accelerator architectures, such as GPUs. HEP is now faced with the untenable prospect of rewriting this code for the increasingly dominant architectures found in these computational accelerators. This task is made more...

10.48550/arxiv.2306.15869 preprint EN cc-by arXiv (Cornell University) 2023-01-01

Next generation High-Energy Physics (HEP) experiments are presented with significant computational challenges, both in terms of data volume and processing power. Using compute accelerators, such as GPUs, is one of the promising ways to provide the necessary computing power to meet the challenge. The current programming models for compute accelerators often involve using architecture-specific languages promoted by the hardware vendors and hence limit the set of platforms that the code can run on. Developing software with platform restrictions...

10.48550/arxiv.2401.14221 preprint EN cc-by arXiv (Cornell University) 2024-01-01

Next generation High-Energy Physics (HEP) experiments are presented with significant computational challenges, both in terms of data volume and processing power. Using compute accelerators, such as GPUs, is one of the promising ways to provide the necessary computing power to meet the challenge. The current programming models for compute accelerators often involve using architecture-specific languages promoted by the hardware vendors and hence limit the set of platforms that the code can run on. Developing software with platform restrictions...

10.1051/epjconf/202429511003 article EN cc-by EPJ Web of Conferences 2024-01-01

mkFit is an implementation of the Kalman filter-based track reconstruction algorithm that exploits both thread- and data-level parallelism. In the past few years the project transitioned from the R&D phase to deployment in the Run-3 offline workflow of the CMS experiment. The CMS tracking performs a series of iterations, targeting tracks of increasing difficulty after removing hits associated with tracks found in previous iterations. mkFit has been adopted for several of these iterations, which contribute the majority of reconstructed tracks. When tested in the standard...

10.1051/epjconf/202429503019 article EN cc-by EPJ Web of Conferences 2024-01-01
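
The iterative strategy mentioned above, running an iteration, removing hits used by accepted tracks, then rerunning on the remaining hits, can be sketched in a few lines. The types and the findTracks call are placeholders, not the mkFit or CMSSW interfaces.

```cpp
// Iterative tracking skeleton: each pass sees only hits not yet claimed.
#include <cstddef>
#include <iostream>
#include <vector>

struct Track { std::vector<std::size_t> hitIndices; };

// Placeholder for a real pattern-recognition step; here it just claims
// unused hits in pairs so the skeleton runs end to end.
std::vector<Track> findTracks(const std::vector<bool>& hitUsed) {
  std::vector<Track> out;
  Track t;
  for (std::size_t i = 0; i < hitUsed.size(); ++i) {
    if (hitUsed[i]) continue;
    t.hitIndices.push_back(i);
    if (t.hitIndices.size() == 2) { out.push_back(t); t.hitIndices.clear(); }
  }
  return out;
}

int main() {
  const std::size_t nHits = 10;
  std::vector<bool> hitUsed(nHits, false);
  std::vector<Track> allTracks;

  const int nIterations = 3;  // later iterations target harder-to-find tracks
  for (int it = 0; it < nIterations; ++it) {
    std::vector<Track> found = findTracks(hitUsed);
    for (const Track& t : found)
      for (std::size_t h : t.hitIndices) hitUsed[h] = true;  // mask used hits
    allTracks.insert(allTracks.end(), found.begin(), found.end());
  }
  std::cout << "total tracks: " << allTracks.size() << "\n";
}
```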

In the past years the landscape of tools for expressing parallel algorithms in a portable way across various compute accelerators has continued to evolve significantly. There are many technologies on the market that provide portability between CPUs, GPUs from several vendors, and in some cases even FPGAs. These include C++ libraries such as Alpaka and Kokkos, compiler directives such as OpenMP, SYCL as an open specification that can be implemented as a library or in a compiler, and standard C++ where the compiler is solely responsible for the offloading. Given...

10.1051/epjconf/202429511008 article EN cc-by EPJ Web of Conferences 2024-01-01
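
One of the options surveyed above is standard C++ parallelism, where the compiler and runtime are responsible for any offloading. A minimal sketch with C++17 parallel algorithms (the transform itself is arbitrary):

```cpp
// Standard C++ parallel algorithm: the same source can run on CPU threads or,
// with suitable compilers (e.g. nvc++ -stdpar), be offloaded to a GPU.
#include <algorithm>
#include <execution>
#include <iostream>
#include <vector>

int main() {
  std::vector<float> v(1 << 20, 1.0f);

  std::transform(std::execution::par_unseq, v.begin(), v.end(), v.begin(),
                 [](float x) { return 2.0f * x + 1.0f; });

  std::cout << "v[0] = " << v[0] << "\n";
}
```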

Traditionally, high energy physics (HEP) experiments have relied on x86 CPUs for the majority of their significant computing needs. As the field looks ahead to next-generation experiments such as DUNE and the High-Luminosity LHC, computing demands are expected to increase dramatically. To cope with this increase, it will be necessary to take advantage of all available resources, including GPUs from different vendors. A broad landscape of code portability tools -- compiler pragma-based approaches, abstraction libraries, and other tools that allow...

10.48550/arxiv.2409.09228 preprint EN arXiv (Cornell University) 2024-09-13

Traditionally, high energy physics (HEP) experiments have relied on x86 CPUs for the majority of their significant computing needs. As the field looks ahead to next-generation experiments such as DUNE and the High-Luminosity LHC, computing demands are expected to increase dramatically. To cope with this increase, it will be necessary to take advantage of all available resources, including GPUs from different vendors. A broad landscape of code portability tools -- including compiler pragma-based approaches, abstraction libraries, and other...

10.3389/fdata.2024.1485344 article EN cc-by Frontiers in Big Data 2024-10-23
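
Among the pragma-based approaches referenced above, OpenMP target offload is the most common; here is a minimal, hedged sketch (the loop body is arbitrary):

```cpp
// OpenMP target offload: the pragma asks the compiler to run the loop on a device,
// falling back to the host if no device is available.
#include <cstdio>
#include <vector>

int main() {
  const int n = 1 << 20;
  std::vector<float> a(n, 1.0f), b(n, 2.0f);
  float* pa = a.data();
  float* pb = b.data();

  #pragma omp target teams distribute parallel for map(to: pb[0:n]) map(tofrom: pa[0:n])
  for (int i = 0; i < n; ++i)
    pa[i] += 2.0f * pb[i];

  std::printf("a[0] = %f\n", pa[0]);
}
```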

Programming for a diverse set of compute accelerators in addition to the CPU is a challenge. Maintaining separate source code for each architecture would require a lot of effort, and the development of new algorithms would be daunting if it had to be repeated many times. Fortunately there are several portability technologies on the market, such as Alpaka, Kokkos, and SYCL. These aim to improve the developer's productivity by making it possible to use the same source code for different architectures. In this paper we use the heterogeneous pixel reconstruction from the CMS...

10.1051/epjconf/202125103034 article EN cc-by EPJ Web of Conferences 2021-01-01

Building particle tracks is the most computationally intense step of event reconstruction at the LHC. With the increased instantaneous luminosity and associated increase in pileup expected from the High-Luminosity LHC, the computational challenge of track finding and fitting requires novel solutions. The current algorithms used at the LHC are based on Kalman filter methods that achieve good physics performance. By adapting these techniques for use on many-core SIMD architectures such as the Intel Xeon Phi and (to a limited degree) NVIDIA...

10.48550/arxiv.1906.11744 preprint EN cc-by arXiv (Cornell University) 2019-01-01

An OpenStack-based private cloud with the Cluster File System has been built and used for both CMS analysis and Monte Carlo simulation jobs in the Datacenter Indirection Infrastructure for Secure High Energy Physics (DII-HEP) project. On the cloud we run the ARC middleware, which allows running the applications without changes on the job submission side. Our test results indicate that the adopted approach provides a scalable and resilient solution for managing resources without compromising performance and high availability.

10.1088/1742-6596/608/1/012010 article EN Journal of Physics Conference Series 2015-05-22

The LHC simulation frameworks are already confronting the High Luminosity LHC (HL-LHC) era. In order to design and evaluate the performance of the HL-LHC detector upgrades, realistic simulations of the future detectors under the extreme luminosity conditions they may encounter have to be performed now. The use of many individual minimum-bias interactions to model pileup poses several challenges to the CMS Simulation framework, including huge memory consumption, increased computation time, and the necessary handling of large numbers of event files during...

10.1109/escience.2018.00090 article EN 2018-10-01
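
The pileup modelling challenge described above ultimately amounts to overlaying a Poisson-distributed number of minimum-bias events on each hard-scatter event. A toy sketch of that sampling step (the event representation is just a pool index here, and the pool size is illustrative):

```cpp
// Toy pileup mixing: draw N ~ Poisson(mu) minimum-bias events per hard-scatter event
// from a pre-generated pool. Real frameworks stream these from many event files.
#include <iostream>
#include <random>
#include <vector>

int main() {
  const double mu = 200.0;               // average pileup expected at the HL-LHC
  const std::size_t poolSize = 100000;   // size of the minimum-bias pool (illustrative)

  std::mt19937 rng(12345);
  std::poisson_distribution<int> nPileup(mu);
  std::uniform_int_distribution<std::size_t> pick(0, poolSize - 1);

  const int nEvents = 3;
  for (int ev = 0; ev < nEvents; ++ev) {
    int n = nPileup(rng);
    std::vector<std::size_t> mixed;      // indices of minimum-bias events to overlay
    for (int i = 0; i < n; ++i) mixed.push_back(pick(rng));
    std::cout << "event " << ev << ": overlaying " << mixed.size()
              << " minimum-bias interactions\n";
  }
}
```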