P. Wittich

ORCID: 0000-0002-7401-2181
Research Areas
  • Particle physics theoretical and experimental studies
  • High-Energy Particle Collisions Research
  • Quantum Chromodynamics and Particle Interactions
  • Particle Detector Development and Performance
  • Dark Matter and Cosmic Phenomena
  • Cosmology and Gravitation Theories
  • Neutrino Physics Research
  • Computational Physics and Python Applications
  • Astrophysics and Cosmic Phenomena
  • Black Holes and Theoretical Physics
  • Medical Imaging Techniques and Applications
  • Particle Accelerators and Free-Electron Lasers
  • Radiation Detection and Scintillator Technologies
  • Distributed and Parallel Computing Systems
  • Superconducting Materials and Applications
  • Advanced Data Storage Technologies
  • Atomic and Subatomic Physics Research
  • Algorithms and Data Compression
  • Nuclear reactor physics and engineering
  • International Science and Diplomacy
  • Particle accelerators and beam dynamics
  • Radiation Therapy and Dosimetry
  • Gamma-ray bursts and supernovae
  • Nuclear physics research studies
  • Nuclear Physics and Applications

Cornell University
2016-2025

Institute of High Energy Physics
2012-2024

A. Alikhanyan National Laboratory
2022-2024

University of Antwerp
2024

Fermi National Accelerator Laboratory
2013-2023

University of Notre Dame
2019-2021

University at Buffalo, State University of New York
2017-2021

University of Colorado System
2020

University of Massachusetts Amherst
2020

National and Kapodistrian University of Athens
2011-2017

Abstract New developments in liquid scintillators, high-efficiency fast photon detectors, and chromatic photon sorting have opened up the possibility of building a large-scale detector that can discriminate between Cherenkov and scintillation signals. Such a detector could reconstruct particle direction and species using Cherenkov light while also having the excellent energy resolution and low threshold of a scintillator detector. Situated deep underground, and utilizing new techniques in computing and reconstruction, such a detector could achieve unprecedented...

10.1140/epjc/s10052-020-7977-8 article EN cc-by The European Physical Journal C 2020-05-01

The Apollo Advanced Telecommunications Computing Architecture (ATCA) platform is an open-source design consisting of a generic "Service Module" (SM) and a customizable "Command Module" (CM), allowing for cost-effective use in applications such as the readout of the inner tracker and the Level-1 track trigger for the CMS Phase-II upgrade at the HL-LHC. The SM integrates an intelligent IPMC, robust power entry and conditioning systems, a powerful system-on-module computer, and a flexible clock and communication infrastructure. The CM is designed around two...

10.48550/arxiv.2501.03702 preprint EN arXiv (Cornell University) 2025-01-07

Abstract The Apollo Advanced Telecommunications Computing Architecture (ATCA) platform is an open-source design consisting of a generic “Service Module” (SM) and a customizable “Command Module” (CM), allowing for cost-effective use in applications such as the readout of the inner tracker and the Level-1 track trigger for the CMS Phase-II upgrade at the HL-LHC. The SM integrates an intelligent IPMC, robust power entry and conditioning systems, a powerful system-on-module computer, and a flexible clock and communication infrastructure. The CM is designed...

10.1088/1748-0221/20/04/c04001 article EN Journal of Instrumentation 2025-04-01

We describe the new CDF Level 2 Trigger, which was commissioned during Spring 2005. The upgrade was necessitated by several factors, including increased bandwidth requirements in view of the growing instantaneous luminosity of the Tevatron, and the need for a more robust system, since the older system was reaching the limits of maintainability. The challenges in designing the new system were interfacing with many different upstream detector subsystems, processing larger volumes of data at higher speed, and minimizing the impact on the running experiment...

10.1109/tns.2006.871782 article EN IEEE Transactions on Nuclear Science 2006-04-01

One of the most computationally challenging problems expected for the High-Luminosity Large Hadron Collider (HL-LHC) is determining the trajectory of charged particles during event reconstruction. Algorithms used at the LHC today rely on Kalman filtering, which builds physical trajectories incrementally while incorporating material effects and error estimation. Recognizing the need for faster computational throughput, we have adapted Kalman-filter-based methods to highly parallel, many-core SIMD architectures that...

10.1088/1748-0221/15/09/p09030 article EN Journal of Instrumentation 2020-09-22
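The adaptation described above rests on applying the same small update step to many track candidates in lockstep, which maps naturally onto SIMD hardware. As an illustration only (a scalar-state toy sketch, not the authors' actual implementation, which batches small matrix operations), the Kalman measurement update can be vectorized over N tracks at once with NumPy:

```python
import numpy as np

def batched_kalman_update(x, P, z, R, H=1.0):
    """One Kalman measurement update applied to N tracks at once.

    x, P : arrays of shape (N,) -- state estimates and their variances
    z    : array of shape (N,) -- one new hit (measurement) per track
    R    : measurement variance (scalar or shape (N,))
    The scalar state and all names here are illustrative simplifications.
    """
    y = z - H * x              # innovation (residual) for every track
    S = H * P * H + R          # innovation variance
    K = P * H / S              # Kalman gain, one vectorized division
    x_new = x + K * y          # updated state estimates
    P_new = (1.0 - K * H) * P  # updated variances
    return x_new, P_new

# Toy usage: 4 tracks, prior variance 1.0, measurement variance 1.0
x = np.zeros(4)
P = np.ones(4)
z = np.array([1.0, -1.0, 2.0, 0.5])
x_new, P_new = batched_kalman_update(x, P, z, R=1.0)
# With P = R the gain is 0.5, so each update splits the difference:
# x_new -> [0.5, -0.5, 1.0, 0.25], P_new -> [0.5, 0.5, 0.5, 0.5]
```

Because every operation is element-wise over the track axis, NumPy (or a vectorizing compiler) can issue it as wide vector instructions; the production algorithms organize 6x6 track-state matrices in the same structure-of-arrays spirit.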

The adoption of large-scale distributed computing presents new opportunities and challenges for the physicists analyzing data from the Large Hadron Collider experiments. With petabytes of data to manage, effective use of provenance information is critical to understanding results.

10.1109/mcse.2008.81 article EN Computing in Science & Engineering 2008-04-15
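Provenance here means recording enough about a derived dataset's inputs and processing conditions that a result can be traced back and reproduced. A minimal sketch of the idea, assuming hypothetical file names and configuration keys (this is not the experiments' actual bookkeeping tooling):

```python
import hashlib
import json

def provenance_record(input_files, software_version, config):
    """Summarize what went into a derived dataset: the input files,
    a digest over them, the software release, and the job configuration."""
    digest = hashlib.sha256(
        json.dumps(sorted(input_files)).encode("utf-8")
    ).hexdigest()
    return {
        "inputs": sorted(input_files),
        "inputs_sha256": digest,
        "software_version": software_version,
        "config": config,
    }

# Hypothetical usage: two input files, a made-up release tag and trigger path
rec = provenance_record(
    ["run1.root", "run2.root"], "release_1_2_3", {"trigger": "mu9"}
)
```

Storing such a record alongside every derived dataset lets an analyst ask later which inputs and settings produced a given plot, which is the "effective use of provenance" the paper argues becomes critical at petabyte scale.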

10.1016/j.nima.2011.10.024 article EN Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 2011-10-28

During the High-Luminosity LHC era, the CMS detector will need charged particle tracking at the hardware trigger level to maintain a manageable trigger rate and achieve its physics goals. The tracklet approach is a track-finding algorithm based on a road search that has been implemented in commercially available FPGA technology. The algorithm achieved high performance and completes track finding within 3.4 μs on a Xilinx Virtex-7 FPGA. An overview of the implementation is given, results are shown from a demonstrator test stand, and system studies are presented.

10.1051/epjconf/201715000016 article EN cc-by EPJ Web of Conferences 2017-01-01
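The road-search idea behind the tracklet algorithm can be sketched in a few lines: pair hits in two seeding layers, project the resulting seed to an outer layer, and keep combinations that have a hit inside a narrow window (the "road"). This is a simplified straight-line toy in plain Python, with made-up layer radii and window size; the real algorithm runs in FPGA firmware and handles curved trajectories in a magnetic field:

```python
def find_tracklets(hits_l1, hits_l2, hits_l3,
                   r=(1.0, 2.0, 3.0), window=0.05):
    """Form seeds from layer-1/layer-2 hit pairs, project each seed to
    layer 3, and keep seeds that find a hit inside the search road.
    Hits are 1-D coordinates at each layer radius (toy model only)."""
    r1, r2, r3 = r
    tracks = []
    for p1 in hits_l1:
        for p2 in hits_l2:
            slope = (p2 - p1) / (r2 - r1)   # seed from the hit pair
            proj = p2 + slope * (r3 - r2)   # project to the outer layer
            for p3 in hits_l3:
                if abs(p3 - proj) < window: # hit lies inside the road
                    tracks.append((p1, p2, p3))
    return tracks

# Toy usage: one seed projects to 0.2; the hit at 0.8 is outside the road
found = find_tracklets([0.0], [0.1], [0.2, 0.8])
# -> [(0.0, 0.1, 0.2)]
```

The narrow road is what keeps the combinatorics tractable: only hit triples consistent with a single trajectory survive, which is why the approach maps well onto fixed-latency FPGA pipelines.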

Power density constraints are limiting the performance improvements of modern CPUs. To address this, we have seen the introduction of lower-power, multi-core processors such as GPGPUs, ARM, and Intel MIC. In order to achieve the theoretical performance gains of these processors, it will be necessary to parallelize algorithms to exploit larger numbers of lightweight cores and specialized functions like large vector units. Track finding and fitting is one of the most computationally challenging problems for event reconstruction in particle...

10.1051/epjconf/201612700010 article EN cc-by EPJ Web of Conferences 2016-01-01

The high instantaneous luminosities expected following the upgrade of the Large Hadron Collider (LHC) to the High-Luminosity LHC (HL-LHC) pose major experimental challenges for the CMS experiment. A central component to allow efficient operation under these conditions is the reconstruction of charged particle trajectories and their inclusion in the hardware-based trigger system. There are many challenges involved in achieving this: a large input data rate of about 20-40 Tb/s; processing a new batch of data every 25 ns, each consisting of 15,000...

10.1088/1748-0221/15/06/p06024 article EN Journal of Instrumentation 2020-06-23
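The numbers quoted above set the scale of the problem directly: a new batch every 25 ns corresponds to the 40 MHz LHC bunch-crossing rate, and dividing the aggregate input bandwidth by that rate gives the data volume per crossing. A back-of-envelope check, taking 30 Tb/s as a representative midpoint of the quoted 20-40 Tb/s range (the midpoint choice is an assumption for illustration):

```python
# Back-of-envelope numbers from the quoted HL-LHC track-trigger inputs.
batch_period_s = 25e-9                    # one batch of data every 25 ns
crossing_rate_hz = 1.0 / batch_period_s   # -> 40 MHz bunch-crossing rate

input_bandwidth_bps = 30e12               # 30 Tb/s, mid-range of 20-40 Tb/s
bits_per_crossing = input_bandwidth_bps / crossing_rate_hz

print(f"crossing rate: {crossing_rate_hz / 1e6:.0f} MHz")
print(f"data per crossing: {bits_per_crossing / 1e3:.0f} kb "
      f"(~{bits_per_crossing / 8 / 1e3:.0f} kB)")
```

This works out to roughly 750 kb (about 94 kB) of tracker data per 25 ns bunch crossing, which is what the hardware trigger must digest at fixed latency.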

Power density constraints are limiting the performance improvements of modern CPUs. To address this, we have seen the introduction of lower-power, multi-core processors, but the future will be even more exciting. In order to stay within power limits but still obtain Moore's Law performance/price gains, it will be necessary to parallelize algorithms to exploit larger numbers of lightweight cores and specialized functions like large vector units. Example technologies today include Intel's Xeon Phi and GPGPUs. Track finding...

10.1088/1742-6596/664/7/072008 article EN Journal of Physics Conference Series 2015-12-23

For over a decade now, physical and energy constraints have limited clock speed improvements in commodity microprocessors. Instead, chipmakers have been pushed into producing lower-power, multi-core processors such as GPGPUs, ARM, and Intel MIC. Broad-based efforts from manufacturers and developers have been devoted to making these processors user-friendly enough to perform general computations. However, extracting performance from a larger number of cores, as well as from specialized vector or SIMD units, requires special care in algorithm design...

10.1051/epjconf/201715000006 article EN cc-by EPJ Web of Conferences 2017-01-01

Power density constraints are limiting the performance improvements of modern CPUs. To address this, we have seen the introduction of lower-power, multi-core processors, but the future will be even more exciting. In order to stay within power limits but still obtain Moore's Law performance/price gains, it will be necessary to parallelize algorithms to exploit larger numbers of lightweight cores and specialized functions like large vector units. Example technologies today include Intel's Xeon Phi and GPGPUs. Track finding...

10.1088/1742-6596/608/1/012057 article EN Journal of Physics Conference Series 2015-05-22

Power density constraints are limiting the performance improvements of modern CPUs. To address this, we have seen the introduction of lower-power, multi-core processors such as GPGPUs, ARM, and Intel MIC. To stay within power limits but still obtain Moore's Law performance/price gains, it will be necessary to parallelize algorithms to exploit larger numbers of lightweight cores and specialized functions like large vector units. Track finding and fitting is one of the most computationally challenging problems for event...

10.1109/nssmic.2015.7581932 preprint EN 2015 IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC) 2015-10-01

The CMS experiment will collect data from the proton-proton collisions delivered by the Large Hadron Collider (LHC) at a centre-of-mass energy up to 14 TeV. The trigger system is designed to cope with unprecedented luminosities and LHC bunch-crossing rates of up to 40 MHz. The unique architecture employs only two trigger levels. The Level-1 trigger is implemented using custom electronics, while the High Level Trigger (HLT) is based on software algorithms running on a large cluster of commercial processors, the Event Filter Farm. We present major...

10.1088/1748-0221/4/10/p10005 article EN Journal of Instrumentation 2009-10-19

We report on the progress of our studies towards a Kalman filter track reconstruction algorithm with optimal performance on manycore architectures. The combinatorial structure of these algorithms is not immediately compatible with an efficient SIMD (or SIMT) implementation; the challenge for us is to recast the existing software so it can readily generate hundreds of shared-memory threads that exploit the underlying instruction set of modern processors. We show how the data and the associated tasks can be organized in a way conducive to both...

10.1088/1742-6596/898/4/042051 article EN Journal of Physics Conference Series 2017-10-01

The CDF data acquisition and trigger system is being upgraded to significantly increase the bandwidth for the upcoming high luminosity running of the Tevatron Collider (Run IIb). This paper focuses on the upgrade of the Level 2 (L2) decision crate. The crate at the heart of L2 has to interface with many different subsystems both upstream and downstream. The challenge of this upgrade is to have a uniform design that is able to receive the data paths from upstream, merge and process them at high speed for fast decision making, and minimize the impact on the experiment during the commissioning phase. In order to meet this challenge,...

10.1109/nssmic.2004.1462389 article EN IEEE Symposium Conference Record Nuclear Science 2004. 2005-08-10

Faced with physical and energy density limitations on clock speed, contemporary microprocessor designers have increasingly turned to on-chip parallelism for performance gains. Algorithms should accordingly be designed with ample amounts of fine-grained parallelism if they are to realize the full potential of the hardware. This requirement can be challenging for algorithms that are naturally expressed as a sequence of small-matrix operations, such as the Kalman filter methods widely in use in high-energy physics experiments. In the High-Luminosity Large...

10.1088/1742-6596/1085/4/042016 article EN Journal of Physics Conference Series 2018-09-01

Interest in parallel architectures applied to real time selections is growing in High Energy Physics (HEP) experiments. In this paper we describe performance measurements of Graphic Processing Units (GPUs) and the Intel Many Integrated Core architecture (MIC) when applied to a typical HEP online task: the selection of events based on the trajectories of charged particles. We use as a benchmark a scaled-up version of the algorithm used at the CDF experiment at the Tevatron for track reconstruction - SVT - as a realistic test-case for low-latency...

10.1088/1742-6596/513/1/012002 article EN Journal of Physics Conference Series 2014-06-11

The High-Luminosity Large Hadron Collider at CERN will be characterized by greater pileup of events and higher occupancy, making track reconstruction even more computationally demanding. Existing algorithms at the LHC are based on Kalman filter techniques, with proven excellent physics performance under a variety of conditions. Starting in 2014, we have been developing Kalman-filter-based methods for track finding and fitting adapted for the many-core SIMD processors that are becoming dominant in high-performance systems...

10.1051/epjconf/201921402002 article EN cc-by EPJ Web of Conferences 2019-01-01

The challenging conditions of the High-Luminosity LHC require tailored hardware designs for trigger and data acquisition systems. The Apollo platform features a "Service Module" with a powerful system-on-module computer that provides standard ATCA communications, and application-specific "Command Modules" with large FPGAs and high-speed optical fiber links. The CMS version will be used for the track finder and the pixel readout. It supports up to two large FPGAs and more than 100 optical links with speeds of up to 25 Gb/s. We carefully study the design and performance of the board by using...

10.1088/1748-0221/17/04/c04033 article EN cc-by Journal of Instrumentation 2022-04-01