- Theoretical and Experimental Particle Physics Studies
- High-Energy Particle Collisions Research
- Quantum Chromodynamics and Particle Interactions
- Particle Detector Development and Performance
- Dark Matter and Cosmic Phenomena
- Cosmology and Gravitation Theories
- Neutrino Physics Research
- Computational Physics and Python Applications
- Astrophysics and Cosmic Phenomena
- Black Holes and Theoretical Physics
- Medical Imaging Techniques and Applications
- Particle Accelerators and Free-Electron Lasers
- Radiation Detection and Scintillator Technologies
- Distributed and Parallel Computing Systems
- Superconducting Materials and Applications
- Advanced Data Storage Technologies
- Atomic and Subatomic Physics Research
- Algorithms and Data Compression
- Nuclear Reactor Physics and Engineering
- International Science and Diplomacy
- Particle Accelerators and Beam Dynamics
- Radiation Therapy and Dosimetry
- Gamma-Ray Bursts and Supernovae
- Nuclear Physics Research Studies
- Nuclear Physics and Applications
Cornell University
2016-2025
Institute of High Energy Physics
2012-2024
A. Alikhanyan National Laboratory
2022-2024
University of Antwerp
2024
Fermi National Accelerator Laboratory
2013-2023
University of Notre Dame
2019-2021
University at Buffalo, State University of New York
2017-2021
University of Colorado System
2020
University of Massachusetts Amherst
2020
National and Kapodistrian University of Athens
2011-2017
Abstract: New developments in liquid scintillators, high-efficiency fast photon detectors, and chromatic photon sorting have opened up the possibility of building a large-scale detector that can discriminate between Cherenkov and scintillation signals. Such a detector could reconstruct particle direction and species using Cherenkov light while also retaining the excellent energy resolution and low threshold of a scintillator detector. Situated deep underground, and utilizing new techniques in computing and reconstruction, such a detector could achieve unprecedented...
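The Cherenkov/scintillation separation described above exploits the fact that Cherenkov light is prompt while scintillation light is emitted over a characteristic decay time. The following toy sketch (not the paper's actual method) illustrates the idea with a simple cut on photon arrival time; the decay constant, detector time resolution, and cut value are all illustrative assumptions.

```python
import random

# Toy model: Cherenkov photons are prompt, scintillation photons follow an
# exponential decay, so a cut on arrival time separates the populations.
# All parameters below are illustrative assumptions, not detector values.
random.seed(0)

SCINT_DECAY_NS = 10.0   # assumed scintillator decay constant
RESOLUTION_NS = 0.1     # assumed transit-time spread of a fast photodetector
TIME_CUT_NS = 1.0       # photons earlier than this are tagged as Cherenkov

def photon_time(is_cherenkov: bool) -> float:
    """Arrival time relative to the event, smeared by detector resolution."""
    emission = 0.0 if is_cherenkov else random.expovariate(1.0 / SCINT_DECAY_NS)
    return emission + random.gauss(0.0, RESOLUTION_NS)

def tag(t: float) -> str:
    return "cherenkov" if t < TIME_CUT_NS else "scintillation"

# Generate a mixed sample and check how well the time cut separates it.
photons = [(True, photon_time(True)) for _ in range(1000)] + \
          [(False, photon_time(False)) for _ in range(1000)]
correct = sum(tag(t) == ("cherenkov" if c else "scintillation")
              for c, t in photons)
print(f"correctly tagged: {correct}/{len(photons)}")
```

With these assumed parameters essentially all Cherenkov photons pass the cut, while only the small early tail of the scintillation distribution is mistagged.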
The Apollo Advanced Telecommunications Computing Architecture (ATCA) platform is an open-source design consisting of a generic "Service Module" (SM) and a customizable "Command Module" (CM), allowing for cost-effective use in applications such as the readout of the inner tracker and the Level-1 track trigger for the CMS Phase-II upgrade at the HL-LHC. The SM integrates an intelligent IPMC, robust power entry and conditioning systems, a powerful system-on-module computer, and a flexible clock and communication infrastructure. The CM is designed around two...
We describe the new CDF Level 2 Trigger, which was commissioned during Spring 2005. The upgrade was necessitated by several factors, including increased bandwidth requirements in view of the growing instantaneous luminosity of the Tevatron, and the need for a more robust system, since the older system was reaching the limits of maintainability. The challenges in designing the new system were interfacing with many different upstream detector subsystems, processing larger volumes of data at higher speed, and minimizing the impact on the running experiment...
One of the most computationally challenging problems expected for the High-Luminosity Large Hadron Collider (HL-LHC) is determining the trajectories of charged particles during event reconstruction. Algorithms used at the LHC today rely on Kalman filtering, which builds physical trajectories incrementally while incorporating material effects and error estimation. Recognizing the need for faster computational throughput, we have adapted Kalman-filter-based methods to highly parallel, many-core SIMD architectures that...
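The building block of the Kalman-filter fitting mentioned above is the measurement update, which combines a predicted track state with a new hit. The sketch below shows one such update for a toy 2D state (position, slope) and a 1D position measurement; a real tracker update would also propagate the state through the magnetic field and material, and all matrices and values here are illustrative assumptions.

```python
import numpy as np

# One Kalman-filter measurement update: combine the predicted state x
# (with covariance P) with a measurement z taken through model H with
# noise covariance R. Toy 2D state, 1D measurement; values illustrative.

def kalman_update(x, P, z, H, R):
    """Update state x and covariance P with measurement z."""
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x + K @ (z - H @ x)           # updated state estimate
    P_new = (np.eye(len(x)) - K @ H) @ P  # updated covariance
    return x_new, P_new

x = np.array([0.0, 1.0])      # prior: position 0, slope 1
P = np.diag([1.0, 0.5])       # prior uncertainty
H = np.array([[1.0, 0.0]])    # we measure position only
R = np.array([[0.01]])        # measurement noise
z = np.array([0.2])           # observed hit position

x_new, P_new = kalman_update(x, P, z, H, R)
print(x_new, P_new[0, 0])
```

The update pulls the position estimate toward the precise hit and shrinks its uncertainty, while the unmeasured slope is unchanged here because the prior position/slope covariance is diagonal.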
The adoption of large-scale distributed computing presents new opportunities and challenges for the physicists analyzing data from the Large Hadron Collider experiments. With petabytes of data to manage, effective use of provenance is critical to understanding results.
During the High Luminosity LHC era, the CMS detector will need charged-particle tracking at the hardware trigger level to maintain a manageable trigger rate and achieve its physics goals. The tracklet approach is a road-search-based track-finding algorithm that has been implemented on commercially available FPGA technology. The implementation has achieved high performance and completes tracking within 3.4 μs on a Xilinx Virtex-7 FPGA. An overview of the implementation is given, results are shown from a demonstrator test stand, and system studies are presented.
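The road-search idea behind the tracklet approach can be sketched in a few lines: a pair of hits in adjacent layers defines a "tracklet" whose projection opens a road in the next layer, and hits inside the road are attached to the candidate. The toy below uses straight-line projection in a made-up three-layer geometry; the real algorithm runs in FPGA firmware with curved trajectories, so the geometry, window size, and data here are purely illustrative.

```python
# Toy road search: seed a tracklet from hits in two inner layers, project
# it to the outer layer, and keep outer hits inside the road window.
# Hits are (r, z) pairs; geometry and window are illustrative assumptions.

LAYER_R = [1.0, 2.0, 3.0]    # radii of three detector layers
ROAD_HALF_WIDTH = 0.05       # matching window in the outer layer

def project(hit_inner, hit_outer, r_target):
    """Straight-line projection of a two-hit tracklet to radius r_target."""
    r1, z1 = hit_inner
    r2, z2 = hit_outer
    slope = (z2 - z1) / (r2 - r1)
    return z1 + slope * (r_target - r1)

def find_tracks(hits_by_layer):
    tracks = []
    for h1 in hits_by_layer[0]:
        for h2 in hits_by_layer[1]:
            z_pred = project(h1, h2, LAYER_R[2])
            for h3 in hits_by_layer[2]:
                if abs(h3[1] - z_pred) < ROAD_HALF_WIDTH:
                    tracks.append((h1, h2, h3))
    return tracks

# One genuine straight track plus an uncorrelated noise hit.
hits = [
    [(1.0, 0.10)],                 # layer 0
    [(2.0, 0.20)],                 # layer 1
    [(3.0, 0.30), (3.0, 0.90)],    # layer 2: true hit + noise
]
tracks = find_tracks(hits)
print(tracks)
```

Only the aligned three-hit combination survives the road cut; the noise hit falls outside the window and is rejected.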
Power density constraints are limiting the performance improvements of modern CPUs. To address this, we have seen the introduction of lower-power, multi-core processors such as GPGPUs, ARM, and Intel MIC. In order to achieve the theoretical performance gains of these processors, it will be necessary to parallelize algorithms to exploit larger numbers of lightweight cores and specialized functions like large vector units. Track finding and fitting is one of the most computationally challenging problems for event reconstruction in particle...
The high instantaneous luminosities expected following the upgrade of the Large Hadron Collider (LHC) to the High-Luminosity LHC (HL-LHC) pose major experimental challenges for the CMS experiment. A central component to allow efficient operation under these conditions is the reconstruction of charged-particle trajectories and their inclusion in the hardware-based trigger system. There are many challenges involved in achieving this: a large input data rate of about 20-40 Tb/s; processing a new batch every 25 ns, each consisting of 15,000...
Power density constraints are limiting the performance improvements of modern CPUs. To address this, we have seen the introduction of lower-power, multi-core processors, but the future will be even more exciting. In order to stay within power limits but still obtain Moore's Law performance/price gains, it will be necessary to parallelize algorithms to exploit larger numbers of lightweight cores and specialized functions like large vector units. Example technologies available today include Intel's Xeon Phi and GPGPUs. Track finding...
For over a decade now, physical and energy constraints have limited clock speed improvements in commodity microprocessors. Instead, chipmakers have been pushed into producing lower-power, multi-core processors such as GPGPUs, ARM, and Intel MIC. Broad-based efforts from manufacturers and developers have been devoted to making these architectures user-friendly enough to perform general computations. However, extracting performance from a larger number of cores, as well as from specialized vector or SIMD units, requires special care in algorithm design...
Power density constraints are limiting the performance improvements of modern CPUs. To address this, we have seen the introduction of lower-power, multi-core processors such as GPGPUs, ARM, and Intel MIC. To stay within power limits but still obtain Moore's Law performance/price gains, it will be necessary to parallelize algorithms to exploit larger numbers of lightweight cores and specialized functions like large vector units. Track finding and fitting is one of the most computationally challenging problems for event...
The CMS experiment will collect data from the proton-proton collisions delivered by the Large Hadron Collider (LHC) at a centre-of-mass energy of up to 14 TeV. The trigger system is designed to cope with unprecedented luminosities and LHC bunch-crossing rates of 40 MHz. The unique architecture employs only two levels. The Level-1 trigger is implemented using custom electronics, while the High Level Trigger (HLT) is based on software algorithms running on a large cluster of commercial processors, the Event Filter Farm. We present major...
We report on the progress of our studies towards a Kalman filter track reconstruction algorithm with optimal performance on manycore architectures. The combinatorial structure of these algorithms is not immediately compatible with an efficient SIMD (or SIMT) implementation; the challenge for us is to recast the existing software so that it can readily generate hundreds of shared-memory threads that exploit the underlying instruction set of modern processors. We show how the data and associated tasks can be organized in a way conducive to both...
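The data organization alluded to above is typically a structure-of-arrays layout: instead of processing one track's small matrix at a time, element (i, j) of many tracks' matrices is stored contiguously, so each scalar step of a small-matrix product becomes a vector operation across all tracks. The NumPy sketch below illustrates the layout and verifies it against per-track products; the sizes and data are illustrative, not taken from the actual software.

```python
import numpy as np

# Structure-of-arrays sketch: store element (i, j) of N small matrices
# contiguously so the D*D*D scalar steps of a matrix product each become
# a length-N vector operation (one track per SIMD lane). Illustrative sizes.

N, D = 1024, 3    # tracks in flight, small-matrix dimension

rng = np.random.default_rng(0)
# Array-of-structs layout: A_aos[n] is track n's DxD matrix.
A_aos = rng.standard_normal((N, D, D))
B_aos = rng.standard_normal((N, D, D))

# Struct-of-arrays layout: A_soa[i, j] is a length-N vector of element (i, j).
A_soa = np.ascontiguousarray(A_aos.transpose(1, 2, 0))
B_soa = np.ascontiguousarray(B_aos.transpose(1, 2, 0))

# Batched product as D*D*D vector operations across all tracks at once.
C_soa = np.zeros((D, D, N))
for i in range(D):
    for j in range(D):
        for k in range(D):
            C_soa[i, j] += A_soa[i, k] * B_soa[k, j]

# Cross-check against the per-track products in the array-of-structs layout.
C_ref = np.einsum('nik,nkj->nij', A_aos, B_aos)
print(np.allclose(C_soa.transpose(2, 0, 1), C_ref))
```

The inner loop body touches N contiguous values at a time, which is exactly the access pattern a vector unit (or a warp of GPU threads) can exploit.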
The CDF data acquisition and trigger system is being upgraded to significantly increase the bandwidth for the upcoming high-luminosity running of the Tevatron Collider (Run IIb). This paper focuses on the upgrade of the Level 2 (L2) decision crate. The crate at the heart of L2 has to interface with many different subsystems both upstream and downstream. The challenge is to have a uniform design able to receive data paths from upstream, merge and process the data at high speed for fast decision making, and minimize the impact on the experiment during the commissioning phase. In order to meet this challenge,...
Faced with physical and energy density limitations on clock speed, contemporary microprocessor designers have increasingly turned to on-chip parallelism for performance gains. Algorithms should accordingly be designed with ample amounts of fine-grained parallelism if they are to realize the full potential of the hardware. This requirement can be challenging for algorithms naturally expressed as a sequence of small-matrix operations, such as the Kalman filter methods widely in use in high-energy physics experiments. In the High-Luminosity Large...
Interest in parallel architectures applied to real-time selections is growing in High Energy Physics (HEP) experiments. In this paper we describe performance measurements of Graphics Processing Units (GPUs) and the Intel Many Integrated Core architecture (MIC) when running a typical HEP online task: the selection of events based on the trajectories of charged particles. We use as a benchmark a scaled-up version of the algorithm used at the CDF experiment at the Tevatron for track reconstruction - the SVT - as a realistic test-case for low-latency...
The High-Luminosity Large Hadron Collider at CERN will be characterized by greater pileup of events and higher occupancy, making track reconstruction even more computationally demanding. Existing algorithms at the LHC are based on Kalman filter techniques, which have proven excellent physics performance under a variety of conditions. Starting in 2014, we have been developing Kalman-filter-based methods for track finding and fitting adapted to the many-core SIMD processors that are becoming dominant in high-performance systems....
The challenging conditions of the High-Luminosity LHC require tailored hardware designs for trigger and data acquisition systems. The Apollo platform features a "Service Module" with a powerful system-on-module computer that provides standard ATCA communications, and application-specific "Command Modules" with large FPGAs and high-speed optical fiber links. The CMS version will be used for the track finder and the pixel readout. It supports up to two FPGAs and more than 100 links at speeds of up to 25 Gb/s. We carefully study the design performance of the board by using...