Matevž Tadel

ORCID: 0000-0001-8800-0045
Research Areas
  • Particle physics theoretical and experimental studies
  • High-Energy Particle Collisions Research
  • Quantum Chromodynamics and Particle Interactions
  • Particle Detector Development and Performance
  • Computational Physics and Python Applications
  • Dark Matter and Cosmic Phenomena
  • Cosmology and Gravitation Theories
  • Distributed and Parallel Computing Systems
  • Neutrino Physics Research
  • Advanced Data Storage Technologies
  • Medical Imaging Techniques and Applications
  • Astrophysics and Cosmic Phenomena
  • Particle Accelerators and Free-Electron Lasers
  • Scientific Computing and Data Management
  • Radiation Detection and Scintillator Technologies
  • Black Holes and Theoretical Physics
  • Gamma-ray bursts and supernovae
  • Atomic and Subatomic Physics Research
  • Distributed systems and fault tolerance
  • Algorithms and Data Compression
  • Radiation Effects in Electronics
  • Radiation Therapy and Dosimetry
  • Parallel Computing and Optimization Techniques
  • Big Data Technologies and Applications
  • Noncommutative and Quantum Gravity Theories

University of California, San Diego
2016-2025

University of California System
2024-2025

A. Alikhanyan National Laboratory
2022-2024

Institute of High Energy Physics
2019-2024

University of San Diego
2024

University of Antwerp
2024

University of Ljubljana
2000-2020

UC San Diego Health System
2015

University of California, Riverside
2012

Universidad Católica Santo Domingo
2012

While the LHC data movement systems have demonstrated the ability to move data at the necessary throughput, we have identified two weaknesses: the latency experienced by physicists accessing data and the complexity of the tools involved. To address these, both ATLAS and CMS have begun to federate regional storage using Xrootd. Xrootd, referring to both a protocol and its implementation, allows us to provide all disk-resident data from a single virtual endpoint. This "redirector" discovers the actual location of a file and redirects the client to the appropriate site. The approach is particularly...

10.1088/1742-6596/396/4/042009 article EN Journal of Physics Conference Series 2012-12-13
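The redirector idea described above can be sketched in a few lines. This is a toy illustration only, with hypothetical site names and a trivial catalog; the real Xrootd redirection protocol (cmsd) involves asynchronous queries, caching of locations, and a hierarchy of redirectors.

```python
# Toy sketch of an XRootd-style redirector: the client contacts one
# virtual endpoint; the redirector finds a federated site that actually
# holds the file and redirects the client there.
# Site names and the catalog are illustrative, not real infrastructure.

class Site:
    def __init__(self, url, files):
        self.url = url
        self.files = set(files)

    def has(self, path):
        return path in self.files

class Redirector:
    def __init__(self, sites):
        self.sites = sites

    def locate(self, path):
        # Query every subscribed site; return the first that has the file.
        # (A real redirector queries sites in parallel and caches results.)
        for site in self.sites:
            if site.has(path):
                return f"{site.url}/{path}"
        raise FileNotFoundError(path)

sites = [
    Site("root://t2-a.example.org", {"store/data/evt1.root"}),
    Site("root://t2-b.example.org", {"store/data/evt2.root"}),
]
rd = Redirector(sites)
print(rd.locate("store/data/evt2.root"))
```

From the client's point of view, the single virtual endpoint hides where the data actually lives, which is the property the federation relies on.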

Following the success of the XRootd-based US CMS data federation, the AAA project investigated extensions of the federation architecture by developing two sample implementations of an XRootd, disk-based, caching proxy. The first one simply starts fetching a whole file as soon as an open request is received, and is suitable when completely random access is expected or when it is already known that the whole file will be read. The second implementation supports on-demand downloading of partial files. Extensions to the Hadoop Distributed File System have been...

10.1088/1742-6596/513/4/042044 article EN Journal of Physics Conference Series 2014-06-11
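The two proxy behaviors can be contrasted with a minimal sketch (illustrative only; block size, class, and method names are invented for the demo): "prefetch" pulls the whole file at open time, while "on-demand" fetches and caches only the fixed-size blocks a read actually touches.

```python
# Toy caching proxy with the two modes described above.
BLOCK = 4  # tiny block size, just for the demo

class CachingProxy:
    def __init__(self, origin: bytes, mode="on-demand"):
        self.origin = origin          # stands in for the remote file
        self.blocks = {}              # block index -> cached bytes
        self.fetched_blocks = 0
        if mode == "prefetch":        # whole-file mode: fetch everything now
            for i in range(0, len(origin), BLOCK):
                self._fetch(i // BLOCK)

    def _fetch(self, idx):
        if idx not in self.blocks:
            self.blocks[idx] = self.origin[idx * BLOCK:(idx + 1) * BLOCK]
            self.fetched_blocks += 1

    def read(self, offset, size):
        first, last = offset // BLOCK, (offset + size - 1) // BLOCK
        for idx in range(first, last + 1):   # fetch only touched blocks
            self._fetch(idx)
        data = b"".join(self.blocks[i] for i in range(first, last + 1))
        start = offset - first * BLOCK
        return data[start:start + size]

data = b"0123456789abcdef"
p = CachingProxy(data, mode="on-demand")
assert p.read(5, 4) == b"5678"
assert p.fetched_blocks == 2   # only blocks 1 and 2 were fetched
```

The trade-off is exactly the one the paper describes: prefetching wins when the whole file will be read anyway, while block-level caching avoids transferring data a sparse-access job never touches.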

Data access is key to science driven by distributed high-throughput computing (DHTC), an essential technology for many major research projects such as High Energy Physics (HEP) experiments. However, achieving efficient data access becomes quite difficult when many independent storage sites are involved, because users are burdened with learning the intricacies of accessing each system and with keeping careful track of data location. We present an alternate approach: the Any Data, Any Time, Anywhere infrastructure. Combining several...

10.1109/bdc.2015.33 article EN 2015-12-01

Fireworks is a CMS event display that is specialized for the physics-studies use case. This specialization allows us to use a stylized rather than a 3D-accurate representation when appropriate. Data handling is greatly simplified by using only reconstructed information and ideal geometry. Fireworks provides an easy-to-use interface that lets a physicist concentrate on the data in which he is interested. Data are presented via graphical and textual views. Fireworks is built using the Eve subsystem of the CERN ROOT project and CMS's FWLite project. The event display was part of a recent code redesign...

10.1088/1742-6596/219/3/032014 article EN Journal of Physics Conference Series 2010-04-01

One of the most computationally challenging problems expected for the High-Luminosity Large Hadron Collider (HL-LHC) is determining the trajectory of charged particles during event reconstruction. Algorithms used at the LHC today rely on Kalman filtering, which builds physical trajectories incrementally while incorporating material effects and error estimation. Recognizing the need for faster computational throughput, we have adapted Kalman-filter-based methods for highly parallel, many-core SIMD architectures that...

10.1088/1748-0221/15/09/p09030 article EN Journal of Instrumentation 2020-09-22
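The basic building block of these track-fitting methods is the Kalman update: blend a predicted state with a new measurement, weighted by their uncertainties. Below is a textbook one-dimensional sketch, not the experiments' actual code (real track fits propagate multi-dimensional states through a magnetic field and material).

```python
# Minimal 1-D Kalman filter step: trivial "state stays put" motion model,
# process noise Q, measurement noise R. Purely pedagogical.

def kalman_step(x, P, z, R, Q):
    # Predict: state unchanged, uncertainty grows by the process noise.
    x_pred, P_pred = x, P + Q
    # Update: the Kalman gain K weights measurement vs. prediction.
    K = P_pred / (P_pred + R)
    x_new = x_pred + K * (z - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0                          # poor initial guess
for z in [1.2, 0.9, 1.1, 1.0]:           # noisy measurements near 1.0
    x, P = kalman_step(x, P, z, R=0.5, Q=0.01)
# The estimate x converges toward ~1.0 and the uncertainty P shrinks.
```

In track reconstruction the same predict-update cycle runs once per detector layer, which is why its cost dominates and why parallelizing it pays off.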

Power density constraints are limiting the performance improvements of modern CPUs. To address this, we have seen the introduction of lower-power, multi-core processors such as GPGPUs, ARM and Intel MIC. In order to achieve the theoretical gains of these processors, it will be necessary to parallelize algorithms to exploit larger numbers of lightweight cores and specialized functions like large vector units. Track finding and fitting is one of the most computationally challenging problems for event reconstruction in particle...

10.1051/epjconf/201612700010 article EN cc-by EPJ Web of Conferences 2016-01-01

10.1016/s0168-9002(00)00258-8 article EN Nuclear Instruments and Methods in Physics Research Section A Accelerators Spectrometers Detectors and Associated Equipment 2000-08-01

With the shift in LHC experiments from a tiered computing model, where data was prefetched and stored at a site, towards a model where data is brought to jobs on the fly, came an opportunity. Since data is now distributed to jobs using the XrootD data federation, a clear opportunity for caching arose.

10.1088/1742-6596/1085/3/032025 article EN Journal of Physics Conference Series 2018-09-01

Power density constraints are limiting the performance improvements of modern CPUs. To address this, we have seen the introduction of lower-power, multi-core processors, but the future will be even more exciting. In order to stay within power limits but still obtain Moore's Law performance/price gains, it will be necessary to parallelize algorithms to exploit larger numbers of lightweight cores and specialized functions like large vector units. Example technologies today include Intel's Xeon Phi and GPGPUs. Track finding...

10.1088/1742-6596/664/7/072008 article EN Journal of Physics Conference Series 2015-12-23

EVE is a high-level visualization library using ROOT's data-processing, GUI and OpenGL interfaces. It is designed as a framework for object management, offering hierarchical data organization and interaction via visual representations. Automatic creation of 2D projected views is also supported. On the other hand, it can serve as an event-display toolkit satisfying most HEP requirements: visualization of geometry, and of simulated and reconstructed data such as hits, clusters, tracks and calorimeter information. Special classes are available for raw-data....

10.1088/1742-6596/219/4/042055 article EN Journal of Physics Conference Series 2010-04-01

For over a decade now, physical and energy constraints have limited clock speed improvements in commodity microprocessors. Instead, chipmakers have been pushed into producing lower-power, multi-core processors such as GPGPUs, ARM and Intel MIC. Broad-based efforts from manufacturers and developers have been devoted to making these processors user-friendly enough to perform general computations. However, extracting performance from a larger number of cores, as well as from specialized vector or SIMD units, requires special care in algorithm design...

10.1051/epjconf/201715000006 article EN cc-by EPJ Web of Conferences 2017-01-01

We present a 128-channel analogue front-end chip, SCT128A-HC, for the readout of silicon strip detectors employed in the inner tracking detectors of LHC experiments. The chip is produced in radiation-hard DMILL technology. The architecture and critical design issues are discussed. The performance has been evaluated in detail on a test bench and is presented in the paper. The chip has been used to read out prototype modules compatible in size and functionality with ATLAS baseline modules. Several full-size detector modules equipped with SCT128A-HC chips have been built and tested...

10.1109/23.872992 article EN IEEE Transactions on Nuclear Science 2000-08-01

Power density constraints are limiting the performance improvements of modern CPUs. To address this, we have seen the introduction of lower-power, multi-core processors, but the future will be even more exciting. In order to stay within power limits but still obtain Moore's Law performance/price gains, it will be necessary to parallelize algorithms to exploit larger numbers of lightweight cores and specialized functions like large vector units. Example technologies today include Intel's Xeon Phi and GPGPUs. Track finding...

10.1088/1742-6596/608/1/012057 article EN Journal of Physics Conference Series 2015-05-22

OpenGL has been promoted to become the main 3D rendering engine of the ROOT framework. This required a major re-modularization of OpenGL support on all levels, from the basic window-system-specific interface, through medium-level object representation, to top-level scene management. The new architecture allows seamless integration of external scene-graph libraries into the viewer as well as the inclusion of ROOT scenes in GUIs of other OpenGL-based 3D-rendering frameworks.

10.1088/1742-6596/119/4/042028 article EN Journal of Physics Conference Series 2008-07-01

The University of California system maintains excellent networking between its campuses and a number of other universities in California, including Caltech, most of them being connected at 100 Gbps. The UCSD and Caltech Tier2 centers have joined their disk systems into a single logical caching system, with worker nodes from both sites accessing data from disks at either site. This successful setup has been in place for the last two years. However, coherently managing multiple physical locations is not trivial and requires...

10.1051/epjconf/202024504042 article EN cc-by EPJ Web of Conferences 2020-01-01

CMS is using a tiered setup of dedicated computing resources provided by sites distributed over the world and organized in the WLCG. These sites pledge resources to CMS and are preparing them especially for running the experiment's applications. But there are more resources available opportunistically, both on the GRID and on local university and research clusters, which can be used. We will present CMS's strategy to use opportunistic resources and to prepare them dynamically to be able to run its applications on resources that are reached through GRID and EC2-compliant cloud interfaces. Even ssh login nodes...

10.1088/1742-6596/513/6/062028 article EN Journal of Physics Conference Series 2014-06-11

Power density constraints are limiting the performance improvements of modern CPUs. To address this, we have seen the introduction of lower-power, multi-core processors such as GPGPUs, ARM and Intel MIC. To stay within power limits but still obtain Moore's Law performance/price gains, it will be necessary to parallelize algorithms to exploit larger numbers of lightweight cores and specialized functions like large vector units. Track finding and fitting is one of the most computationally challenging problems for event...

10.1109/nssmic.2015.7581932 preprint EN 2021 IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC) 2015-10-01

The ALICE Event Visualization Environment (AliEVE) is based on ROOT and its GUI, 2D and 3D graphics classes. A small application kernel provides for registration and management of visualization objects. CINT scripts are used as an extensible mechanism for data extraction, selection and processing, as well as for steering of frequent event-related tasks. AliEVE is used for event visualization in the offline and high-level trigger frameworks. Mechanisms and base classes provided for visual representation of raw data for different detector types are described. Common...

10.1088/1742-6596/119/3/032036 article EN Journal of Physics Conference Series 2008-07-01

Faced with physical and energy density limitations on clock speed, contemporary microprocessor designers have increasingly turned to on-chip parallelism for performance gains. Algorithms should accordingly be designed with ample amounts of fine-grained parallelism if they are to realize the full performance of the hardware. This requirement can be challenging for algorithms that are naturally expressed as a sequence of small-matrix operations, such as the Kalman filter methods widely in use in high-energy physics experiments. In the High-Luminosity Large...

10.1088/1742-6596/1085/4/042016 article EN Journal of Physics Conference Series 2018-09-01
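The difficulty with small-matrix operations is that a single 2x2 or 6x6 product leaves most of a wide vector unit idle. One common remedy, illustrated below, is to batch many matrices in a structure-of-arrays layout so the inner loop runs across the batch and every iteration performs identical arithmetic. This pure-Python sketch only demonstrates the data layout; the layout name, function, and dict-based storage are invented for the demo, and production code would use C++ with vector intrinsics.

```python
# N small 2x2 matrices stored "structure-of-arrays": each matrix element
# (i, j) maps to a list of N values, one per matrix in the batch.

def batched_matmul_2x2(A, B):
    """Compute C = A @ B for a whole batch at once, same layout."""
    n = len(A[(0, 0)])
    C = {}
    for i in range(2):
        for j in range(2):
            # The inner loop iterates over the batch, not over matrix
            # elements, so every iteration does the same arithmetic on
            # contiguous data -- the pattern SIMD units execute well.
            C[(i, j)] = [
                A[(i, 0)][k] * B[(0, j)][k] + A[(i, 1)][k] * B[(1, j)][k]
                for k in range(n)
            ]
    return C

# Batch of two: identity matrices times two copies of [[1, 2], [3, 4]].
I = {(0, 0): [1, 1], (0, 1): [0, 0], (1, 0): [0, 0], (1, 1): [1, 1]}
M = {(0, 0): [1, 1], (0, 1): [2, 2], (1, 0): [3, 3], (1, 1): [4, 4]}
C = batched_matmul_2x2(I, M)
assert C == M   # identity times M gives M back, for every matrix in the batch
```

Processing many track candidates in lock-step like this is what lets Kalman-filter arithmetic fill wide vector units despite the tiny size of each individual matrix.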

The High-Luminosity Large Hadron Collider at CERN will be characterized by greater pileup of events and higher occupancy, making track reconstruction even more computationally demanding. Existing algorithms at the LHC are based on Kalman filter techniques, with proven excellent physics performance under a variety of conditions. Starting in 2014, we have been developing Kalman-filter-based methods for track finding and fitting adapted for many-core SIMD processors that are becoming dominant in high-performance systems....

10.1051/epjconf/201921402002 article EN cc-by EPJ Web of Conferences 2019-01-01