- Theoretical and Experimental Particle Physics Studies
- High-Energy Particle Collisions Research
- Quantum Chromodynamics and Particle Interactions
- Particle Detector Development and Performance
- Dark Matter and Cosmic Phenomena
- Computational Physics and Python Applications
- Cosmology and Gravitation Theories
- Neutrino Physics Research
- Distributed and Parallel Computing Systems
- Astrophysics and Cosmic Phenomena
- Black Holes and Theoretical Physics
- Advanced Data Storage Technologies
- Atomic and Subatomic Physics Research
- Scientific Computing and Data Management
- Particle Accelerators and Free-Electron Lasers
- Big Data Technologies and Applications
- Stochastic Processes and Financial Applications
- Software System Performance and Reliability
- Nuclear Reactor Physics and Engineering
- Laser-Plasma Interactions and Diagnostics
- Noncommutative and Quantum Gravity Theories
- Distributed Systems and Fault Tolerance
- Gamma-Ray Bursts and Supernovae
- Optical Properties and Cooling Technologies in Crystalline Materials
- Medical Imaging Techniques and Applications
Istituto Nazionale di Fisica Nucleare, Sezione di Bologna
2019-2025
University of Bologna
2015-2025
Institute of High Energy Physics
2020-2024
A. Alikhanyan National Laboratory
2022-2024
University of Antwerp
2024
Istituto Nazionale di Fisica Nucleare, Sezione di Napoli
2022-2023
Istituto Nazionale di Fisica Nucleare, Sezione di Padova
2022
European Organization for Nuclear Research
2022
Istituto Nazionale di Fisica Nucleare, Centro Nazionale Analisi Fotogrammi
2019
A flexible and dynamic environment capable of accessing distributed data resources efficiently is a key aspect of HEP analysis, especially in the HL-LHC era. A quasi-interactive declarative solution, like ROOT RDataFrame, with scale-up capabilities via open-source standards such as Dask, can profit from the DataLake model under development at the Italian "HPC, Big Data and Quantum Computing" National Center. The starting point is a prototypal CMS high-throughput analysis platform, offloaded onto a local Tier-2. This contribution...
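As a minimal sketch of the quasi-interactive declarative approach mentioned above, the snippet below builds a distributed ROOT RDataFrame on top of a Dask client; the scheduler address, dataset path, and branch names are placeholders invented for the example, not taken from the contribution.

```python
import ROOT
from dask.distributed import Client

# Connect to an existing Dask scheduler (placeholder address).
client = Client("tcp://dask-scheduler.example:8786")

# Experimental distributed RDataFrame backend shipped with recent ROOT releases.
DaskRDataFrame = ROOT.RDF.Experimental.Distributed.Dask.RDataFrame

# The analysis is declared lazily: Filter, Define and Histo1D only build a
# computation graph, later executed in parallel over partitions of the input
# tree on the Dask cluster.
df = DaskRDataFrame(
    "Events",
    "root://xrootd.example//store/data/sample.root",  # placeholder dataset
    daskclient=client,
)

h = (
    df.Filter("nMuon == 2", "Exactly two muons")   # placeholder branch names
      .Define("mu_pt_sum", "Muon_pt[0] + Muon_pt[1]")
      .Histo1D(("mu_pt_sum", ";p_{T} sum [GeV];Events", 100, 0.0, 200.0),
               "mu_pt_sum")
)

# The distributed event loop runs only here, when the result is materialized.
hist = h.GetValue()
hist.Draw()
```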
The INFN-CNAF computing center, one of the Worldwide LHC Computing Grid Tier-1 sites, serves a large set of scientific communities, in High Energy Physics and beyond. In order to increase efficiency and remain competitive in the long run, CNAF is launching various activities aimed at implementing a global predictive maintenance solution for the site. This requires a site-wide effort in collecting, cleaning, and structuring all potentially useful data coming from the log files of services and systems, as a necessary step prior...
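As a toy illustration of the kind of pipeline such structured log data could feed, the sketch below trains an unsupervised anomaly detector on invented per-service metrics; the feature names, data, and model choice (scikit-learn's IsolationForest) are assumptions for the example, not CNAF's actual solution.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical hourly metrics extracted from service logs; the feature
# names are illustrative, not CNAF's actual schema.
rng = np.random.default_rng(42)
metrics = pd.DataFrame({
    "error_rate":    rng.exponential(0.01, 1000),
    "latency_ms":    rng.normal(120, 15, 1000),
    "restart_count": rng.poisson(0.1, 1000),
})

# Unsupervised detector flagging hours whose metric pattern deviates from
# the bulk: a common first building block of predictive maintenance.
model = IsolationForest(contamination=0.01, random_state=0)
metrics["anomaly"] = model.fit_predict(metrics)  # -1 marks anomalous rows

print(metrics[metrics["anomaly"] == -1].head())
```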
The distributed Grid infrastructure for High Energy Physics experiments at the Large Hadron Collider (LHC) in Geneva comprises a set of computing centres, spread all over the world, as part of the Worldwide LHC Computing Grid (WLCG). In Italy, the Tier-1 functionalities are served by the INFN-CNAF data center, which also provides computing and storage resources to more than twenty non-LHC experiments. For this reason, a large amount of logs is collected each day from various sources, which are highly heterogeneous and difficult to harmonize...
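The harmonization step can be pictured with a small sketch: two invented line formats, standing in for heterogeneous log sources, are parsed into a common structured record. The regexes and example formats are purely illustrative.

```python
import re
from datetime import datetime

# Two invented formats standing in for heterogeneous log sources.
PATTERNS = [
    # e.g. "2019-03-01 12:00:01 storm-fe ERROR disk pool unreachable"
    (re.compile(r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) "
                r"(?P<service>\S+) (?P<level>\w+) (?P<msg>.*)"),
     "%Y-%m-%d %H:%M:%S"),
    # e.g. "Mar  1 12:00:01 node042 sshd[123]: accepted connection"
    (re.compile(r"(?P<ts>\w{3}\s+\d+ \d{2}:\d{2}:\d{2}) "
                r"(?P<host>\S+) (?P<service>[^:]+): (?P<msg>.*)"),
     "%b %d %H:%M:%S"),
]

def harmonize(line):
    """Map one raw log line onto a common schema, or return None."""
    for pattern, ts_format in PATTERNS:
        m = pattern.match(line)
        if m:
            rec = m.groupdict()
            return {
                # Note: syslog-style stamps carry no year (defaults to 1900);
                # a real pipeline would need to fix that up.
                "ts": datetime.strptime(rec["ts"], ts_format),
                "service": rec["service"],
                "level": rec.get("level", "INFO"),  # 2nd format lacks severity
                "msg": rec["msg"],
            }
    return None
```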
In the near future, large scientific collaborations will face unprecedented computing challenges. Processing and storing exabyte datasets will require a federated infrastructure of distributed resources. The current systems have proven to be mature and capable of meeting experiment goals by allowing the timely delivery of results. However, a substantial amount of intervention from software developers, shifters, and operational teams is needed to efficiently manage such heterogeneous infrastructures. A wealth of data can...
During the first LHC run, the CMS experiment collected tens of petabytes of collision and simulated data, which need to be distributed among dozens of computing centres with low latency in order to make efficient use of the resources. While the desired level of throughput has been successfully achieved, it is still common to observe transfer workflows that cannot reach full completion in a timely manner due to a small fraction of stuck files which require operator intervention.
The STOrage Resource Manager (StoRM) service implements the SRM specification to recall files from tape. Although the protocol has been successfully used for many years, its complexity pushed the WLCG community to adopt a simpler approach, more coherent with current web technologies. The tape REST API offers a common HTTP interface which allows clients to manage the disk residency of tape-stored files and to observe the progress of file transfers to disk. In the context of the StoRM project developed at INFN-CNAF, StoRM Tape implements this interface. It is...
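A sketch of how a client might drive such an interface follows: it submits a stage (recall) request and polls it until the files are on disk. The endpoint layout and JSON fields follow the WLCG tape REST API specification to the best of my reading; the base URL, file path, and the per-file "onDisk" field are assumptions to verify against the published spec.

```python
import time
import requests

# Placeholder base URL; a real deployment (e.g. a StoRM Tape instance) also
# requires X.509 or token authentication, omitted from this sketch.
BASE = "https://tape.example:8443/api/v1"

# Submit a stage (tape-to-disk recall) request; the payload shape follows
# the WLCG tape REST API specification ("files": list of {"path": ...}).
resp = requests.post(f"{BASE}/stage",
                     json={"files": [{"path": "/example/dataset/file.root"}]})
resp.raise_for_status()
request_id = resp.json()["requestId"]

# Poll the request until every file is reported on disk; the per-file
# "onDisk" field is assumed here and should be checked against the spec.
while True:
    status = requests.get(f"{BASE}/stage/{request_id}").json()
    if all(f.get("onDisk") for f in status.get("files", [])):
        break
    time.sleep(30)

print("All requested files recalled to disk.")
```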
With the advent of the High-Luminosity phase of the LHC (HL-LHC), the instantaneous luminosity of the Large Hadron Collider at CERN is expected to increase up to $\approx 7.5 \times 10^{34}\,\mathrm{cm}^{-2}\mathrm{s}^{-1}$. Therefore, new strategies for data acquisition and processing will be necessary, in preparation for the higher number of signals produced inside the detectors. In the context of an upgrade of the trigger system of the Compact Muon Solenoid (CMS), reconstruction algorithms aiming at improved performance are being developed. For what concerns the online...
As a joint effort of the various communities involved in the Worldwide LHC Computing Grid, the Operational Intelligence project aims at increasing the level of automation in computing operations and reducing human interventions. The distributed systems currently deployed by the experiments have proven to be mature and capable of meeting the experimental goals, allowing the timely delivery of scientific results. However, a substantial number of interventions from software developers, shifters, and operational teams is needed to efficiently...
Signal-background classification is a central problem in High-Energy Physics (HEP) that plays a major role in the discovery of new fundamental particles. A recent method, the Parametric Neural Network (pNN), leverages multiple signal mass hypotheses as an additional input feature to effectively replace a whole set of individual classifiers, each providing (in principle) the best response for the corresponding hypothesis. In this work we aim at deepening the understanding of pNNs in light of real-world usage. We discovered...
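The core pNN idea described above, feeding the signal mass hypothesis as an extra input so that one network covers all mass points, can be sketched in a few lines of Keras; the layer sizes, feature count, and training convention noted in the comments are illustrative choices, not the paper's exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

N_FEATURES = 10  # number of kinematic features (illustrative)

# Two inputs: the event features and the signal mass hypothesis.
x_in = layers.Input(shape=(N_FEATURES,), name="features")
m_in = layers.Input(shape=(1,), name="mass_hypothesis")

# The mass is treated as just another input feature, letting a single
# network interpolate across the whole set of mass hypotheses.
h = layers.Concatenate()([x_in, m_in])
h = layers.Dense(128, activation="relu")(h)
h = layers.Dense(128, activation="relu")(h)
out = layers.Dense(1, activation="sigmoid", name="p_signal")(h)

pnn = Model(inputs=[x_in, m_in], outputs=out)
pnn.compile(optimizer="adam",
            loss="binary_crossentropy",
            metrics=[tf.keras.metrics.AUC()])

# Usual pNN training convention: signal events carry their generated mass,
# while background events get a mass drawn at random from the hypotheses.
```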
Machine and Deep Learning techniques are being used in various areas of CMS operations at the LHC collider, such as data taking, monitoring, processing, and physics analysis. A review of a few selected use cases, with a focus on software and computing, shows the progress of the field, highlights the most recent developments, and gives an outlook on future applications in Run III and towards the High-Luminosity phase.
Tens of petabytes of collision and simulated data have been collected and distributed across WLCG sites in Run-1 and Run-2 at the LHC. Low-latency transfers among dozens of computing centres are crucial to make efficient use of the resources. Although, on average, the desired level of throughput has been successfully achieved to serve the LHC physics programs, it is not uncommon to observe transfer latencies caused by a large variety of issues, from file corruptions to site problems, most of which require operator intervention. To improve this...
After the high-luminosity upgrade of the LHC, the muon chambers of the CMS Barrel must cope with an increase in the number of interactions per bunch crossing. Therefore, new algorithmic techniques for data acquisition and processing will be necessary in preparation for such a high pile-up environment. Using Machine Learning to tackle this problem, this paper focuses on the production of models, obtained through Monte Carlo simulations, capable of predicting the transverse momentum ($p_T$) of muons crossing the chambers, comparing them...
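A schematic of this kind of model is sketched below: a small regressor mapping muon-chamber trigger-primitive features to transverse momentum. The input features, architecture, and the log-transform of the target are assumptions made for the illustration, not the models studied in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers

N_INPUTS = 8  # e.g. bending angles / positions of trigger primitives (invented)

model = tf.keras.Sequential([
    layers.Input(shape=(N_INPUTS,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(64, activation="relu"),
    # Regressing log(pT) instead of pT is a common trick to tame the steeply
    # falling momentum spectrum; an assumption here, not the paper's choice.
    layers.Dense(1, name="log_pt"),
])
model.compile(optimizer="adam", loss="mse")

# Training data would come from Monte Carlo simulation, as in the paper:
# X = trigger-primitive features, y = log of the true muon pT.
```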