F. Fanzago

ORCID: 0000-0003-0336-5729
Research Areas
  • Particle physics theoretical and experimental studies
  • High-Energy Particle Collisions Research
  • Quantum Chromodynamics and Particle Interactions
  • Distributed and Parallel Computing Systems
  • Particle Detector Development and Performance
  • Advanced Data Storage Technologies
  • Dark Matter and Cosmic Phenomena
  • Computational Physics and Python Applications
  • Scientific Computing and Data Management
  • Medical Imaging Techniques and Applications
  • Cosmology and Gravitation Theories
  • Parallel Computing and Optimization Techniques
  • Neutrino Physics Research
  • Cloud Computing and Resource Management
  • Astrophysics and Cosmic Phenomena
  • Black Holes and Theoretical Physics
  • Distributed systems and fault tolerance
  • Radiation Detection and Scintillator Technologies
  • Electron and X-Ray Spectroscopy Techniques
  • Radiation Therapy and Dosimetry
  • Big Data and Business Intelligence
  • Gamma-ray bursts and supernovae
  • Research Data Management Practices
  • Nuclear reactor physics and engineering
  • Radio Astronomy Observations and Technology

Istituto Nazionale di Fisica Nucleare, Sezione di Padova
2015-2025

Institute of High Energy Physics
2010-2024

University of Antwerp
2024

A. Alikhanyan National Laboratory
2024

Istituto Nazionale di Fisica Nucleare, Sezione di Roma I
2023

University of Padua
2007-2018

University of Trento
2016-2018

Joint Institute for Nuclear Research
2016

Istituto Nazionale di Fisica Nucleare
2007-2015

Istituto Nazionale di Fisica Nucleare, Sezione di Pavia
2012-2014

The full optimization of the design and operation of instruments whose functioning relies on the interaction of radiation with matter is a super-human task, due to the large dimensionality of the space of possible choices for geometry, detection technology, materials, data-acquisition and information-extraction techniques, and the interdependence of the related parameters. On the other hand, massive potential gains in performance over standard, "experience-driven" layouts are in principle within our reach if an objective function fully...

10.1016/j.revip.2023.100085 article EN cc-by-nc-nd Reviews in Physics 2023-05-25
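
This line of work hinges on expressing the instrument's figure of merit as a differentiable function of the design parameters, so that gradient-based methods can explore the huge design space. A minimal sketch of the idea in Python, with an invented one-parameter objective (absorber thickness trading resolution against cost) standing in for a differentiable surrogate of a real simulation:

    # Toy illustration (not from the paper): gradient descent on a single
    # detector design parameter against an invented, differentiable
    # figure of merit. Real pipelines differentiate through a simulated
    # measurement model instead of this stand-in.
    def objective(thickness):
        # hypothetical trade-off: resolution improves like 1/sqrt(thickness),
        # while material cost grows linearly with thickness
        return thickness ** -0.5 + 0.05 * thickness

    def gradient(thickness):
        # analytic derivative of the objective above
        return -0.5 * thickness ** -1.5 + 0.05

    thickness = 5.0              # starting guess, arbitrary units
    learning_rate = 2.0
    for _ in range(200):
        thickness -= learning_rate * gradient(thickness)

    print(f"optimized thickness ~ {thickness:.2f}")
    print(f"objective value     ~ {objective(thickness):.4f}")

Real applications replace the toy objective with a differentiable model of the full measurement chain, but the update rule is the same gradient step.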

The Cloud Area Padovana, deployed in 2014, is a scientific IaaS cloud, spread between two different sites: the INFN Padova Unit and the Legnaro National Labs. It provides about 1100 logical cores and 50 TB of storage. The entire computing facility, owned by INFN, satisfies the computational and storage demands of more than 100 users belonging to about 30 research projects, mainly related to HEP and nuclear physics. The data centre has also hosted and operated since 2015 an independent cloud managing the network and resources of 10 departments...

10.1051/epjconf/201921407010 article EN cc-by EPJ Web of Conferences 2019-01-01
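
To give a flavour of how users consume such an IaaS cloud, here is a minimal sketch using the openstacksdk Python library; the cloud name, image, flavor and network below are placeholders, not values from the Cloud Area Padovana:

    # Hypothetical sketch: provision a VM on an OpenStack cloud via
    # openstacksdk. Credentials are resolved from a local clouds.yaml.
    import openstack

    conn = openstack.connect(cloud="my-cloud")           # placeholder cloud name

    image = conn.compute.find_image("CentOS-7")          # placeholder image
    flavor = conn.compute.find_flavor("m1.medium")       # placeholder flavor
    network = conn.network.find_network("private-net")   # placeholder network

    server = conn.compute.create_server(
        name="analysis-node",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    server = conn.compute.wait_for_server(server)        # block until ACTIVE
    print(server.status)

Because credentials come from clouds.yaml, the same script works against any OpenStack endpoint.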

Data access is key to science driven by distributed high-throughput computing (DHTC), an essential technology for many major research projects such as High Energy Physics (HEP) experiments. However, achieving efficient data access becomes quite difficult when many independent storage sites are involved, because users are burdened with learning the intricacies of accessing each system and with keeping careful track of data location. We present an alternate approach: the Any Data, Any Time, Anywhere infrastructure. Combining several...

10.1109/bdc.2015.33 article EN 2015-12-01
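
The AAA approach builds on the XRootD federation model: a client opens a file by its logical name through a redirector, which locates a replica transparently. A minimal sketch with the XRootD Python bindings, using a placeholder redirector host and file path:

    # Hypothetical sketch: federated read through an XRootD redirector.
    from XRootD import client

    url = "root://xrootd-redirector.example.org//store/data/sample.root"
    f = client.File()
    status, _ = f.open(url)          # redirector resolves a replica
    if not status.ok:
        raise IOError(status.message)

    status, data = f.read(offset=0, size=1024)   # read the first kilobyte
    print(f"read {len(data)} bytes")
    f.close()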

Beginning in 2009, the CMS experiment will produce several petabytes of data each year, which will be distributed over many computing centres geographically dispersed in different countries. The CMS computing model defines how the data is to be distributed and accessed, to enable physicists to efficiently run their analyses over the data. The analysis is performed in a distributed way using the Grid infrastructure. CRAB (CMS Remote Analysis Builder) is a specific tool, designed and developed by the CMS collaboration, that allows the end user to transparently access distributed data. It interacts with the local user environment, the data management services...

10.1109/tns.2009.2028076 article EN IEEE Transactions on Nuclear Science 2009-10-01
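
As a flavour of the user-facing side, here is a sketch of a job configuration in the style of the later, Python-based CRAB3 client (the paper itself describes earlier CRAB generations); the request name, dataset and storage site are placeholders:

    # Hypothetical CRAB3-style configuration; values are placeholders.
    from CRABClient.UserUtilities import config

    config = config()
    config.General.requestName = "my_analysis_v1"
    config.JobType.pluginName = "Analysis"
    config.JobType.psetName = "pset_analysis.py"       # CMSSW config to run
    config.Data.inputDataset = "/SomeDataset/SomeEra/AOD"
    config.Data.splitting = "FileBased"
    config.Data.unitsPerJob = 10
    config.Site.storageSite = "T2_XX_Example"          # output stage-out site

Such a configuration is submitted with "crab submit" and monitored with "crab status"; CRAB takes care of data discovery, job splitting and output stage-out.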

The CMS experiment at the LHC started using the Resource Broker (by the EDG and LCG projects) to submit Monte Carlo production and analysis jobs to the distributed computing resources of the WLCG infrastructure over 6 years ago. Since 2006 the gLite Workload Management System (WMS) and Logging & Bookkeeping (LB) services are used. The interaction with gLite-WMS/LB happens through the production and analysis frameworks, respectively ProdAgent and CRAB, via a common component, BOSSLite. Important improvements have recently been made in the services as well as in the tools; the intrinsic independence from different...

10.1088/1742-6596/219/6/062007 article EN Journal of Physics Conference Series 2010-04-01

The CMS CERN Analysis Facility (CAF) was primarily designed to host a large variety of latency-critical workflows. These break down into alignment and calibration, detector commissioning and diagnosis, and high-interest physics analysis requiring fast turnaround. In addition to the low latency requirement on the batch farm, another mandatory condition is efficient access to the RAW data stored at the Tier-0 facility. The CAF also foresees resources for interactive login by a number of collaborators located at CERN, as an entry...

10.1088/1742-6596/219/5/052022 article EN Journal of Physics Conference Series 2010-04-01

During normal data taking CMS expects to support potentially as many as 2000 analysis users. Since the beginning of 2008 there have been more than 800 individuals who have submitted a remote analysis job to the CMS computing infrastructure. The bulk of these users will be supported at the over 40 Tier-2 centres. Supporting a globally distributed community of users on a distributed set of computing clusters is a task that requires reconsidering the methods of user support for Analysis Operations. In 2008 CMS formed an Analysis Support Task Force in preparation for large-scale physics analysis activities. The charge...

10.1088/1742-6596/219/7/072007 article EN Journal of Physics Conference Series 2010-04-01

The full optimization of the design and operation of instruments whose functioning relies on the interaction of radiation with matter is a super-human task, given the large dimensionality of the space of possible choices for geometry, detection technology, materials, data-acquisition and information-extraction techniques, and the interdependence of the related parameters. On the other hand, massive potential gains in performance over standard, "experience-driven" layouts are in principle within our reach if an objective function fully...

10.48550/arxiv.2203.13818 preprint EN other-oa arXiv (Cornell University) 2022-01-01

The CMS experiment is currently developing a computing system capable of serving, processing and archiving the large number of events that will be generated when the detector starts taking data. During 2004 CMS undertook a large-scale data challenge to demonstrate its ability to cope with a sustained data-taking rate equivalent to 25% of the startup rate. Its goals were: to run event reconstruction at CERN for a sustained period at a 25 Hz input rate; to distribute the data to several regional centers; and to enable data access at those centers for analysis. Grid middleware was...

10.1109/tns.2005.852755 article EN IEEE Transactions on Nuclear Science 2005-08-01
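
A back-of-envelope check (not from the paper) of what the sustained 25 Hz goal implies, assuming a purely hypothetical raw event size of 1.5 MB:

    # Illustrative arithmetic only; the real DC04 event sizes may differ.
    event_rate_hz = 25
    event_size_mb = 1.5                        # assumed for illustration

    throughput_mb_s = event_rate_hz * event_size_mb
    daily_volume_tb = throughput_mb_s * 86400 / 1e6

    print(f"sustained throughput ~ {throughput_mb_s:.1f} MB/s")
    print(f"daily volume         ~ {daily_volume_tb:.2f} TB/day")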

CMS has a distributed computing model, based on a hierarchy of tiered regional centres. However, the end physicist is not interested in the details of the model nor in the complexity of the underlying infrastructure, but only in accessing and using the remote services efficiently and easily. The CMS Remote Analysis Builder (CRAB) is the official tool that allows access to distributed data in a transparent way. We present the current development direction, which is focused on improving the interface presented to the user and on adding intelligence to CRAB such that it can be used to automate more of the work done...

10.1088/1742-6596/219/7/072019 article EN Journal of Physics Conference Series 2010-04-01

Starting from 2008, the CMS experiment will produce several Pbytes of data every year, to be distributed over many computing centers geographically dispersed in different countries. The computing model defines how the data has to be distributed and accessed in order to enable physicists to run their analysis efficiently on the data. The analysis is thus performed in a distributed way using the Grid infrastructure. CRAB (CMS Remote Analysis Builder) is a specific tool, designed and developed by the CMS collaboration, that allows transparent access to distributed data for the end physicist. It interacts with the local user environment...

10.1109/nssmic.2008.4774652 article EN IEEE Nuclear Science Symposium conference record 2008-10-01

The user analysis of the CMS experiment is performed in a distributed way using both Grid and dedicated resources. In order to insulate the users from the details of the computing fabric, CMS relies on the CRAB (CMS Remote Analysis Builder) package as an abstraction layer. CMS has recently switched from a client-server architecture to a purely client-based solution, with ssh being used to interface with the HTCondor-based glideinWMS batch system. This switch has resulted in a significant improvement in user satisfaction, as well as in a simplification of the code base and operation...

10.1088/1742-6596/513/3/032006 article EN Journal of Physics Conference Series 2014-06-11
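
For context, here is a minimal sketch of submitting a job to an HTCondor pool with the htcondor Python bindings, illustrating the kind of batch system glideinWMS builds on; this is not CRAB's internal code, and the job itself is a placeholder:

    # Hypothetical sketch: queue one job on the local HTCondor scheduler.
    import htcondor

    sub = htcondor.Submit({
        "executable": "/bin/echo",
        "arguments": "hello from the pool",
        "output": "job.out",
        "error": "job.err",
        "log": "job.log",
    })

    schedd = htcondor.Schedd()           # local scheduler daemon
    result = schedd.submit(sub)          # queue one job
    print("submitted cluster", result.cluster())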

The CMS software stack (CMSSW) is built on a nightly basis for multiple hardware architectures and compilers, in order to benefit from the diverse platforms. In practice, still, only x86_64 binaries are used in production, supported by the workload management tools in charge of production and analysis job delivery on the distributed computing infrastructure. Profiting from an INFN grant at CINECA, a PRACE Tier-0 Center, tests have been carried out using the IBM Power9 nodes of the Marconi100 HPC system. A first study...

10.1088/1742-6596/2438/1/012031 article EN Journal of Physics Conference Series 2023-02-01

While in the business world the cloud paradigm is typically implemented by purchasing resources and services from third-party providers (e.g. Amazon), in the scientific environment there is usually a need for on-premises IaaS infrastructures which allow efficient usage of hardware distributed among (and owned by) different administrative domains. In addition, the requirement of open-source adoption has led to the choice of products like OpenStack by many organizations.

10.1088/1742-6596/664/2/022016 article EN Journal of Physics Conference Series 2015-12-23

In order to prepare the Physics Technical Design Report, due by the end of 2005, the CMS experiment needs to simulate, reconstruct and analyse about 100 million events, corresponding to more than 200 TB of data. The data will be distributed to several Computing Centres. To provide access to the whole sample for all the world-wide dispersed physicists, CMS is developing a layer of software that uses the Grid tools provided by the LCG project to gain access to distributed resources, and that aims to offer a user-friendly interface to physicists submitting analysis jobs. To achieve these aims, CMS will use tools from...

10.1109/nssmic.2004.1462662 article EN IEEE Symposium Conference Record Nuclear Science 2004. 2005-08-10

The Cloud Area Padovana has been running for almost two years. This is an OpenStack-based scientific cloud, spread across two different sites: the INFN Padova Unit and the Legnaro National Labs. The hardware resources have been scaled horizontally and vertically, by upgrading some hypervisors and adding new ones: currently it provides about 1100 cores. Some in-house developments were also integrated in the OpenStack dashboard, such as a tool for user and project registrations with direct support for the INFN-AAI Identity Provider...

10.1088/1742-6596/898/5/052007 article EN Journal of Physics Conference Series 2017-10-01