A. Sciabà

ORCID: 0000-0002-4371-1430
Research Areas
  • Particle physics theoretical and experimental studies
  • Quantum Chromodynamics and Particle Interactions
  • High-Energy Particle Collisions Research
  • Distributed and Parallel Computing Systems
  • Advanced Data Storage Technologies
  • Dark Matter and Cosmic Phenomena
  • Scientific Computing and Data Management
  • Cosmology and Gravitation Theories
  • Parallel Computing and Optimization Techniques
  • Particle Detector Development and Performance
  • Superconducting Materials and Applications
  • Neutrino Physics Research
  • Computational Physics and Python Applications
  • Cloud Computing and Resource Management
  • Atomic and Subatomic Physics Research
  • IPv6, Mobility, Handover, Networks, Security
  • Distributed systems and fault tolerance
  • Particle Accelerators and Free-Electron Lasers
  • Software-Defined Networks and 5G
  • Service-Oriented Architecture and Web Services
  • Particle accelerators and beam dynamics
  • Black Holes and Theoretical Physics
  • Cold Atom Physics and Bose-Einstein Condensates
  • Planetary Science and Exploration
  • Advanced Electrical Measurement Techniques

European Organization for Nuclear Research
2011-2024

Imperial College London
1999-2024

Scuola Normale Superiore
1996-2022

Istituto Nazionale di Fisica Nucleare, Sezione di Pisa
1995-2022

Istituto Nazionale di Fisica Nucleare
2001-2003

University of Pisa
1995-2003

The discovery by the ATLAS and CMS experiments of a new boson with a mass around 125 GeV, whose measured properties are compatible with those of the Standard-Model Higgs boson, coupled with the absence of discoveries of phenomena beyond the Standard Model at the TeV scale, has triggered interest in ideas for future Higgs factories. A circular e+e- collider hosted in an 80 to 100 km tunnel, TLEP, is among the most attractive solutions proposed so far. It offers a clean experimental environment, produces high luminosity for top-quark, W and Z studies, and accommodates multiple...

10.1007/jhep01(2014)164 article EN cc-by Journal of High Energy Physics 2014-01-01

HEPScore is a new CPU benchmark created to replace HEPSPEC06, which is currently used by WLCG for procurement, computing resource pledges, usage accounting and performance studies. The development of the benchmark, based on HEP applications or workloads, has involved many contributions from software developers, data analysts, experts of the experiments, representatives of several computing centres and site managers. In this contribution, we review the selection of the workloads and the validation of the benchmark.

10.1051/epjconf/202429507024 article EN cc-by EPJ Web of Conferences 2024-01-01
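The abstract above describes HEPScore only at a high level. As an illustration of how per-workload results can be combined into a single benchmark number, here is a minimal sketch, assuming (as benchmarks of this style typically do) that each workload score is normalized to a reference machine and the ratios are combined with a geometric mean; the function and dictionary names are hypothetical, not the actual HEPScore implementation:

```python
import math

def combined_score(workload_scores, reference_scores):
    """Combine per-workload benchmark results into one number.

    Sketch only: each workload is normalized to a reference machine,
    then the ratios are combined with a geometric mean so that no
    single workload dominates the result.
    """
    ratios = [score / reference_scores[name]
              for name, score in workload_scores.items()]
    # Geometric mean computed in log space for numerical stability.
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

# A machine 2x and 8x faster than the reference on two workloads
# gets an overall score of ~4.0 (the geometric mean of 2 and 8).
print(combined_score({"sim": 2.0, "reco": 8.0}, {"sim": 1.0, "reco": 1.0}))
```

A geometric mean (rather than an arithmetic one) keeps the score meaningful when workloads have very different absolute throughputs.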

HL-LHC will confront the WLCG community with enormous data storage, management and access challenges. These are as much technical as economic. In the WLCG-DOMA Access working group, members of the experiments and site managers have explored different models for storage and strategies to reduce cost and complexity, taking into account the boundary conditions given by our community. Several of these scenarios have been evaluated quantitatively, such as the Data Lake model and incremental improvements of the current computing model with respect to resource...

10.1051/epjconf/202024504027 article EN cc-by EPJ Web of Conferences 2020-01-01

During the first run, CMS collected and processed more than 10B data events and simulated 15B events. Up to 100k processor cores were used simultaneously and 100PB of storage was managed. Each month petabytes of data were moved and hundreds of users accessed data samples. In this document we discuss the operational experience from this first run. We present the workflows and data flows that were executed, the tools and services developed, and the operations and shift models used to sustain the system. Many techniques followed the original computing planning, but some were reactions to difficulties...

10.1088/1742-6596/513/3/032040 article EN Journal of Physics Conference Series 2014-06-11

Modern Grid middleware is built around components providing basic functionality, such as data storage, authentication, security, job management, resource monitoring and reservation. In this paper we describe the Computing Resource Execution And Management (CREAM) service. CREAM provides a Web service-based job execution and management capability for Grid systems; in particular, it is being used within the gLite middleware. CREAM exposes a Web service interface allowing conforming clients to submit and manage computational jobs...

10.1088/1742-6596/119/6/062004 article EN Journal of Physics Conference Series 2008-07-01

The computing system of the CMS experiment works using distributed resources from more than 60 computing centres worldwide. These centres, located in Europe, America and Asia, are interconnected by the Worldwide LHC Computing Grid. Their operation requires a stable and reliable behaviour of the underlying infrastructure. CMS has established a procedure to extensively test all relevant aspects of a Grid site, such as the ability to efficiently use the network to transfer data, the functionality of the site services and the capability to sustain various...

10.1088/1742-6596/219/6/062047 article EN Journal of Physics Conference Series 2010-04-01

The HEPiX (http://www.hepix.org) IPv6 Working Group has been investigating the many issues which feed into the decision on the timetable for the use of IPv6 (http://www.ietf.org/rfc/rfc2460.txt) networking protocols in High Energy Physics (HEP) computing, in particular in the Worldwide Large Hadron Collider (LHC) Computing Grid (WLCG). RIPE NCC, the European Regional Internet Registry (RIR), ran out of IPv4 addresses in September 2012. The North and South American RIRs are expected to run out soon. In recent months it has become more...

10.1088/1742-6596/513/6/062026 article EN Journal of Physics Conference Series 2014-06-11

The CMS experiment at the LHC started using the Resource Broker (developed by the EDG and LCG projects) to submit Monte Carlo production and analysis jobs to the distributed computing resources of the WLCG infrastructure over 6 years ago. Since 2006 the gLite Workload Management System (WMS) and Logging & Bookkeeping (LB) services have been used. The interaction with the gLite WMS/LB happens through the ProdAgent and CRAB frameworks, respectively, via a common component, BOSSLite. Important improvements have recently been made in the WMS/LB as well as in the tools, and the intrinsic independence of different...

10.1088/1742-6596/219/6/062007 article EN Journal of Physics Conference Series 2010-04-01

For several years the LHC experiments have relied on the WLCG Service Availability Monitoring framework (SAM) to run functional tests on their distributed computing systems. The SAM tests have become an essential tool to measure the reliability of the Grid infrastructure and to ensure reliable computing operations, both for the sites and the experiments. Recently, the old system was replaced with a completely new one based on Nagios and ActiveMQ, to better support the transition to EGI and its more distributed operations model, and to implement scalability and functionality enhancements. This required all...

10.1088/1742-6596/396/3/032100 article EN Journal of Physics Conference Series 2012-12-13

The ATLAS Experiment at the LHC generates petabytes of data that are distributed among 160 computing sites all over the world and processed continuously by various central production and user analysis tasks. Data popularity, typically measured as the number of accesses, plays an important role in resolving data management issues: deleting, replicating, and moving data between tapes, disks and caches. These procedures were still carried out in a semi-manual mode, and we have now focused our efforts on automating them, making use of historical...

10.1051/epjconf/202125102013 article EN cc-by EPJ Web of Conferences 2021-01-01
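The automation described above hinges on access counts and recency. A toy sketch of such a popularity-driven policy follows; the thresholds, names and rules are illustrative assumptions for exposition, not the ATLAS production algorithm:

```python
from collections import Counter
from datetime import datetime, timedelta

def replication_actions(access_log, now, hot_threshold=100, cold_days=180):
    """Classify datasets from an access log of (dataset, timestamp) pairs.

    Toy policy: frequently accessed datasets gain a replica; datasets
    untouched for a long period become tape-archival candidates.
    """
    counts = Counter()
    last_seen = {}
    for dataset, ts in access_log:
        counts[dataset] += 1
        last_seen[dataset] = max(last_seen.get(dataset, ts), ts)
    actions = {}
    for ds, n in counts.items():
        if n >= hot_threshold:
            actions[ds] = "add-replica"
        elif (now - last_seen[ds]).days > cold_days:
            actions[ds] = "archive-to-tape"
        else:
            actions[ds] = "keep"
    return actions

now = datetime(2024, 1, 1)
log = [("hot.dataset", now - timedelta(days=1))] * 100 \
    + [("cold.dataset", now - timedelta(days=400))]
print(replication_actions(log, now))
```

The paper's point is precisely that historical access data lets rules like these replace the earlier semi-manual decisions.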

Frequent validation and stress testing of the network, storage and CPU resources of a grid site is essential to achieve high performance and reliability. HammerCloud was previously introduced with the goals of enabling VO- and site-administrators to run such tests in an automated or on-demand manner. The ATLAS, CMS and LHCb experiments have all developed VO plugins for the service and have successfully integrated it into their operations infrastructures. This work will present the experience of running HammerCloud at full scale for more than 3 years...

10.1088/1742-6596/396/3/032111 article EN Journal of Physics Conference Series 2012-12-13

The HEPiX Benchmarking Working Group has developed a framework to benchmark the performance of a computational server using the software applications of the High Energy Physics (HEP) community. This framework consists of two main components, named HEP-Workloads and HEPscore. HEP-Workloads is a collection of standalone production applications provided by a number of HEP experiments. HEPscore is designed to run the HEP-Workloads and to provide an overall measurement that is representative of the computing power of a system. It is able to measure the performance of systems with different processor architectures...

10.1007/s41781-021-00074-y article EN cc-by Computing and Software for Big Science 2021-12-01

The computing system of the CMS experiment uses distributed resources from more than 60 computing centres worldwide. Located in Europe, America and Asia, these centres are interconnected by the Worldwide LHC Computing Grid. Their operation requires a stable and reliable behavior of the underlying infrastructure. CMS has established a procedure to extensively test all relevant aspects of a Grid site, such as the ability to efficiently use the network to transfer data, the functionality of the site services, and the capability to sustain the various workflows (Monte Carlo simulation, event...

10.1109/nssmic.2008.4774771 article EN IEEE Nuclear Science Symposium conference record 2008-10-01

In the framework of a distributed computing environment such as WLCG, monitoring has a key role in keeping under control the activities going on at sites located in different countries and involving people based at many institutes. To cope with such a large-scale, heterogeneous infrastructure, it is necessary to have tools providing a complete and reliable view of the overall performance. Moreover, the structure of a monitoring system critically depends on the objects to monitor and the users it is addressed to. In this article we describe two systems, both aimed...

10.1088/1742-6596/219/6/062003 article EN Journal of Physics Conference Series 2010-04-01

The ATLAS experiment has been running continuous simulated event production for more than two years. A considerable fraction of the jobs is daily submitted and handled via the gLite Workload Management System (WMS), which overcomes several limitations of the previous LCG Resource Broker. The WMS was tested very intensively for the LHC experiments' use cases for six months, both in terms of performance and reliability. The tests were carried out by the Experiment Integration Support team (in close contact with the experiments) together with the EGEE...

10.1088/1742-6596/119/5/052009 article EN Journal of Physics Conference Series 2008-07-01

After many years of preparation, the CMS computing system has reached a situation where stability in operations limits the possibility to introduce innovative features. Nevertheless, it is this same need for stability and smooth operations that requires the introduction of features that were considered not strategic in previous phases. Examples are: adequate authorization control to prioritize access to storage and computing resources; improved monitoring to investigate problems and identify bottlenecks in the infrastructure; increased automation to reduce the manpower needed for...

10.1088/1742-6596/331/6/062032 article EN Journal of Physics Conference Series 2011-12-23

IPv4 network addresses are running out and the deployment of IPv6 networking in many places is now well underway. Following the work of the HEPiX IPv6 Working Group, a growing number of sites in the Worldwide Large Hadron Collider Computing Grid (WLCG) are deploying dual-stack IPv6/IPv4 services. The aim of this is to support the use of IPv6-only clients, i.e. worker nodes, virtual machines or containers.

10.1088/1742-6596/898/10/102008 article EN Journal of Physics Conference Series 2017-10-01
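A dual-stack service is one that IPv6-only clients can reach over IPv6 while legacy clients still connect over IPv4. The client-side half of that arrangement is an address-family fallback loop; the helper below is a minimal illustrative sketch of that idea (a hypothetical function, not WLCG code):

```python
import socket

def connect_any(host, port, timeout=5.0):
    """Connect to a host by trying every resolved address.

    Sketch: getaddrinfo() returns both IPv6 and IPv4 addresses for a
    dual-stack service; we try IPv6 first and fall back to IPv4, so
    the same code works on IPv6-only and IPv4-only clients.
    """
    infos = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
    # Prefer IPv6 results when present.
    infos.sort(key=lambda info: 0 if info[0] == socket.AF_INET6 else 1)
    last_err = None
    for family, stype, proto, _, addr in infos:
        try:
            sock = socket.socket(family, stype, proto)
            sock.settimeout(timeout)
            sock.connect(addr)
            return sock
        except OSError as err:
            last_err = err
    raise last_err or OSError("no usable address for %s" % host)
```

Production stacks usually get this behaviour from the resolver and OS rather than hand-written loops, but the fallback order is the essence of "dual-stack" from the client's point of view.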

HammerCloud was designed and born out of the needs of the grid community to test resources and automate operations from a user perspective. Recent developments in the IT space propose a shift to software-defined data centres, in which every layer of the infrastructure can be offered as a service.

10.1088/1742-6596/513/6/062031 article EN Journal of Physics Conference Series 2014-06-11

The Worldwide LHC Computing Grid infrastructure links about 200 participating computing centres affiliated with several partner projects. It is built by integrating heterogeneous computer and storage resources in diverse data centres all over the world and provides CPU and storage capacity to the LHC experiments to perform data processing and physics analysis. In order to be used by the experiments, these distributed resources should be well described, which implies easy service discovery and a detailed description of the service configuration. Currently this information...

10.1088/1742-6596/898/9/092042 article EN Journal of Physics Conference Series 2017-10-01

The CERN ATLAS experiment grid workflow system routinely manages 250 to 500 thousand concurrently running production and analysis jobs to process simulation and detector data. In total, more than 370 PB of data is distributed over 150 sites in the WLCG. At this scale, small improvements in the software and computing performance of the workflows can lead to significant resource usage gains. We reviewed, together with IT experts, several typical processing workloads for potential improvements in terms of memory and CPU usage, and disk and network I/O. All...

10.1051/epjconf/201921403021 article EN cc-by EPJ Web of Conferences 2019-01-01

The world is rapidly running out of IPv4 addresses; the number of IPv6 end systems connected to the internet is increasing; WLCG and the LHC experiments may soon have access to worker nodes and/or virtual machines (VMs) possessing only an IPv6 routable address. The HEPiX IPv6 Working Group has been investigating, testing and planning for dual-stack services over several years. Following feedback from our working group, many of the storage technologies in use have recently been made IPv6-capable. This paper presents the requirements, tests and plans...

10.1088/1742-6596/664/5/052018 article EN Journal of Physics Conference Series 2015-12-23

The WLCG monitoring system solves the challenging task of keeping track of the LHC computing activities on the WLCG infrastructure, ensuring the health and performance of the distributed services at more than 170 sites. The challenge consists of decreasing the effort needed to operate the service while satisfying constantly growing requirements for its scalability and performance. This contribution describes recent consolidation work aimed at reducing the complexity of the system, ensuring effective operations, support and management. This was done by unifying where...

10.1088/1742-6596/664/6/062054 article EN Journal of Physics Conference Series 2015-12-23