Domenico Giordano

ORCID: 0000-0002-9789-3188
Research Areas
  • Distributed and Parallel Computing Systems
  • Advanced Data Storage Technologies
  • Scientific Computing and Data Management
  • Parallel Computing and Optimization Techniques
  • Particle physics theoretical and experimental studies
  • Distributed systems and fault tolerance
  • Big Data Technologies and Applications
  • Particle Detector Development and Performance
  • Cloud Computing and Resource Management
  • Radiation Detection and Scintillator Technologies
  • Peer-to-Peer Network Technologies
  • Anomaly Detection Techniques and Applications
  • 3D Surveying and Cultural Heritage
  • Augmented Reality Applications
  • Wireless Sensor Networks for Data Analysis
  • Magnetic Field Sensors Techniques
  • Software System Performance and Reliability
  • Data Stream Mining Techniques

European Organization for Nuclear Research
2012-2024

Benchmark Research (United States)
2019

Istituto Nazionale di Fisica Nucleare, Sezione di Bari
2008-2009

HEPScore is a new CPU benchmark created to replace HEPSPEC06, which is currently used by WLCG for procurement, computing resource pledges, usage accounting and performance studies. The development of the benchmark, based on HEP applications or workloads, has involved many contributions from software developers, data analysts, experts of the experiments, representatives of several data centres and site managers. In this contribution, we review the selection of the workloads and the validation of the benchmark.

10.1051/epjconf/202429507024 article EN cc-by EPJ Web of Conferences 2024-01-01

During the first two years of data taking, the CMS experiment has collected over 20 PetaBytes of data and processed and analyzed it on the distributed, multi-tiered computing infrastructure of the WorldWide LHC Computing Grid. Given the increasing volume of data that has to be stored and efficiently analyzed, it is a challenge for several experiments to optimize and automate data placement strategies in order to fully profit from the available network and storage resources and to facilitate daily operations. Building on the previous experience acquired by ATLAS, we have developed...
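As a rough illustration of the kind of popularity-driven placement automation described here, the sketch below maps recent access counts to a desired replica count; the thresholds and dataset names are invented for illustration and are not taken from the CMS system.

```python
def target_replicas(weekly_accesses: int, min_replicas: int = 1, max_replicas: int = 4) -> int:
    """Map recent access counts to a desired replica count.

    The thresholds are illustrative; a production policy would also consider
    dataset size, site quotas and network topology.
    """
    if weekly_accesses == 0:
        return min_replicas          # keep a custodial copy only
    if weekly_accesses < 10:
        return 2
    if weekly_accesses < 100:
        return 3
    return max_replicas

# Hypothetical weekly access counts per dataset
datasets = {"/Dataset/A/AOD": 250, "/Dataset/B/AOD": 4, "/Dataset/C/AOD": 0}
for name, accesses in datasets.items():
    print(name, "->", target_replicas(accesses), "replicas")
```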

10.1088/1742-6596/396/3/032047 article EN Journal of Physics Conference Series 2012-12-13

The ATLAS experiment at the LHC has successfully incorporated cloud computing technology and resources into its primarily grid-based model of distributed computing. Cloud R&D activities continue to mature and transition into stable production systems, while ongoing evolutionary changes are still needed to adapt and refine the approaches used, in response to changes in prevailing technology. In addition, completely new developments are needed to handle emerging requirements. This paper describes the overall evolution of cloud computing in ATLAS. The current status...

10.1088/1742-6596/664/2/022038 article EN Journal of Physics Conference Series 2015-12-23

Throughout the first half of LHC Run 2, ATLAS cloud computing has undergone a period of consolidation, characterized by building upon previously established systems, with the aim of reducing operational effort, improving robustness, and reaching higher scale. This paper describes the current state of ATLAS cloud computing. Cloud activities are converging on a common contextualization approach for virtual machines, and cloud resources are sharing monitoring and service discovery components. We describe the integration of Vacuum resources,...

10.1088/1742-6596/898/5/052008 article EN Journal of Physics Conference Series 2017-10-01

During the LHC Run-1 data taking, all experiments collected large data volumes from proton-proton and heavy-ion collisions. The collision data, together with massive volumes of simulated data, were replicated in multiple copies, transferred among various Tier levels, and transformed/slimmed in format/content. These data were then accessed (both locally and remotely) by groups of the distributed analysis communities exploiting the WorldWide LHC Computing Grid infrastructure and services. While efficient data placement strategies - optimal redistribution...

10.1088/1742-6596/664/3/032003 article EN Journal of Physics Conference Series 2015-12-23

The HEPiX Benchmarking Working Group has developed a framework to benchmark the performance of a computational server using the software applications of the High Energy Physics (HEP) community. This framework consists of two main components, named HEP-Workloads and HEPscore. HEP-Workloads is a collection of standalone production applications provided by a number of HEP experiments. HEPscore is designed to run the HEP-Workloads and provide an overall measurement that is representative of the computing power of a system. It is able to measure systems with different processor architectures...
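As a rough illustration of how a HEPscore-style aggregation of per-workload results into a single figure could look, the sketch below normalizes hypothetical per-workload throughputs to a reference machine and combines them with a geometric mean; the workload names and all numbers are invented, and the real HEPscore configuration may differ.

```python
import math

def aggregate_score(workload_scores, reference_scores):
    """Combine per-workload throughputs into one benchmark figure.

    Each workload throughput is normalized to a reference machine and the
    normalized ratios are combined with a geometric mean.
    """
    ratios = [workload_scores[name] / reference_scores[name]
              for name in reference_scores]
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

# Hypothetical per-workload throughputs (events/second) on the machine
# under test and on a reference machine.
scores = {"gen-sim": 4.2, "digi-reco": 1.8, "analysis": 9.5}
reference = {"gen-sim": 2.0, "digi-reco": 1.0, "analysis": 5.0}
print(f"overall score: {aggregate_score(scores, reference):.2f}")
```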

10.1007/s41781-021-00074-y article EN cc-by Computing and Software for Big Science 2021-12-01

In this document we summarize the experience acquired in the execution of Phase 1. The objective of Phase 1 is "[...] to achieve an informal technical evaluation of Azure and Batch, to understand the technology, APIs, ease-of-use, and any obvious barriers to integration with existing WLCG tools and processes. [...] The resulting feedback will give both parties a chance to assess basic feasibility for more advanced activities" [1].

10.5281/zenodo.48495 article EN 2016-03-29

As of 2009, HEP-SPEC06 (HS06) is the benchmark adopted by the WLCG community to describe the computing requirements of the LHC experiments, assess the capacity of data centres and procure new hardware. In recent years, following the evolution of CPU architectures and the adoption of new programming paradigms, such as multi-threading and vectorization, it has turned out that HS06 is less representative of the relevant applications running on the WLCG infrastructure. Meanwhile, in 2017 a new SPEC generation of benchmarks for CPU-intensive workloads has been released: SPEC CPU 2017....
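A simple way to quantify how representative a benchmark is of HEP applications, in the spirit of the studies mentioned above, is to correlate benchmark scores with application throughput measured on the same set of machines; the sketch below does this with invented numbers.

```python
import numpy as np

# Hypothetical per-machine measurements: benchmark score vs. throughput
# (events/second) of a HEP application on the same machines.
benchmark_score = np.array([220.0, 310.0, 415.0, 180.0, 520.0])
app_throughput = np.array([10.5, 14.8, 20.1, 8.9, 24.7])

# Pearson correlation: values close to 1 indicate the benchmark tracks
# the application throughput well across hardware generations.
r = np.corrcoef(benchmark_score, app_throughput)[0, 1]
print(f"Pearson correlation: {r:.3f}")
```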

10.1051/epjconf/201921408011 article EN cc-by EPJ Web of Conferences 2019-01-01

Helix Nebula - the Science Cloud Initiative is a public-private partnership between Europe's leading scientific research organisations and European IT cloud providers. CERN contributed to this initiative by providing a flagship use case: workloads from the ATLAS experiment. Aiming to gain experience in managing and monitoring large-scale cloud deployments, as well as in benchmarking the resources, a sizable Monte Carlo production was performed using the platform. This contribution describes and summarizes the lessons learned...

10.1088/1742-6596/664/2/022019 article EN Journal of Physics Conference Series 2015-12-23

After two years of LHC data taking, processing and analysis with numerous changes in computing technology, a number of aspects of the experiments' computing, as well as WLCG deployment and operations, need to evolve. As part of the activities of the Experiment Support group in CERN's IT department, reinforced by effort from the EGI-InSPIRE project, we present work aimed at common solutions across all LHC experiments. Such solutions allow us not only to optimize development manpower but also to offer lower long-term maintenance and support costs....

10.1088/1742-6596/396/3/032048 article EN Journal of Physics Conference Series 2012-12-13

In the last two years of Large Hadron Collider (LHC) [1] operation, the experiments have made considerable usage of Grid resources for data storage and offline analysis. To achieve a successful exploitation of these resources, significant operational and human effort has been put in place, and it is now the moment to improve the available infrastructure. In this respect, the Compact Muon Solenoid (CMS) [2] Popularity project aims to track the experiment's data access patterns (frequency of access, protocols, users, sites and CPU), providing a base for automation...
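A minimal sketch of the kind of aggregation a popularity service performs, turning raw access records into per-dataset counters of accesses, users and CPU; the records, field layout and dataset names below are invented for illustration.

```python
from collections import defaultdict

# Hypothetical raw access records: (dataset, user, site, cpu_hours)
accesses = [
    ("/Dataset/A/AOD", "alice", "T2_IT_Bari", 3.5),
    ("/Dataset/A/AOD", "bob", "T1_US_FNAL", 1.2),
    ("/Dataset/B/AOD", "alice", "T2_IT_Bari", 0.7),
]

popularity = defaultdict(lambda: {"accesses": 0, "users": set(), "cpu_hours": 0.0})
for dataset, user, site, cpu in accesses:
    entry = popularity[dataset]
    entry["accesses"] += 1
    entry["users"].add(user)
    entry["cpu_hours"] += cpu

# Rank datasets by number of accesses, as one possible input to placement decisions.
for dataset, stats in sorted(popularity.items(), key=lambda kv: -kv[1]["accesses"]):
    print(dataset, stats["accesses"], len(stats["users"]), stats["cpu_hours"])
```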

10.22323/1.162.0107 article EN cc-by-nc-sa 2012-12-04

Benchmarking of CPU resources in WLCG has been based on the HEP-SPEC06 (HS06) suite for over a decade. It has recently become clear that HS06, which is based on real applications from non-HEP domains, no longer describes typical HEP workloads. The aim of the HEP-Benchmarks project is to develop a new benchmark suite for WLCG compute resources, based on the applications of the LHC experiments. By construction, these benchmarks are thus guaranteed to have a score highly correlated to the throughputs of those applications, and a usage pattern of the resources similar to theirs. Linux containers and CernVM-FS...
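A minimal sketch of how a driver could launch one containerized workload and read back a JSON score report, assuming Docker is available; the image name and report file name are placeholders, not the actual HEP-Workloads layout.

```python
import json
import subprocess
import tempfile
from pathlib import Path

# Hypothetical image name; the real HEP-Workloads containers and their
# output layout may differ.
IMAGE = "example/hep-workload-gen-sim:latest"

def run_workload(image: str) -> dict:
    """Run one containerized workload and return its parsed JSON report."""
    with tempfile.TemporaryDirectory() as results_dir:
        subprocess.run(
            ["docker", "run", "--rm",
             "-v", f"{results_dir}:/results",
             image],
            check=True,
        )
        report = Path(results_dir) / "report.json"   # hypothetical file name
        return json.loads(report.read_text())

if __name__ == "__main__":
    scores = run_workload(IMAGE)
    print(scores.get("throughput_score"))            # hypothetical key
```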

10.1051/epjconf/202024507035 article EN cc-by EPJ Web of Conferences 2020-01-01

Anomaly detection in the CERN OpenStack cloud is a challenging task due to the large scale of the computing infrastructure and, consequently, the volume of monitoring data to analyse. The current solution to spot anomalous servers relies on a threshold-based alarming system carefully set by the managers on the performance metrics of each infrastructure's component. This contribution explores fully automated, unsupervised machine learning solutions in the anomaly detection field for time series metrics, adapting both traditional and deep...
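A minimal sketch of an unsupervised, "traditional" machine-learning detector over per-host metric windows, using scikit-learn's Isolation Forest on synthetic data; it stands in for the family of models explored in the paper rather than reproducing its pipeline.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical feature matrix: one row per host and time window, columns are
# aggregated metrics (e.g. mean CPU load, memory usage, I/O wait).
normal = rng.normal(loc=[0.4, 0.5, 0.05], scale=0.05, size=(500, 3))
anomalous = rng.normal(loc=[0.95, 0.9, 0.4], scale=0.05, size=(5, 3))
features = np.vstack([normal, anomalous])

# Fit an unsupervised model; 'contamination' is the expected anomaly fraction.
model = IsolationForest(contamination=0.01, random_state=0).fit(features)
flags = model.predict(features)          # -1 marks suspected anomalies
print("suspected anomalous windows:", int((flags == -1).sum()))
```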

10.1051/epjconf/202125102011 article EN cc-by EPJ Web of Conferences 2021-01-01

As a joint effort from various communities involved in the Worldwide LHC Computing Grid, the Operational Intelligence project aims at increasing the level of automation in computing operations and reducing human interventions. The distributed computing systems currently deployed by the LHC experiments have proven to be mature and capable of meeting the experimental goals, allowing the timely delivery of scientific results. However, a substantial number of interventions from software developers, shifters, and operational teams is needed to efficiently...

10.3389/fdata.2021.753409 article EN cc-by Frontiers in Big Data 2022-01-07

The increase in the scale of LHC computing expected for Run 3 and even more so for Run 4 (HL-LHC) over the next ten years will certainly require radical changes to the computing models and the data processing of the experiments. Translating the requirements of the physics programmes into resource needs is a complicated process and subject to significant uncertainties. For this reason, WLCG has established a working group to develop methodologies and tools intended to characterise the workloads, better understand their interaction with the computing infrastructure, calculate...
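A back-of-the-envelope sketch of the translation from physics requirements to resource needs: converting an event count and a per-event CPU cost into a number of cores. All numbers are illustrative; a real model sums over many workflows and covers disk and tape as well.

```python
SECONDS_PER_YEAR = 365 * 24 * 3600

def required_cores(events_per_year: float,
                   cpu_seconds_per_event: float,
                   cpu_efficiency: float = 0.8) -> float:
    """Back-of-the-envelope CPU need for one processing step.

    All inputs are illustrative placeholders; a real model would also
    account for reprocessing campaigns and simulation-to-data ratios.
    """
    total_cpu_seconds = events_per_year * cpu_seconds_per_event
    return total_cpu_seconds / (SECONDS_PER_YEAR * cpu_efficiency)

# e.g. 1e10 events per year at 30 CPU-seconds per event
print(f"{required_cores(1e10, 30):,.0f} cores")
```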

10.1051/epjconf/201921403019 article EN cc-by EPJ Web of Conferences 2019-01-01

The ongoing integration of clouds into the WLCG raises the need for detailed health and performance monitoring of the virtual resources in order to prevent problems of degraded service and interruptions due to undetected failures. When working at scale, the existing diversity of tools can lead to a metric overflow, whereby operators have to manually collect and correlate data from several tools and frameworks, resulting in tens of different metrics to be constantly interpreted and analyzed per virtual machine. In this paper we present an ESPER-based standalone...
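A minimal Python stand-in for the kind of sliding-window rule one would express as an Esper EPL statement: alert when the recent average of a metric on a virtual machine exceeds a threshold. The window size, threshold and metric names are invented; this is not the paper's actual ESPER configuration.

```python
from collections import defaultdict, deque

WINDOW = 5            # number of recent samples to consider per VM
THRESHOLD = 0.9       # illustrative alert threshold on the metric

windows = defaultdict(lambda: deque(maxlen=WINDOW))

def on_metric(vm: str, name: str, value: float):
    """Evaluate a simple sliding-window rule on each incoming sample.

    Mimics, in plain Python, a rule that a complex-event-processing engine
    such as Esper would express declaratively: alert when the average of
    the last WINDOW samples of a metric exceeds THRESHOLD.
    """
    key = (vm, name)
    windows[key].append(value)
    if len(windows[key]) == WINDOW:
        avg = sum(windows[key]) / WINDOW
        if avg > THRESHOLD:
            print(f"ALERT {vm}: {name} avg={avg:.2f} over last {WINDOW} samples")

# Hypothetical stream of metric samples for one virtual machine
for v in [0.5, 0.7, 0.95, 0.97, 0.99, 0.98]:
    on_metric("vm-042", "cpu_load", v)
```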

10.1088/1742-6596/898/4/042020 article EN Journal of Physics Conference Series 2017-10-01

The event display and data quality monitoring visualisation systems are especially crucial for commissioning CMS in the imminent physics run at the LHC. They have already proved invaluable for the magnet test and cosmic challenge. We describe how these systems are used to navigate and filter the immense amounts of complex data from the detector and prepare clear and flexible views of the salient features for shift crews and offline users. These allow shift staff and experts a top-level general view as well as very specific elements in real time, to help validate and ascertain the causes...

10.1088/1742-6596/119/3/032031 article EN Journal of Physics Conference Series 2008-07-01

The increase in the scale of LHC computing during Run 3 and Run 4 (HL-LHC) will certainly require radical changes to the computing models and the data processing of the experiments. The working group established by WLCG and the HEP Software Foundation to investigate all aspects of the cost of computing and how to optimise them has continued producing results and improving our understanding of this process. In particular, the experiments have developed more sophisticated ways to calculate their resource needs, and we have a much more detailed process for calculating infrastructure costs. This includes studies...

10.1051/epjconf/202024503014 article EN cc-by EPJ Web of Conferences 2020-01-01

ATLAS, CERN-IT, and CMS embarked on a project to develop a common system for analysis workflow management, resource provisioning and job scheduling. This distributed computing infrastructure was based on elements of PanDA and prior tools. After an extensive feasibility study and the development of a proof-of-concept prototype, the project now has a basic system that supports the use cases of both experiments via common services. In this paper we will discuss the state of the current solution and give an overview of all the components of the system.

10.1088/1742-6596/513/3/032064 article EN Journal of Physics Conference Series 2014-06-11

The adoption of cloud technologies by the LHC experiments places the fabric management burden of monitoring the virtualized resources upon the VO. In addition to monitoring the status of the virtual machines and triaging the results, it must be understood if the resources actually provided match any agreements relating to their supply. Monitoring the instantiated resources is therefore a fundamental activity, and hence this paper describes how the Ganglia monitoring system can be used for cloud computing in the LHC experiments. Expanding on this, it is then shown how the integral time-series data obtained can be re-purposed...
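A minimal sketch of re-purposing monitoring time-series data to check delivered capacity against a supply agreement: integrate a core-availability series into delivered core-hours and compare it with a pledged figure. The samples and the pledge are invented; in practice the series would come from the monitoring system's stored data.

```python
# Hypothetical samples of (timestamp_seconds, cores_available) taken from a
# monitoring time series.
samples = [(0, 100), (3600, 96), (7200, 100), (10800, 0), (14400, 100)]

def delivered_core_hours(samples):
    """Integrate the step-wise availability time series into core-hours."""
    total = 0.0
    for (t0, cores), (t1, _) in zip(samples, samples[1:]):
        total += cores * (t1 - t0) / 3600.0
    return total

pledged_core_hours = 400.0       # illustrative agreed supply for the period
delivered = delivered_core_hours(samples)
print(f"delivered {delivered:.0f} of {pledged_core_hours:.0f} pledged core-hours "
      f"({100 * delivered / pledged_core_hours:.0f}%)")
```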

10.1088/1742-6596/664/2/022013 article EN Journal of Physics Conference Series 2015-12-23

The CMS Silicon Strip Tracker (SST) is the largest detector of this kind ever built for a high energy physics experiment. It consists of more than ten million analog read-out channels, split between 15,148 modules. To ensure that the SST performance fully meets the requirements of the experiment, it is precisely calibrated and constantly monitored to identify, at a very early stage, any possible problem both in the data acquisition and in the reconstruction chain. Due to its granularity, the operation of the SST is a challenging task. In this paper we describe...

10.22323/1.068.0043 article EN cc-by-nc-sa 2009-07-03