A. C. Forti

ORCID: 0000-0002-0532-7921
Research Areas
  • Particle physics theoretical and experimental studies
  • High-Energy Particle Collisions Research
  • Particle Detector Development and Performance
  • Quantum Chromodynamics and Particle Interactions
  • Dark Matter and Cosmic Phenomena
  • Computational Physics and Python Applications
  • Neutrino Physics Research
  • Distributed and Parallel Computing Systems
  • Cosmology and Gravitation Theories
  • Advanced Data Storage Technologies
  • Scientific Computing and Data Management
  • Radiation Detection and Scintillator Technologies
  • Medical Imaging Techniques and Applications
  • Advanced Mathematical Theories
  • Atomic and Subatomic Physics Research
  • Legal and Labor Studies
  • Black Holes and Theoretical Physics
  • Astrophysics and Cosmic Phenomena
  • Parallel Computing and Optimization Techniques
  • Big Data Technologies and Applications
  • Structural Analysis of Composite Materials
  • Human Rights and Immigration
  • Power Systems and Technologies
  • Diverse academic and cultural studies
  • Digital Radiography and Breast Imaging

University of Manchester
2012-2025

The Abdus Salam International Centre for Theoretical Physics (ICTP)
2020-2024

Istituto Nazionale di Fisica Nucleare, Sezione di Trieste
2023-2024

Istituto Nazionale di Fisica Nucleare, Gruppo Collegato di Udine
2023-2024

Heidelberg University
2024

A. Alikhanyan National Laboratory
2024

SR Research (Canada)
2024

Federación Española de Enfermedades Raras
2024

University of Göttingen
2023-2024

Atlas Scientific (United States)
2024

Machine learning has been applied to several problems in particle physics research, beginning with applications to high-level physics analysis in the 1990s and 2000s, followed by an explosion of applications in particle and event identification and reconstruction in the 2010s. In this document we discuss promising future research and development areas for machine learning in particle physics. We detail a roadmap for their implementation, software and hardware resource requirements, collaborative initiatives with the data science community, academia and industry, and training the particle physics community in data science...

10.48550/arxiv.1807.02876 preprint EN other-oa arXiv (Cornell University) 2018-01-01
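As a toy illustration of the event-identification task the abstract refers to, the sketch below trains a minimal logistic-regression classifier on synthetic signal/background features. All feature values and names are invented for the example; real analyses use full frameworks (e.g. TensorFlow or scikit-learn), not hand-rolled gradient descent:

```python
import math
import random

def train_logreg(events, labels, lr=0.1, epochs=200):
    """Train a tiny logistic-regression classifier by plain gradient descent."""
    n_features = len(events[0])
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(events, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid: P(signal | x)
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic "events": two invented kinematic features per event,
# with signal drawn from a different Gaussian than background.
random.seed(42)
signal = [[random.gauss(2.0, 0.5), random.gauss(1.5, 0.5)] for _ in range(200)]
background = [[random.gauss(0.5, 0.5), random.gauss(0.3, 0.5)] for _ in range(200)]
X = signal + background
y = [1] * 200 + [0] * 200

w, b = train_logreg(X, y)
acc = sum((predict(w, b, x) > 0.5) == bool(label) for x, label in zip(X, y)) / len(X)
```

On these well-separated synthetic distributions the classifier reaches high training accuracy; the point is only to show the shape of a supervised event-classification setup.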

Modern experiments collect peta-scale volumes of data and utilize vast, geographically distributed computing infrastructure that serves thousands of scientists around the world. Requirements for rapid, near real-time data processing and fast analysis cycles, and the need to run massive detector simulations to support data analysis, place a special premium on efficient use of the available computational resources. A sophisticated Workload Management System (WMS) is needed to coordinate the distribution of processing jobs in such an environment. The...

10.1051/epjconf/201921403050 article EN cc-by EPJ Web of Conferences 2019-01-01
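The brokering role of a WMS can be sketched in miniature as a priority queue of jobs matched against free CPU slots at sites. This is an illustrative toy, not the design of PanDA or any production system; all job and site names are invented:

```python
import heapq

class SimpleBroker:
    """Toy workload manager: dispatch highest-priority jobs to sites with free slots."""
    def __init__(self, site_slots):
        self.site_slots = dict(site_slots)   # site name -> free CPU slots
        self._queue = []                     # min-heap keyed on negative priority
        self._counter = 0                    # tie-breaker keeps FIFO order

    def submit(self, job_id, priority, cores=1):
        heapq.heappush(self._queue, (-priority, self._counter, job_id, cores))
        self._counter += 1

    def dispatch(self):
        """Assign queued jobs to sites; returns a list of (job_id, site)."""
        assigned, deferred = [], []
        while self._queue:
            neg_prio, order, job_id, cores = heapq.heappop(self._queue)
            site = next((s for s, free in self.site_slots.items() if free >= cores), None)
            if site is None:
                deferred.append((neg_prio, order, job_id, cores))  # no capacity now
                continue
            self.site_slots[site] -= cores
            assigned.append((job_id, site))
        for item in deferred:                # requeue jobs that did not fit
            heapq.heappush(self._queue, item)
        return assigned

broker = SimpleBroker({"TIER2_A": 2, "TIER2_B": 1})
broker.submit("simul_001", priority=10)
broker.submit("reco_002", priority=50)
broker.submit("analy_003", priority=20)
broker.submit("simul_004", priority=5)
plan = broker.dispatch()
# Highest-priority jobs fill the available slots; the leftover job stays queued.
```

A real WMS layers data locality, fair shares, and pilot-based late binding on top of this basic matching step.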

Since its earliest days, the Worldwide LHC Computing Grid (WLCG) has relied on GridFTP to transfer data between sites. The announcement that Globus is dropping support of the open source Globus Toolkit (GT), which forms the basis for several FTP clients and servers, created an opportunity to reevaluate the use of FTP. HTTP-TPC, an extension to HTTP compatible with WebDAV, has arisen as a strong contender for an alternative approach. In this paper, we describe the HTTP-TPC protocol itself, along with its current status in the different...

10.1051/epjconf/202024504031 article EN cc-by EPJ Web of Conferences 2020-01-01
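A pull-mode HTTP-TPC transfer can be sketched as a WebDAV COPY request sent to the destination endpoint, which then fetches the file from the source itself. The hosts, paths, and tokens below are placeholders, and the exact header names and performance-marker format should be checked against the protocol documents rather than taken from this sketch:

```http
COPY /store/data/file.root HTTP/1.1
Host: destination.example.org
Source: https://source.example.org/store/data/file.root
Authorization: Bearer <destination-token>
TransferHeaderAuthorization: Bearer <source-token>

HTTP/1.1 202 Accepted
Content-Type: text/perf-marker-stream

Perf Marker
    Timestamp: 1569497460
    Stripe Bytes Transferred: 104857600
End

success: Created
```

Push mode is symmetric: the COPY is sent to the source endpoint with a `Destination:` header, and the source uploads to the destination.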

The Worldwide LHC Computing Grid infrastructure links about 200 participating computing centres affiliated with several partner projects. It is built by integrating heterogeneous computer and storage resources in diverse data centres all over the world, and provides CPU and storage capacity to the experiments to perform data processing and physics analysis. In order to be used by the experiments, these distributed resources should be well described, which implies easy service discovery and a detailed description of service configuration. Currently this information...

10.1088/1742-6596/898/9/092042 article EN Journal of Physics Conference Series 2017-10-01

Pressures from both WLCG VOs and externalities have led to a desire to "simplify" data access and handling for Tier-2 resources across the Grid. This has mostly been imagined in terms of reducing the book-keeping for VOs and the total number of replicas needed at sites. One common direction of motion is increasing the amount of remote-access in jobs, which is also seen as enabling the development of administratively-cheaper site subcategories, reducing manpower and equipment costs. Caching technologies are often seen as a "cheap" way to ameliorate the increased latency (and...

10.1051/epjconf/201921404002 article EN cc-by EPJ Web of Conferences 2019-01-01
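The caching idea can be sketched as a small LRU block cache sitting in front of a remote-read function, so repeated reads of hot data avoid WAN latency. This is a toy illustration, not the design of any specific caching product:

```python
from collections import OrderedDict

class BlockCache:
    """Tiny LRU cache standing in for a site-local cache in front of remote storage."""
    def __init__(self, fetch, capacity=3):
        self.fetch = fetch            # function: block key -> bytes (the "remote" read)
        self.capacity = capacity
        self.store = OrderedDict()
        self.hits = self.misses = 0

    def read(self, key):
        if key in self.store:
            self.hits += 1
            self.store.move_to_end(key)       # mark most-recently-used
            return self.store[key]
        self.misses += 1
        data = self.fetch(key)                # pay the WAN latency once
        self.store[key] = data
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)    # evict least-recently-used block
        return data

remote_reads = []
def remote_fetch(key):
    remote_reads.append(key)                  # record each "remote" access
    return b"payload-" + key.encode()

cache = BlockCache(remote_fetch, capacity=2)
for key in ["a", "b", "a", "a", "c", "b"]:
    cache.read(key)
# Only the four misses ("a", "b", "c", and the re-fetched "b") hit remote storage.
```

The trade-off the paper discusses is exactly this: cache capacity and hit rate against the cost of extra replicas and site manpower.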

This paper describes the deployment of the offline software of the ATLAS experiment at the LHC in containers for use in production workflows such as simulation and reconstruction. To achieve this goal we are using Docker and Singularity, which are both lightweight virtualization technologies that can encapsulate software packages inside complete file systems. The deployment of releases via containers removes the interdependence between the runtime environment needed for job execution and the configuration of the computing nodes at the sites. Docker or Singularity would provide a uniform...

10.1051/epjconf/202024507010 article EN cc-by EPJ Web of Conferences 2020-01-01
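Assuming a hypothetical release image name and job options file, running a release under either technology looks roughly like the commands below; the real registry paths, release numbers, bind mounts, and job commands will differ:

```shell
# Docker: run the containerized release, isolating the runtime from the host.
# Image name and job options are placeholders for illustration.
docker run --rm -v /data:/data atlas/athena:21.0.15 \
    athena.py MyJobOptions.py

# Singularity: execute from an image file, binding CVMFS through from the
# host where it is available.
singularity exec --bind /cvmfs:/cvmfs /images/athena-21.0.15.sif \
    athena.py MyJobOptions.py
```

In both cases the site only needs the container runtime; the full software stack travels with the image.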

We describe an integrated management system built from third-party, open source components and used in operating a large Tier-2 site for particle physics. This system tracks individual assets and records their attributes such as MAC and IP addresses; derives DNS and DHCP configurations from this database; creates each host's installation and re-configuration scripts; monitors the services on each host according to a record of what should be running; and cross-references tickets with asset and per-asset monitoring pages. In addition, scripts...

10.1088/1742-6596/396/4/042039 article EN Journal of Physics Conference Series 2012-12-13
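The "derive configuration from an asset database" idea can be sketched as follows. The asset schema and output formats here are simplified illustrations (an ISC-DHCP-style stanza and BIND-style A records), not the actual schema or tooling of the system described:

```python
def dhcp_host_entries(assets):
    """Render DHCP host stanzas from an asset database (a list of dicts)."""
    stanzas = []
    for asset in sorted(assets, key=lambda a: a["hostname"]):
        stanzas.append(
            "host {hostname} {{\n"
            "  hardware ethernet {mac};\n"
            "  fixed-address {ip};\n"
            "}}".format(**asset)
        )
    return "\n".join(stanzas)

def dns_a_records(assets, domain="example.org"):
    """Derive forward DNS A records from the same database."""
    return [f'{a["hostname"]}.{domain}. IN A {a["ip"]}' for a in assets]

# Hypothetical assets; MACs and IPs are placeholders.
assets = [
    {"hostname": "wn042", "mac": "52:54:00:aa:bb:01", "ip": "10.0.1.42"},
    {"hostname": "se001", "mac": "52:54:00:aa:bb:02", "ip": "10.0.1.10"},
]
dhcp_conf = dhcp_host_entries(assets)
records = dns_a_records(assets)
```

Because every output is regenerated from the one database, DNS, DHCP, installation scripts, and monitoring expectations cannot drift out of step with each other.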

A description of the ATLAS Distributed Computing Operations Shift (ADCoS) duties and the available monitoring tools, along with the operational experience, is presented in this paper.

10.1088/1742-6596/331/7/072045 article EN Journal of Physics Conference Series 2011-12-23

Following the outbreak of the COVID-19 pandemic, the ATLAS experiment considered how it could most efficiently contribute using its distributed computing resources. After considering many suggestions, examining several potential projects, and following the advice of the CERN COVID Task Force, it was decided to engage in the Folding@Home initiative, which provides payloads that perform protein folding simulations. This paper describes how ATLAS made a significant contribution to this project over the summer of 2020.

10.1051/epjconf/202125102003 article EN cc-by EPJ Web of Conferences 2021-01-01

The ATLAS Trigger and Data Acquisition (TDAQ) system has been designed to use more than 2000 CPUs. During the current development stage it is crucial to test it on a number of CPUs of a similar scale. A dedicated farm of this size is difficult to find and can only be made available for short periods. On the other hand, many large farms have recently become available as part of computing grids, leading to the idea of using them to test the TDAQ system of ATLAS. However, the task of adapting it to run on the Grid is not trivial, since it requires full access to the resources and runs with real-time interaction. Moreover...

10.1109/tns.2007.905169 article EN IEEE Transactions on Nuclear Science 2007-10-01

The ATLAS trigger and data acquisition (TDAQ) system has been designed to use more than 2000 CPUs. During the current development stage it is crucial to test it on a number of CPUs of a similar scale. A dedicated farm of this size is difficult to find and can only be made available for short periods. On the other hand, many large farms have recently become available as part of computing grids, leading to the idea of using them to test the TDAQ system of ATLAS. However, the task of adapting it to run on the Grid is not trivial, since it requires full access to the resources and runs with real-time interaction. Moreover...

10.1109/rtc.2007.4382817 article EN 2007-04-01

The ATLAS experiment’s software production and distribution on the grid benefits from a semi-automated infrastructure that provides up-to-date information about software usability and availability through the CVMFS service for all relevant systems. The software development process uses a Continuous Integration pipeline involving testing, validation, packaging and installation steps. For opportunistic sites that cannot access CVMFS, containerized releases are needed. These standalone containers are currently created manually to support...

10.1051/epjconf/202125102017 article EN cc-by EPJ Web of Conferences 2021-01-01

The High Luminosity LHC project at CERN, which is expected to deliver a ten-fold increase in the luminosity of proton-proton collisions over the LHC, will start operation towards the end of this decade and will deliver an unprecedented scientific data volume at the multi-exabyte scale. This vast amount of data has to be processed and analysed, and the corresponding computing facilities must ensure fast and reliable data processing for physics analyses by scientific groups distributed all over the world. The present computing model will not be able to provide the required infrastructure growth, even...

10.1051/epjconf/202125102002 article EN cc-by EPJ Web of Conferences 2021-01-01

The 23rd International Conference on Computing in High Energy and Nuclear Physics (CHEP) took place at the National Palace of Culture, Sofia, Bulgaria, from 9th to 13th July 2018. 575 participants joined the plenary and the eight parallel sessions dedicated to: online computing; offline computing; distributed computing; data handling; software development; machine learning and physics analysis; clouds, virtualisation and containers; and networks and facilities. The conference hosted 35 plenary presentations, 323 parallel presentations and 188 posters.

10.1051/epjconf/201921400001 article EN cc-by EPJ Web of Conferences 2019-01-01

We currently have an opportunity and a need to migrate the community’s data movement protocols, given where GridFTP is in its lifecycle. For several years, there has been ongoing work to develop HTTP to meet our needs. Our small HEP community can leverage the global effort to make HTTPS performant, interoperable, and ubiquitous. This builds on a common interpretation of the WebDAV standards, evolving into HTTP-TPC. Use over the last 12 months has greatly matured the implementations and their integration with the storage software used in HEP. All major...

10.2172/1633093 article EN 2019-11-05

The WLCG project aims to develop, build, and maintain a global computing facility for the storage and analysis of the LHC data. While currently most resources are being provided by classical grid sites, over the last years the experiments have been using more and more public clouds and HPCs, and this trend will certainly continue. The heterogeneity of the resources is not limited to the procurement mode. It also implies a variety of solutions and types of computer architecture, which represent new challenges for the topology and configuration description of these resources...

10.1051/epjconf/202024503029 article EN cc-by EPJ Web of Conferences 2020-01-01
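A resource description covering such heterogeneous provisioning might look roughly like the hypothetical JSON fragment below. The schema is invented for illustration; real information systems define their own schemas and endpoints:

```json
{
  "site": "EXAMPLE-TIER2",
  "resources": [
    {"type": "grid",  "architecture": "x86_64",  "cores": 4000, "queue": "grid-ce.example.org"},
    {"type": "cloud", "architecture": "x86_64",  "cores": 1000, "endpoint": "https://cloud.example.org"},
    {"type": "hpc",   "architecture": "aarch64", "cores": 8000, "edge_service": "hpc-gateway.example.org"}
  ],
  "storage": [
    {"protocol": "https", "endpoint": "https://se.example.org:443/store"}
  ]
}
```

The point is that one description must now carry per-resource type, architecture, and access-mode information that a grid-only topology never needed.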