- Theoretical and Experimental Particle Physics Studies
- High-Energy Particle Collisions Research
- Particle Detector Development and Performance
- Quantum Chromodynamics and Particle Interactions
- Dark Matter and Cosmic Phenomena
- Computational Physics and Python Applications
- Neutrino Physics Research
- Distributed and Parallel Computing Systems
- Cosmology and Gravitation Theories
- Advanced Data Storage Technologies
- Scientific Computing and Data Management
- Radiation Detection and Scintillator Technologies
- Medical Imaging Techniques and Applications
- Advanced Mathematical Theories
- Atomic and Subatomic Physics Research
- Legal and Labor Studies
- Black Holes and Theoretical Physics
- Astrophysics and Cosmic Phenomena
- Parallel Computing and Optimization Techniques
- Big Data Technologies and Applications
- Structural Analysis of Composite Materials
- Human Rights and Immigration
- Power Systems and Technologies
- Diverse Academic and Cultural Studies
- Digital Radiography and Breast Imaging
University of Manchester
2012-2025
The Abdus Salam International Centre for Theoretical Physics (ICTP)
2020-2024
Istituto Nazionale di Fisica Nucleare, Sezione di Trieste
2023-2024
Istituto Nazionale di Fisica Nucleare, Gruppo Collegato di Udine
2023-2024
Heidelberg University
2024
A. Alikhanyan National Laboratory
2024
SR Research (Canada)
2024
Federación Española de Enfermedades Raras
2024
University of Göttingen
2023-2024
Atlas Scientific (United States)
2024
Machine learning has been applied to several problems in particle physics research, beginning with applications to high-level physics analysis in the 1990s and 2000s, followed by an explosion of applications in particle and event identification and reconstruction in the 2010s. In this document we discuss promising future research and development areas for machine learning in particle physics. We detail a roadmap for their implementation, software and hardware resource requirements, collaborative initiatives with the data science community, academia and industry, and training the particle physics community in data science...
Modern experiments collect peta-scale volumes of data and utilize vast, geographically distributed computing infrastructure that serves thousands of scientists around the world. Requirements for rapid, near real-time data processing, fast analysis cycles, and the need to run massive detector simulations to support data analysis place a special premium on efficient use of the available computational resources. A sophisticated Workload Management System (WMS) is needed to coordinate the distribution of processing jobs in such an environment. The...
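To make the brokering idea concrete, here is a minimal, hypothetical sketch of how a workload manager might match jobs to sites with free slots and enough memory; the `Site`, `Job`, and `broker` names are illustrative and not taken from any production WMS.

```python
# Hypothetical sketch of the brokering step of a workload management
# system: each job goes to the candidate site with the most free CPU
# slots that also satisfies the job's memory requirement.
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    free_slots: int
    max_memory_mb: int

@dataclass
class Job:
    job_id: int
    memory_mb: int

def broker(jobs, sites):
    """Assign each job to a suitable site, or None if nothing fits."""
    assignments = {}
    for job in jobs:
        candidates = [s for s in sites
                      if s.free_slots > 0 and s.max_memory_mb >= job.memory_mb]
        if not candidates:
            assignments[job.job_id] = None
            continue
        best = max(candidates, key=lambda s: s.free_slots)
        best.free_slots -= 1            # reserve one slot at the chosen site
        assignments[job.job_id] = best.name
    return assignments

if __name__ == "__main__":
    sites = [Site("SITE_A", free_slots=100, max_memory_mb=4000),
             Site("SITE_B", free_slots=20, max_memory_mb=8000)]
    jobs = [Job(1, 2000), Job(2, 6000), Job(3, 2000)]
    print(broker(jobs, sites))  # {1: 'SITE_A', 2: 'SITE_B', 3: 'SITE_A'}
```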
Since its earliest days, the Worldwide LHC Computing Grid (WLCG) has relied on GridFTP to transfer data between sites. The announcement that Globus is dropping support of the open source Globus Toolkit (GT), which forms the basis for several FTP clients and servers, created an opportunity to reevaluate the use of FTP. HTTP-TPC, an extension of HTTP compatible with WebDAV, has arisen as a strong contender for an alternative approach. In this paper, we describe the HTTP-TPC protocol itself, along with its current status in different...
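As a rough illustration of how an HTTP-TPC transfer is initiated, the hedged sketch below issues a pull-mode WebDAV COPY request to the destination endpoint, with the source given in a `Source` header; the endpoints and bearer token are placeholders, and real deployments add credential delegation and performance-marker handling that are omitted here.

```python
# Hedged sketch of a pull-mode HTTP-TPC request: a WebDAV COPY is sent to
# the destination storage endpoint, which then pulls the file directly
# from the source named in the "Source" header.
# URLs and the bearer token are placeholders, not real endpoints.
import requests

def third_party_copy(source_url, destination_url, token):
    response = requests.request(
        "COPY",
        destination_url,
        headers={
            "Source": source_url,                # where the destination pulls from
            "Authorization": f"Bearer {token}",  # auth details vary per deployment
        },
        timeout=60,
    )
    # Real servers stream multi-line performance markers in the response
    # body during the transfer; here we only report the initial status.
    return response.status_code

if __name__ == "__main__":
    status = third_party_copy(
        "https://source.example.org/dteam/file.root",
        "https://dest.example.org/dteam/file.root",
        token="PLACEHOLDER",
    )
    print("COPY returned HTTP status", status)
```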
The Worldwide LHC Computing Grid infrastructure links about 200 participating computing centres affiliated with several partner projects. It is built by integrating heterogeneous computer and storage resources in diverse data centres all over the world and provides CPU capacity to the experiments to perform data processing and physics analysis. In order to be used by the experiments, these distributed resources should be well described, which implies easy service discovery and a detailed description of the service configuration. Currently this information...
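A toy sketch of what a machine-readable service description and a simple discovery query could look like follows; the schema and site names are invented for illustration and do not reproduce GLUE, CRIC, or any real information system.

```python
# Invented, simplified site/service descriptions plus a discovery query.
# The field names and site names are placeholders for illustration only.
SITES = [
    {
        "name": "EXAMPLE-T2",
        "tier": 2,
        "services": [
            {"type": "storage", "protocol": "https",
             "endpoint": "https://se.example.ac.uk:443"},
            {"type": "compute", "endpoint": "arc-ce.example.ac.uk", "cores": 8000},
        ],
    },
    {
        "name": "EXAMPLE-T1",
        "tier": 1,
        "services": [
            {"type": "storage", "protocol": "root",
             "endpoint": "root://xrootd.example.org:1094"},
        ],
    },
]

def discover(service_type, protocol=None):
    """Yield (site name, service) pairs matching the requested service type."""
    for site in SITES:
        for svc in site["services"]:
            if svc["type"] == service_type and (
                    protocol is None or svc.get("protocol") == protocol):
                yield site["name"], svc

if __name__ == "__main__":
    for site_name, svc in discover("storage", protocol="https"):
        print(site_name, "->", svc["endpoint"])
```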
Pressures from both WLCG VOs and externalities have led to a desire to "simplify" data access and handling for Tier-2 resources across the Grid. This has mostly been imagined in terms of reducing the book-keeping for VOs and the total number of replicas needed across sites. One common direction of motion is to increase the number of remote-access jobs, which is also seen as enabling the development of administratively cheaper site subcategories, reducing manpower and equipment costs. Caching technologies are often seen as a "cheap" way to ameliorate the increased latency (and...
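The latency-hiding effect of caching can be pictured with a toy read-through cache, sketched below under the assumption of a single client and unbounded cache; it is not a model of XCache or any production caching technology.

```python
# Toy read-through cache: local hits avoid the remote fetch entirely,
# which is the latency-hiding effect caching layers aim for.
import time

class ReadThroughCache:
    def __init__(self, fetch_remote):
        self._fetch_remote = fetch_remote   # callable: name -> bytes
        self._store = {}                    # unbounded, for simplicity

    def read(self, name):
        if name not in self._store:         # cache miss: pay remote latency
            self._store[name] = self._fetch_remote(name)
        return self._store[name]            # cache hit: local copy

def slow_remote_fetch(name):
    time.sleep(0.1)                         # stand-in for WAN latency
    return b"contents of " + name.encode()

if __name__ == "__main__":
    cache = ReadThroughCache(slow_remote_fetch)
    for _ in range(3):
        t0 = time.perf_counter()
        cache.read("dataset/file.root")
        print(f"read took {time.perf_counter() - t0:.3f}s")
```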
This paper describes the deployment of the offline software of the ATLAS experiment at the LHC in containers for use in production workflows such as simulation and reconstruction. To achieve this goal we are using Docker and Singularity, which are both lightweight virtualization technologies that can encapsulate software packages inside complete file systems. Distributing releases via containers removes the interdependence between the runtime environment needed for job execution and the configuration of the computing nodes at the sites. Docker or Singularity would provide a uniform...
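Assuming a standalone image is available, one hedged way to picture a containerized payload launch is a thin Python wrapper around `singularity exec` with the host's /cvmfs area bind-mounted; the image path and command below are placeholders, not the experiment's actual production wrappers.

```python
# Hedged sketch: run a payload inside a Singularity container from Python.
# The image path, bind mount, and command are placeholders; real production
# workflows use their own wrappers and release setup scripts.
import subprocess

def run_in_container(image, command, binds=("/cvmfs",)):
    argv = ["singularity", "exec"]
    for bind in binds:
        argv += ["-B", bind]            # make host directories visible inside
    argv += [image] + list(command)
    return subprocess.run(argv, check=False).returncode

if __name__ == "__main__":
    rc = run_in_container(
        "/path/to/standalone_release.sif",          # placeholder image
        ["/bin/sh", "-c", "echo payload would run here"],
    )
    print("container exited with", rc)
```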
We describe an integrated management system built from third-party, open source components that is used in operating a large Tier-2 site for particle physics. This system tracks individual assets and records their attributes, such as MAC and IP addresses; derives DNS and DHCP configurations from this database; creates each host's installation and re-configuration scripts; monitors the services on each host according to a model of what should be running; and cross-references trouble tickets with the asset and per-asset monitoring pages. In addition, scripts...
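As a small illustration of deriving network configuration from an asset database, the sketch below turns hypothetical asset records into dnsmasq-style DHCP host lines and zone-file A records; the record fields and output formats are simplified stand-ins for the third-party components actually used.

```python
# Toy illustration of deriving DHCP and DNS configuration from an asset
# database. The asset records and hostnames are placeholders.
ASSETS = [
    {"hostname": "node001", "mac": "aa:bb:cc:dd:ee:01", "ip": "10.0.0.11"},
    {"hostname": "node002", "mac": "aa:bb:cc:dd:ee:02", "ip": "10.0.0.12"},
]

def dhcp_host_lines(assets):
    # dnsmasq syntax: dhcp-host=<mac>,<ip>,<hostname>
    return [f"dhcp-host={a['mac']},{a['ip']},{a['hostname']}" for a in assets]

def dns_a_records(assets, domain="example.org"):
    # Zone-file style A records derived from the same database.
    return [f"{a['hostname']}.{domain}. IN A {a['ip']}" for a in assets]

if __name__ == "__main__":
    print("\n".join(dhcp_host_lines(ASSETS)))
    print("\n".join(dns_a_records(ASSETS)))
```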
A description of the ATLAS Distributed Computing Operations Shift (ADCoS) duties and the available monitoring tools, along with the operational experience, is presented in this paper.
Following the outbreak of the COVID-19 pandemic, the ATLAS experiment considered how it could most efficiently contribute using its distributed computing resources. After considering many suggestions, examining several potential projects, and following the advice of the CERN COVID Task Force, it was decided to engage in the Folding@Home initiative, which provides payloads that perform protein folding simulations. This paper describes how ATLAS made a significant contribution to this project over the summer of 2020.
The ATLAS Trigger and Data Acquisition (TDAQ) system has been designed to use more than 2000 CPUs. During the current development stage it is crucial to test the system on a number of CPUs of a similar scale. A dedicated farm of this size is difficult to find and can only be made available for short periods. On the other hand, many large farms have recently become available as part of computing grids, leading to the idea of using them for ATLAS TDAQ testing. However, the task of adapting the TDAQ software to run on the Grid is not trivial, since it requires full access to the resources and involves real-time interaction. Moreover...
The ATLAS experiment’s software production and distribution on the grid benefits from a semi-automated infrastructure that provides up-to-date information about usability and availability through the CVMFS service for all relevant systems. The development process uses a Continuous Integration pipeline involving testing, validation, packaging and installation steps. For opportunistic sites that cannot access CVMFS, containerized releases are needed. These standalone containers are currently created manually to support...
The High Luminosity LHC project at CERN, which is expected to deliver a ten-fold increase in the luminosity of proton-proton collisions over the LHC, will start operation towards the end of this decade and will deliver an unprecedented scientific data volume at the multi-exabyte scale. This vast amount of data has to be processed and analysed, and the corresponding computing facilities must ensure fast and reliable data processing for physics analyses by research groups distributed all over the world. The present computing model will not be able to provide the required infrastructure growth, even...
The 23rd International Conference on Computing in High Energy and Nuclear Physics (CHEP) took place at the National Palace of Culture, Sofia, Bulgaria, from 9 to 13 July 2018. 575 participants joined the plenary and the eight parallel sessions dedicated to: online computing; offline computing; distributed computing; data handling; software development; machine learning and physics analysis; clouds, virtualisation and containers; and networks and facilities. The conference hosted 35 plenary presentations, 323 parallel presentations and 188 posters.
We currently have an opportunity and a need to migrate the community’s data movement protocols, given where GridFTP is in its lifecycle. For several years, there has been ongoing work to develop HTTP to meet our needs. Our small HEP community can leverage the global effort to make HTTPS performant, interoperable, and ubiquitous. This work builds on a common interpretation of the WebDAV standards, evolving into HTTP-TPC, which has been used over the last 12 months to greatly mature the implementations and their integration with the storage software used in HEP. All major...
The WLCG project aims to develop, build, and maintain a global computing facility for the storage and analysis of the LHC data. While currently most resources are being provided by classical grid sites, over the last years the experiments have been using more and more public clouds and HPCs, and this trend will certainly continue. The heterogeneity is not limited to the procurement mode: it also implies a variety of solutions and types of computer architecture, which represent new challenges for the topology and configuration description of these resources...