A. Filipčič
- Theoretical and Experimental Particle Physics Studies
- High-Energy Particle Collisions Research
- Particle Detector Development and Performance
- Quantum Chromodynamics and Particle Interactions
- Dark Matter and Cosmic Phenomena
- Computational Physics and Python Applications
- Distributed and Parallel Computing Systems
- Neutrino Physics Research
- Advanced Data Storage Technologies
- Astrophysics and Cosmic Phenomena
- Cosmology and Gravitation Theories
- Scientific Computing and Data Management
- Black Holes and Theoretical Physics
- Particle Accelerators and Free-Electron Lasers
- Parallel Computing and Optimization Techniques
- Atomic and Subatomic Physics Research
- Medical Imaging Techniques and Applications
- Radiation Detection and Scintillator Technologies
- Advanced Mathematical Theories
- Cloud Computing and Resource Management
- Particle Accelerators and Beam Dynamics
- Nuclear Physics Research
- Peer-to-Peer Network Technologies
- Muon and Positron Interactions and Applications
- Advanced Frequency and Time Standards
Jožef Stefan International Postgraduate School
2017-2025
University of Ljubljana
2016-2025
Jožef Stefan Institute
2016-2025
University of Nova Gorica
2008-2024
The University of Adelaide
2019-2023
University of California, Santa Cruz
2023
Johannes Gutenberg University Mainz
2023
University of Toronto
2023
Pierre Auger Observatory
2021
Consejo Nacional de Investigaciones Científicas y Técnicas
2021
Particle physics has an ambitious and broad experimental programme for the coming decades. This programme requires large investments in detector hardware, either to build new facilities and experiments or to upgrade existing ones. Similarly, it requires a commensurate investment in the R&D of software to acquire, manage, process, and analyse the sheer amounts of data to be recorded. In planning for the HL-LHC in particular, it is critical that all of the collaborating stakeholders agree on the software goals and priorities, and that the efforts complement each other. In this spirit, this white paper...
A recent common theme among HEP computing projects is the exploitation of opportunistic resources in order to provide the maximum statistics possible for Monte Carlo simulation. Volunteer computing has been used over the last few years in many other scientific fields, and by CERN itself to run simulations of LHC beams. The ATLAS@Home project was started to allow volunteers to run simulations of collisions in the ATLAS detector. So far, thousands of members of the public have signed up to contribute their spare CPU cycles to ATLAS, and there is the potential for volunteer computing to provide a significant...
The Production and Distributed Analysis (PanDA) system has been successfully used in the ATLAS experiment as a data-driven workload management system. PanDA has proven capable of operating at Large Hadron Collider data-processing scale over the last decade, including the Run 1 and Run 2 data-taking periods. PanDA was originally designed to be weakly coupled with the WLCG resources. Lately, this is revealing difficulties in optimally integrating and exploiting new resource types such as HPCs and preemptible cloud resources with instant spin-up, and workflows...
PanDA, the ATLAS workload management and distribution system for production and analysis jobs on EGEE and OSG clusters, is based on pilot jobs to increase the throughput and stability of job execution on the grid. The ARC middleware uses a specific approach which tightly connects job requirements with cluster capabilities such as resource usage, software availability and caching of input files; the pilot concept renders these features useless. arcControlTower's job submission merges the benefits and advantages of both: it takes the job payload from the PanDA server and submits it to NorduGrid clusters...
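The pull-based submission pattern described above can be illustrated with a minimal sketch. The class and field names below are hypothetical, not the real arcControlTower API: a broker pulls job payloads from a central queue and submits each to the cluster whose capabilities and cached input files best match the job.

```python
from dataclasses import dataclass, field

@dataclass
class JobPayload:
    job_id: int
    required_software: str
    input_files: tuple

@dataclass
class Cluster:
    name: str
    software: set
    cached_files: set = field(default_factory=set)
    submitted: list = field(default_factory=list)

class ControlTower:
    """Pull payloads from a central server queue and submit each to a
    cluster whose advertised capabilities (software availability, cached
    input files) match the job, instead of sending generic pilots."""

    def __init__(self, server_queue, clusters):
        self.queue = server_queue
        self.clusters = clusters

    def run_once(self):
        while self.queue:
            job = self.queue.pop(0)  # pull the payload from the server
            candidates = [c for c in self.clusters
                          if job.required_software in c.software]
            if not candidates:
                continue  # no matching cluster; a real system would retry
            # Prefer the cluster that already caches the most input files.
            best = max(candidates,
                       key=lambda c: len(c.cached_files & set(job.input_files)))
            best.submitted.append(job.job_id)
            best.cached_files.update(job.input_files)

# Demo: two clusters run the same software, but one already caches "f1".
queue = [JobPayload(1, "atlas-sim", ("f1",)),
         JobPayload(2, "atlas-sim", ("f1", "f2"))]
clusters = [Cluster("ndgf-a", {"atlas-sim"}),
            Cluster("ndgf-b", {"atlas-sim"}, cached_files={"f1"})]
ControlTower(queue, clusters).run_once()
```

Both jobs land on the cluster with the warm cache, which is exactly the kind of capability matching a generic pilot cannot express.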
While current grid middleware implementations are quite advanced in terms of connecting jobs to resources, their client tools are generally minimal, and features for managing large sets of jobs are left for the user to implement. The ARC Control Tower (aCT) is a very flexible job management framework that can be run on anything from a single user's laptop to a multi-server distributed setup. aCT was originally designed to enable ATLAS jobs to be submitted to the ARC CE. However, with a recent redesign in which the ATLAS-specific elements are clearly separated from the generic parts,...
ATLAS Computing Management has identified the migration of all computing resources to Harvester, PanDA's new workload submission engine, as a critical milestone for LHC Run 3 and Run 4. This contribution will focus on the Grid migration to Harvester. We have built a redundant architecture based on CERN IT's common offerings (e.g. OpenStack Virtual Machines and Database On Demand) to run the necessary Harvester and HTCondor services, capable of sustaining a load of O(1M) workers per day. We have reviewed the setup region by region and moved as much as possible away from blind...
With ever-greater computing needs and fixed budgets, big scientific experiments are turning to opportunistic resources as a means to add much-needed extra computing power. These resources can be very different in design from those that comprise the Grid computing of most experiments, and therefore exploiting them requires a change of strategy for the experiment. They may be highly restrictive in what can be run or in their connections to the outside world, or may tolerate usage only on the condition that tasks can be terminated without warning. The Advanced Resource Connector Computing...
After the successful first run of the LHC, data taking is scheduled to restart in Summer 2015, with experimental conditions leading to increased data volumes and event complexity. In order to process the data generated in such a scenario and exploit the multicore architectures of current CPUs, the LHC experiments have developed parallelized software for data reconstruction and simulation. However, a good fraction of their computing effort is still expected to be executed as single-core tasks. Therefore, jobs with diverse resource requirements will...
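The scheduling problem this mix of jobs creates can be illustrated with a toy sketch (a hypothetical helper, not any experiment's actual scheduler): packing jobs with diverse core requirements onto multicore worker nodes with a first-fit heuristic.

```python
def first_fit(jobs, nodes):
    """Place each job (given as a core count) on the first node with enough
    free cores; jobs that fit nowhere stay unscheduled. Returns
    (placements as (cores, node_index) pairs, unscheduled jobs)."""
    free = list(nodes)            # free cores per node
    placed, unscheduled = [], []
    for cores in jobs:
        for i, f in enumerate(free):
            if f >= cores:
                free[i] -= cores
                placed.append((cores, i))
                break
        else:
            unscheduled.append(cores)
    return placed, unscheduled

# Mixed single-core and 8-core jobs on two 8-core nodes: once single-core
# jobs fragment a node, a later 8-core job no longer fits anywhere.
placed, unscheduled = first_fit([8, 1, 1, 8], [8, 8])
```

The fragmentation visible in the demo is why mixed single-core and multicore workloads need smarter brokering than naive first-fit.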
The CERN ATLAS experiment's grid workflow system routinely manages 250 to 500 thousand concurrently running production and analysis jobs to process simulation and detector data. In total, more than 370 PB of data is distributed over 150 sites in the WLCG. At this scale, small improvements in software and computing performance and workflows can lead to significant resource usage gains. ATLAS is reviewing, together with IT experts, several typical processing workloads for potential improvements in terms of memory and CPU usage, and disk and network I/O. All...
The prompt reconstruction of the data recorded from the Large Hadron Collider (LHC) detectors has always been addressed by dedicated resources at the CERN Tier-0. Such workloads come in spikes due to the nature of the operation of the accelerator, and for special high-load occasions the experiments have commissioned methods to distribute (spill over) a fraction of the load to sites outside CERN. The present work demonstrates a new way of supporting the Tier-0 environment by elastically provisioning resources for such spilled-over workflows onto the Piz Daint...
ATLAS@Home is a volunteer computing project which allows the public to contribute computing power to the ATLAS experiment through their home or office computers. The project has grown continuously since its creation in mid-2014 and now counts almost 100,000 volunteers. The combined volunteers' resources make up a sizeable fraction of the overall ATLAS simulation. This paper takes stock of the experience gained so far and describes the next steps in the evolution of the project. These improvements include running natively on Linux to ease deployment, for example...
Since 2010, the Production and Distributed Analysis system (PanDA) for the ATLAS experiment at the Large Hadron Collider has seen big changes to accommodate new types of distributed computing resources: clouds, HPCs, volunteer computers and other external resources. While PanDA was originally designed for the fairly homogeneous resources available through the Worldwide LHC Computing Grid, the new resources are heterogeneous and come at diverse scales and with diverse interfaces. Up to a fifth of such resources require special techniques for integration into PanDA. In this...
Distributed computing resources available for high-energy physics research are becoming less dedicated to one type of workflow, and researchers' workloads are increasingly exploiting modern computing technologies such as parallelism. The current pilot job management model used by many experiments relies on static resource allocation and cannot easily adapt to these changes. ATLAS in the Nordic countries and some other places enables a more flexible system based on dynamic resource allocation. Rather than a fixed set of resources managed centrally, the model allows resources to be requested...
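The dynamic-allocation idea can be sketched minimally (the function and parameter names are hypothetical, not the actual ATLAS code): instead of maintaining a fixed pilot pool, the request sent to a cluster is sized from what is actually queued, per job flavour (here, core count).

```python
from collections import Counter

def workers_to_request(queued_jobs, running, limits):
    """Request as many new workers per flavour as there are queued jobs,
    capped by a per-flavour limit and reduced by workers already running."""
    demand = Counter(job["cores"] for job in queued_jobs)
    return {flavour: max(0, min(n, limits.get(flavour, 0)) - running.get(flavour, 0))
            for flavour, n in demand.items()}

# One single-core and two 8-core jobs queued; one 8-core worker already runs.
request = workers_to_request(
    [{"cores": 1}, {"cores": 8}, {"cores": 8}],
    running={8: 1},
    limits={1: 10, 8: 5},
)
```

When the queue drains, the demand counter goes to zero and no new workers are requested, which is the key difference from a statically sized pilot pool.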
The ARC Compute Element is becoming more popular in the WLCG and EGI infrastructures, being used not only in the Grid context but also as an interface to HPC and Cloud resources. It strongly relies on community contributions, which helps it keep up with changes in the distributed computing landscape. Future plans are closely linked to the needs of LHC computing, whichever shape it may take. There are numerous examples of usage by smaller research communities through national infrastructure projects in different countries. As such,...
In the past few years, the increased luminosity of the LHC, changes in the Linux kernel and a move to a 64-bit architecture have affected the memory usage of ATLAS jobs, and the workload management system had to be adapted to pass memory parameters to batch systems more flexibly, which previously wasn't a necessity. This paper describes the steps required to add the capability to better handle memory requirements. This included a review of how each component of the job definition and parametrization is mapped to the other components, and of the changes applied to make the whole submission chain work. These changes go from the task definition all the way...
ATLAS is one of the four experiments collecting data from proton-proton collisions at the Large Hadron Collider. The offline processing and storage of the data is handled by a custom heterogeneous distributed computing system. This paper summarizes some of the challenges and operations-driven solutions introduced in...