A. Filipčič

ORCID: 0000-0001-5671-1555
Research Areas
  • Theoretical and Experimental Particle Physics Studies
  • High-Energy Particle Collisions Research
  • Particle Detector Development and Performance
  • Quantum Chromodynamics and Particle Interactions
  • Dark Matter and Cosmic Phenomena
  • Computational Physics and Python Applications
  • Distributed and Parallel Computing Systems
  • Neutrino Physics Research
  • Advanced Data Storage Technologies
  • Astrophysics and Cosmic Phenomena
  • Cosmology and Gravitation Theories
  • Scientific Computing and Data Management
  • Black Holes and Theoretical Physics
  • Particle Accelerators and Free-Electron Lasers
  • Parallel Computing and Optimization Techniques
  • Atomic and Subatomic Physics Research
  • Medical Imaging Techniques and Applications
  • Radiation Detection and Scintillator Technologies
  • Advanced Mathematical Theories
  • Cloud Computing and Resource Management
  • Particle Accelerators and Beam Dynamics
  • Nuclear Physics Research Studies
  • Peer-to-Peer Network Technologies
  • Muon and positron interactions and applications
  • Advanced Frequency and Time Standards

Jožef Stefan International Postgraduate School
2017-2025

University of Ljubljana
2016-2025

Jožef Stefan Institute
2016-2025

University of Nova Gorica
2008-2024

The University of Adelaide
2019-2023

University of California, Santa Cruz
2023

Johannes Gutenberg University Mainz
2023

University of Toronto
2023

Pierre Auger Observatory
2021

Consejo Nacional de Investigaciones Científicas y Técnicas
2021

Particle physics has an ambitious and broad experimental programme for the coming decades. This programme requires large investments in detector hardware, either to build new facilities and experiments, or to upgrade existing ones. Similarly, it requires commensurate investment in the R&D of software to acquire, manage, process, and analyse the sheer amounts of data to be recorded. In planning for the HL-LHC in particular, it is critical that all of the collaborating stakeholders agree on the software goals and priorities, and that the efforts complement each other. In this spirit, this white paper...

10.1007/s41781-018-0018-8 article EN cc-by Computing and Software for Big Science 2019-03-20

A recent common theme among HEP computing projects is the exploitation of opportunistic resources in order to provide the maximum statistics possible for Monte Carlo simulation. Volunteer computing has been used over the last few years in many other scientific fields and by CERN itself to run simulations of the LHC beams. The ATLAS@Home project was started to allow volunteers to run simulations of collisions in the ATLAS detector. So far thousands of members of the public have signed up to contribute their spare CPU cycles to ATLAS, and there is potential for volunteer computing to provide a significant...

10.1088/1742-6596/664/2/022009 article EN Journal of Physics Conference Series 2015-12-23

The Production and Distributed Analysis (PanDA) system has been successfully used in the ATLAS experiment as a data-driven workload management system. PanDA has proven to be capable of operating at Large Hadron Collider data processing scale over the last decade, including the Run 1 and Run 2 data taking periods. PanDA was originally designed to be weakly coupled with the WLCG processing resources. Lately, however, it is revealing difficulties in optimally integrating and exploiting new resource types such as HPCs and preemptible cloud resources with instant spin-up, and workflows...

10.1051/epjconf/201921403030 article EN cc-by EPJ Web of Conferences 2019-01-01

PanDA, the ATLAS workload management and distribution system for production and analysis jobs on EGEE and OSG clusters, is based on pilot jobs to increase the throughput and stability of job execution on the grid. The ARC middleware uses a different approach, which tightly connects job requirements with cluster capabilities like resource usage, software availability and caching of input files. The pilot concept renders these features useless. The arcControlTower job submission system merges the benefits and advantages of both: it takes the job payload from the PanDA server and submits it to NorduGrid ARC clusters...

10.1088/1742-6596/331/7/072013 article EN Journal of Physics Conference Series 2011-12-23
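The push model described above can be sketched in a few lines: a central service fetches a real job payload from the workload management server and brokers it to a cluster whose advertised capabilities match, so the middleware's data-aware, early-binding brokering still applies. This is a minimal pedagogical sketch; `fetch_panda_payload`, `ArcCluster` and `broker` are hypothetical names, not the real aCT, ARC or PanDA APIs.

```python
# Toy sketch of the arcControlTower "push" idea: pull a real payload from
# the PanDA server and submit it directly to an ARC cluster, keeping
# ARC-style early binding of job requirements to cluster capabilities.
# All names here are illustrative, not the real aCT/ARC/PanDA interfaces.

def fetch_panda_payload(queue):
    # Stand-in for a call to the PanDA server's job-dispatch endpoint.
    return {"queue": queue, "memory_mb": 2000, "inputs": {"EVNT.root"}}

class ArcCluster:
    def __init__(self, name, max_memory_mb, cached_files):
        self.name = name
        self.max_memory_mb = max_memory_mb
        self.cached_files = set(cached_files)

    def matches(self, payload):
        # Early binding: requirements are checked before submission,
        # instead of a pilot discovering them on the worker node.
        return payload["memory_mb"] <= self.max_memory_mb

def broker(payload, clusters):
    suitable = [c for c in clusters if c.matches(payload)]
    if not suitable:
        return None
    # Prefer clusters that already cache the job's input files.
    suitable.sort(key=lambda c: len(payload["inputs"] - c.cached_files))
    return suitable[0].name

clusters = [
    ArcCluster("arc1.example.org", max_memory_mb=1000, cached_files=[]),
    ArcCluster("arc2.example.org", max_memory_mb=4000, cached_files=["EVNT.root"]),
]
job = fetch_panda_payload("production")
print(broker(job, clusters))  # arc2.example.org: enough memory, input cached
```

The contrast with the pilot model is that `matches` runs centrally before any job lands on a worker node, which is what makes ARC's capability matching useful again.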

While current grid middleware implementations are quite advanced in terms of connecting jobs to resources, their client tools are generally minimal, and features for managing large sets of jobs are left to the user to implement. The ARC Control Tower (aCT) is a very flexible job management framework that can be run on anything from a single user's laptop to a multi-server distributed setup. aCT was originally designed to enable ATLAS jobs to be submitted to the ARC CE. However, with a recent redesign in which the experiment-specific elements are clearly separated from the generic parts,...

10.1088/1742-6596/664/6/062042 article EN Journal of Physics Conference Series 2015-12-23

ATLAS Computing Management has identified the migration of all computing resources to Harvester, PanDA's new workload submission engine, as a critical milestone for LHC Run 3 and Run 4. This contribution focuses on the migration of the Grid resources to Harvester. We have built a redundant architecture based on CERN IT's common offerings (e.g. OpenStack Virtual Machines and Database on Demand) to run the necessary Harvester and HTCondor services, capable of sustaining a load of O(1M) workers per day. We have reviewed the configuration region by region and moved as much as possible away from blind...

10.1051/epjconf/202024503010 article EN cc-by EPJ Web of Conferences 2020-01-01

With ever-greater computing needs and fixed budgets, big scientific experiments are turning to opportunistic resources as a means to add much-needed extra computing power. These resources can be very different in design from those that comprise the Grid computing of most experiments, and therefore exploiting them requires a change of strategy for the experiment. They may be highly restrictive in what can be run or in connections to the outside world, or tolerate usage only on the condition that tasks can be terminated without warning. The Advanced Resource Connector Computing...

10.1088/1742-6596/898/5/052010 article EN Journal of Physics Conference Series 2017-10-01

After the successful first run of the LHC, data taking is scheduled to restart in Summer 2015 with experimental conditions leading to increased data volumes and event complexity. In order to process the data generated in such a scenario and exploit the multicore architectures of current CPUs, the LHC experiments have developed parallelized software for data reconstruction and simulation. However, a good fraction of their computing effort is still expected to be executed as single-core tasks. Therefore, jobs with diverse resource requirements will...

10.1088/1742-6596/664/6/062016 article EN Journal of Physics Conference Series 2015-12-23
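The scheduling problem this abstract alludes to — packing a mix of single-core and multicore jobs onto a fixed pool of CPU slots — can be illustrated with a greedy first-fit pass. This is a purely pedagogical sketch, not how any real LHC batch system is implemented; the job names are invented.

```python
# Greedy first-fit scheduling of jobs with diverse core requirements onto
# a node with a fixed number of CPU slots. Illustrative only.

def schedule(jobs, total_cores):
    """Place each (name, cores) job if enough free slots remain;
    defer it otherwise."""
    free = total_cores
    placed, deferred = [], []
    for name, cores in jobs:
        if cores <= free:
            placed.append(name)
            free -= cores
        else:
            deferred.append(name)
    return placed, deferred

jobs = [
    ("reco_mc", 8),    # multicore reconstruction
    ("sim_1", 1),      # single-core simulation
    ("sim_2", 1),
    ("reco_data", 8),
    ("sim_3", 1),
]
placed, deferred = schedule(jobs, total_cores=12)
print(placed)    # ['reco_mc', 'sim_1', 'sim_2', 'sim_3']
print(deferred)  # ['reco_data']
```

Even this toy version shows the fragmentation issue: the single-core jobs fill the slots left over by one 8-core job, and the second 8-core job has to wait.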

The CERN ATLAS experiment grid workflow system routinely manages 250 to 500 thousand concurrently running production and analysis jobs to process simulation and detector data. In total, more than 370 PB of data is distributed over more than 150 sites in the WLCG. At this scale, small improvements in the software and computing performance of the workflows can lead to significant resource usage gains. ATLAS is reviewing, together with CERN IT experts, several typical processing workloads for potential improvements in terms of memory and CPU usage, disk and network I/O. All...

10.1051/epjconf/201921403021 article EN cc-by EPJ Web of Conferences 2019-01-01

The prompt reconstruction of the data recorded from the Large Hadron Collider (LHC) detectors has always been addressed by dedicated resources at the CERN Tier-0. Such workloads come in spikes, due to the nature of the operation of the accelerator, and on special high-load occasions the experiments have commissioned methods to distribute (spill-over) a fraction of the load to sites outside CERN. The present work demonstrates a new way of supporting the Tier-0 environment by elastically provisioning resources for such spilled-over workflows onto the Piz Daint...

10.1007/s41781-020-00052-w article EN cc-by Computing and Software for Big Science 2021-02-08

ATLAS@Home is a volunteer computing project which allows the public to contribute computing power to the ATLAS experiment through their home or office computers. The project has grown continuously since its creation in mid-2014 and now counts almost 100,000 volunteers. The combined volunteers' resources make up a sizeable fraction of the overall ATLAS simulation. This paper takes stock of the experience gained so far and describes the next steps in the evolution of the project. These improvements include running natively on Linux to ease deployment, for example...

10.1088/1742-6596/898/5/052009 article EN Journal of Physics Conference Series 2017-10-01

Since 2010 the Production and Distributed Analysis system (PanDA) for the ATLAS experiment at the Large Hadron Collider has seen big changes to accommodate new types of distributed computing resources: clouds, HPCs, volunteer computers and other external resources. While PanDA was originally designed for fairly homogeneous resources available through the Worldwide LHC Computing Grid, the new resources are heterogeneous, at diverse scales and with diverse interfaces. Up to a fifth of such resources require special techniques for integration into PanDA. In this...

10.1051/epjconf/201921403047 article EN cc-by EPJ Web of Conferences 2019-01-01

Distributed computing resources available for high-energy physics research are becoming less dedicated to one type of workflow, and researchers' workloads are increasingly exploiting modern computing technologies such as parallelism. The current pilot job management model used by many experiments relies on static resources and cannot easily adapt to these changes. ATLAS computing in the Nordic countries and some other places enables a more flexible job management system based on dynamic resource allocation. Rather than a fixed set of resources managed centrally, the model allows resources to be requested...

10.1088/1742-6596/664/6/062015 article EN Journal of Physics Conference Series 2015-12-23
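The contrast between static pilots and dynamic allocation can be shown with a short sketch: instead of filling a fixed number of identical pilot slots, resource requests are sized from the actual job queue. The field names and the grouping by resource "shape" are illustrative assumptions, not the real system's data model.

```python
# Sketch of dynamic resource allocation: derive sized resource requests
# from the queued jobs themselves, rather than keeping a fixed set of
# one-size-fits-all pilot slots. Illustrative only.
from collections import Counter

def dynamic_requests(queued_jobs):
    # Group the queue by resource shape; each distinct shape becomes one
    # aggregated request with a count.
    shapes = Counter((j["cores"], j["memory_mb"]) for j in queued_jobs)
    return [
        {"cores": c, "memory_mb": m, "count": n}
        for (c, m), n in shapes.items()
    ]

queue = [
    {"cores": 1, "memory_mb": 2000},   # single-core simulation
    {"cores": 8, "memory_mb": 16000},  # multicore reconstruction
    {"cores": 1, "memory_mb": 2000},
]
print(dynamic_requests(queue))
# [{'cores': 1, 'memory_mb': 2000, 'count': 2},
#  {'cores': 8, 'memory_mb': 16000, 'count': 1}]
```

A static pilot model would instead submit N identical pilots and hope the mix of waiting jobs happens to fit them; here the requests mirror the queue by construction.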

The ARC Compute Element is becoming more popular in the WLCG and EGI infrastructures, being used not only in the Grid context, but also as an interface to HPC and Cloud resources. It strongly relies on community contributions, which helps it keep up with changes in the distributed computing landscape. Future plans are closely linked to the needs of LHC computing, whichever shape it may take. There are numerous examples of its usage by smaller research communities through national infrastructure projects in different countries. As such,...

10.20537/2076-7633-2015-7-3-407-414 article EN cc-by-nd Computer Research and Modeling 2015-06-01

In the past few years the increased luminosity of the LHC, changes in the Linux kernel and a move to 64-bit architecture have affected the memory usage of ATLAS jobs, and the workload management system had to be adapted to pass memory parameters to batch systems more flexibly, which previously wasn't a necessity. This paper describes the steps required to add the capability to better handle memory requirements. This included a review of how each component of the job definition and parametrization is mapped to the other components, and the changes applied to make the submission chain work. These changes go from the task definition all the way...

10.1088/1742-6596/898/5/052004 article EN Journal of Physics Conference Series 2017-10-01
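The mapping problem this paper describes — a single task-level memory requirement translated into each batch system's own submission parameters — can be illustrated with a toy function. The directive spellings below are the common SLURM and HTCondor ones, but the two-system table and the per-core vs per-job handling are a deliberate simplification of what the real submission chain does.

```python
# Toy illustration of mapping one task-level memory requirement onto
# different batch systems' submission parameters. Simplified sketch.

def memory_directive(system, memory_mb, cores):
    if system == "slurm":
        # SLURM commonly specifies memory per allocated CPU.
        return f"--mem-per-cpu={memory_mb // cores}M"
    if system == "htcondor":
        # HTCondor's request_memory is per job, in MB by default.
        return f"request_memory = {memory_mb}"
    raise ValueError(f"unknown batch system: {system}")

print(memory_directive("slurm", memory_mb=4000, cores=2))     # --mem-per-cpu=2000M
print(memory_directive("htcondor", memory_mb=4000, cores=2))  # request_memory = 4000
```

The per-core vs per-job mismatch is exactly the kind of detail that makes such a mapping non-trivial once multicore jobs enter the picture.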

ATLAS is one of the four experiments collecting data from proton-proton collisions at the Large Hadron Collider. The offline processing and storage of the data is handled by a custom heterogeneous distributed computing system. This paper summarizes some of the challenges and operations-driven solutions introduced in...

10.1051/epjconf/201921403049 article EN cc-by EPJ Web of Conferences 2019-01-01