J. Schovancova

ORCID: 0000-0003-0016-5246
Research Areas
  • Particle physics theoretical and experimental studies
  • High-Energy Particle Collisions Research
  • Particle Detector Development and Performance
  • Quantum Chromodynamics and Particle Interactions
  • Dark Matter and Cosmic Phenomena
  • Computational Physics and Python Applications
  • Distributed and Parallel Computing Systems
  • Neutrino Physics Research
  • Cosmology and Gravitation Theories
  • Advanced Data Storage Technologies
  • Scientific Computing and Data Management
  • Medical Imaging Techniques and Applications
  • Radiation Detection and Scintillator Technologies
  • Parallel Computing and Optimization Techniques
  • Advanced mathematical theories
  • Astrophysics and Cosmic Phenomena
  • Cloud Computing and Resource Management
  • Black Holes and Theoretical Physics
  • Atomic and Subatomic Physics Research
  • Software System Performance and Reliability
  • Muon and positron interactions and applications
  • Big Data Technologies and Applications
  • Structural Analysis of Composite Materials
  • Distributed systems and fault tolerance
  • Statistical Distribution Estimation and Applications

Affiliations

European Organization for Nuclear Research
2012-2025

Northern Illinois University
2020-2024

A. Alikhanyan National Laboratory
2024

Institute of High Energy Physics
2024

Atlas Scientific (United States)
2024

The University of Adelaide
2017-2023

University of Geneva
2023

Brandeis University
2020

The University of Texas at Arlington
2017

Brookhaven National Laboratory
2014-2015

Publications

This paper presents a summary of beam-induced backgrounds observed in the ATLAS detector and discusses methods to tag and remove background-contaminated events from data. Trigger-rate-based monitoring of beam-related backgrounds is presented. Correlations with machine conditions, such as the residual pressure in the beam-pipe, are discussed. Results from dedicated beam-background simulations are shown, and their qualitative agreement with data is evaluated. Data taken during the passage of unpaired, i.e. non-colliding, proton bunches are used...

10.1088/1748-0221/8/07/p07004 article EN Journal of Instrumentation 2013-07-17
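
The unpaired-bunch technique in this abstract lends itself to a toy illustration. The sketch below is a minimal, hypothetical event filter, not the ATLAS selection: the BCID lists, the Event structure, and the 5 ns timing window are all assumptions made for illustration.

```python
# Toy sketch of tagging beam-background events, in the spirit of the study
# above. All bunch-group definitions and the Event structure are hypothetical.

from dataclasses import dataclass

# Assumed bunch-group definition: which bunch-crossing IDs (BCIDs)
# correspond to colliding and to unpaired bunches in a given LHC fill.
COLLIDING_BCIDS = {1, 101, 201, 301}
UNPAIRED_BCIDS = {892, 1786}   # bunches present in only one beam: no collisions

@dataclass
class Event:
    bcid: int              # bunch-crossing ID at which the event was triggered
    jet_timing_ns: float   # leading-jet time w.r.t. the nominal crossing

def is_unpaired_bunch_event(evt: Event) -> bool:
    """Events from unpaired bunches contain beam background by construction."""
    return evt.bcid in UNPAIRED_BCIDS

def looks_like_beam_background(evt: Event, window_ns: float = 5.0) -> bool:
    """Collision-bunch events with badly out-of-time deposits are tagged as
    background-contaminated and can be removed from analyses."""
    return evt.bcid in COLLIDING_BCIDS and abs(evt.jet_timing_ns) > window_ns

events = [Event(101, 0.4), Event(892, -11.2), Event(201, 9.8)]
background = [e for e in events
              if is_unpaired_bunch_event(e) or looks_like_beam_background(e)]
print(f"{len(background)} of {len(events)} events tagged as beam background")
```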

The computing strategy document for the HL-LHC identifies storage as one of the main WLCG challenges a decade from now. Under the naive assumption of applying today's model, the ATLAS and CMS experiments would need an order of magnitude more resources than could realistically be provided by the funding agencies at the same cost as today. The evolution of the facilities and the way they are organized and consolidated will play a key role in how this possible shortage is addressed. In this contribution we describe the architecture of a data lake, intended to serve geographically...

10.1051/epjconf/201921404024 article EN cc-by EPJ Web of Conferences 2019-01-01
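
The data-lake abstract suggests a simple illustration of replica selection in a geographically distributed store. The sketch below is hypothetical: the site names, QoS tiers, latencies, and catalogue layout are assumptions for illustration, not the architecture described in the paper.

```python
# Toy sketch of one data-lake idea: a single logical namespace backed by
# geographically distributed storage, with reads served from the "best"
# replica. Sites, latencies, and the catalogue are illustrative only.

QOS = {"fast-disk": 1, "disk": 2, "tape": 3}  # assumed quality-of-service tiers

# Hypothetical replica catalogue: logical file name -> (site, qos, rtt_ms)
CATALOGUE = {
    "lfn:/atlas/data/run123.root": [
        ("CERN", "disk", 5.0),
        ("NIKHEF", "fast-disk", 18.0),
        ("BNL", "tape", 90.0),
    ],
}

def pick_replica(lfn: str) -> tuple[str, str, float]:
    """Prefer the better QoS tier, then the lower network round-trip time."""
    return min(CATALOGUE[lfn], key=lambda r: (QOS[r[1]], r[2]))

site, qos, rtt = pick_replica("lfn:/atlas/data/run123.root")
print(f"read from {site} ({qos}, {rtt} ms)")
```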

The Experiment Dashboard system provides common solutions for monitoring job processing, data transfers and site/service usability. Over the last seven years, it has proved to play a crucial role in the monitoring of LHC computing activities and of distributed sites and services. It has been one of the key elements during the commissioning of the distributed computing systems of the LHC experiments.

10.1088/1742-6596/396/3/032093 article EN Journal of Physics Conference Series 2012-12-13

This paper covers in detail a variety of accounting tools used to monitor the utilisation of the available computational and storage resources within ATLAS Distributed Computing during the first three years of Large Hadron Collider data taking. The Experiment Dashboard provides a set of common tools that combine monitoring information originating from many different sources, either generic or ATLAS-specific. These quality, scalable solutions are flexible enough to support the constantly evolving requirements of the user community.

10.1088/1742-6596/513/6/062024 article EN Journal of Physics Conference Series 2014-06-11

For several years the LHC experiments have relied on the WLCG Service Availability Monitoring framework (SAM) to run functional tests of their distributed computing systems. The SAM tests have become an essential tool to measure the reliability of the Grid infrastructure and to ensure reliable computing operations, both for the sites and for the experiments. Recently the old system was replaced with a completely new system based on Nagios and ActiveMQ, to better support the transition to EGI and its more distributed operations model, and to implement scalability and functionality enhancements. This required all...

10.1088/1742-6596/396/3/032100 article EN Journal of Physics Conference Series 2012-12-13
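
A SAM-style functional test can be pictured as a Nagios-compatible probe. The following sketch assumes a hypothetical endpoint and health check; only the exit-code convention (0 OK, 1 WARNING, 2 CRITICAL) is the standard Nagios one.

```python
# Minimal sketch of a Nagios-style probe such as those run by SAM: perform a
# simple check against a site service and exit with standard status codes.
# The endpoint and the check itself are illustrative assumptions.

import sys
import urllib.request

OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3  # Nagios plugin conventions

def probe(endpoint: str, timeout_s: float = 10.0) -> int:
    """Return a Nagios status code for a basic availability check."""
    try:
        with urllib.request.urlopen(endpoint, timeout=timeout_s) as resp:
            if resp.status == 200:
                print(f"OK - {endpoint} reachable")
                return OK
            print(f"WARNING - {endpoint} returned HTTP {resp.status}")
            return WARNING
    except Exception as exc:
        print(f"CRITICAL - {endpoint} unreachable: {exc}")
        return CRITICAL

if __name__ == "__main__":
    # In the real system the result would also be published (e.g. via
    # ActiveMQ) so that availability can be aggregated per site.
    sys.exit(probe("https://example-site.cern.ch/health"))  # hypothetical URL
```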

Throughout the first half of LHC Run 2, ATLAS cloud computing has undergone a period of consolidation, characterized by building upon previously established systems, with the aim of reducing operational effort, improving robustness, and reaching higher scale. This paper describes the current state of ATLAS cloud computing. Cloud activities are converging on a common contextualization approach for virtual machines, and cloud resources are sharing monitoring and service discovery components. We describe the integration of Vacuum resources,...

10.1088/1742-6596/898/5/052008 article EN Journal of Physics Conference Series 2017-10-01

Based on the observation of low average CPU utilisation of the several hundred file storage servers in the EOS system at CERN, the Batch on EOS Extra Resources (BEER) project developed an approach to also utilise these resources for batch processing. Initial proof-of-concept tests showed little interference between the batch and storage services on a node. Subsequently a production model was implemented, which has been deployed as part of the CERN batch service. The implementation and test results will be presented. The potential additional capacity for the Tier-0 centre is of the order...

10.1051/epjconf/201921408025 article EN cc-by EPJ Web of Conferences 2019-01-01
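
The BEER approach can be illustrated with a toy scheduler that only launches batch work when a storage node is nearly idle. The threshold, the use of `nice`, and the payload below are illustrative assumptions; the production system relies on proper resource partitioning rather than this simple check.

```python
# Toy sketch of the BEER idea: storage servers with low average CPU
# utilisation also run batch payloads, de-prioritised so that the storage
# service keeps priority. Thresholds and commands are illustrative.

import os
import subprocess

CPU_IDLE_THRESHOLD = 0.25  # assumed: start batch work only below 25% load/core

def normalised_load() -> float:
    """1-minute load average divided by the number of cores."""
    return os.getloadavg()[0] / os.cpu_count()

def maybe_run_batch_payload(cmd: list[str]) -> None:
    if normalised_load() < CPU_IDLE_THRESHOLD:
        # Lowest CPU priority (nice 19) so the storage daemons win any
        # contention; in production, cgroups would bound memory and I/O too.
        subprocess.run(["nice", "-n", "19", *cmd], check=True)
    else:
        print("node busy serving storage traffic; skipping batch payload")

maybe_run_batch_payload(["sleep", "1"])  # placeholder for a real batch job
```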

In the near future, large scientific collaborations will face unprecedented computing challenges. Processing and storing exabyte datasets requires a federated infrastructure of distributed computing resources. The current systems have proven to be mature and capable of meeting the experiment goals, by allowing the timely delivery of scientific results. However, a substantial amount of interventions from software developers, shifters and operational teams is needed to efficiently manage such heterogeneous infrastructures. A wealth of data can...

10.1051/epjconf/202024503017 article EN cc-by EPJ Web of Conferences 2020-01-01

HammerCloud is a testing service and framework used to commission computing resources, run continuous or on-demand large-scale stress tests, and benchmark components of various distributed systems with realistic full-chain experiment workflows. HammerCloud, used by the ATLAS and CMS experiments in production, has been useful for commissioning both compute resources and components of the complex distributed systems of the LHC experiments, as well as being an integral part of the monitoring suite that is essential for operations and their automation. In this contribution we review recent...

10.1051/epjconf/201921403033 article EN cc-by EPJ Web of Conferences 2019-01-01
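
The continuous-testing idea behind HammerCloud can be sketched as a loop that submits small template jobs to each site and tracks success fractions. The site names, the `submit_test_job` stub, and the 80% threshold below are hypothetical.

```python
# Toy sketch of HammerCloud-style functional testing: periodically submit a
# short template job to every site and track the success fraction. The
# submit stub stands in for real workload-management submission.

import random
from collections import defaultdict

SITES = ["SITE_A", "SITE_B", "SITE_C"]  # hypothetical site names

def submit_test_job(site: str) -> bool:
    """Stub: submit one short full-chain test job and report success."""
    return random.random() > 0.1  # pretend ~90% of test jobs succeed

results: dict[str, list[bool]] = defaultdict(list)
for cycle in range(100):            # in production this loop runs continuously
    for site in SITES:
        results[site].append(submit_test_job(site))

for site, outcomes in results.items():
    eff = sum(outcomes) / len(outcomes)
    state = "OK" if eff > 0.8 else "degraded"  # assumed threshold
    print(f"{site}: success rate {eff:.0%} -> {state}")
```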

The Worldwide LHC Computing Grid (WLCG) today includes more than 150 computing centres where 2 million jobs are executed daily and petabytes of data are transferred between sites. Monitoring the computing activities of the LHC experiments over such a huge heterogeneous infrastructure is extremely demanding in terms of computation, performance and reliability. Furthermore, the generated monitoring flow is constantly increasing, which represents another challenge for the monitoring systems. While existing solutions are traditionally based on...

10.1088/1742-6596/513/3/032048 article EN Journal of Physics Conference Series 2014-06-11

The automation of operations is essential to reduce manpower costs and improve the reliability of the system. The Site Status Board (SSB) is a framework which allows Virtual Organizations to monitor their computing activities at distributed sites and to evaluate site performance. The ATLAS experiment uses the SSB intensively for shifts, for estimating data processing and transfer efficiencies at a particular site, and for implementing automatic exclusion of sites from computing activities in case of potential problems. The SSB provides a real-time aggregated monitoring view...

10.1088/1742-6596/396/3/032072 article EN Journal of Physics Conference Series 2012-12-13
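
The SSB workflow described here, aggregating per-site metrics and excluding underperforming sites, can be illustrated as follows. The metric names, values, and thresholds are invented for the sketch and are not the ATLAS criteria.

```python
# Toy sketch of an SSB-like view: aggregate per-site metrics from several
# sources into one status, and exclude sites that fall below thresholds.

METRICS = {  # hypothetical snapshot: site -> (processing eff., transfer eff.)
    "SITE_A": (0.97, 0.99),
    "SITE_B": (0.55, 0.90),
    "SITE_C": (0.92, 0.40),
}

MIN_PROCESSING_EFF = 0.80  # assumed thresholds
MIN_TRANSFER_EFF = 0.70

def evaluate(site: str) -> str:
    proc, xfer = METRICS[site]
    if proc < MIN_PROCESSING_EFF or xfer < MIN_TRANSFER_EFF:
        return "EXCLUDED"   # automatic exclusion from computing activities
    return "ONLINE"

for site in METRICS:
    print(f"{site}: {evaluate(site)}")
```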

In this document we summarize the experience acquired in the execution of Phase 1. The objective of Phase 1 is "[...] to achieve an informal technical evaluation of Azure and Batch, to understand the technology, APIs, ease-of-use, and any obvious barriers to integration with existing WLCG tools and processes. [...] The resulting feedback will give both parties a chance to assess the basic feasibility for more advanced activities" [1].

10.5281/zenodo.48495 article EN 2016-03-29

The Simulation at Point1 project is successfully running standard ATLAS simulation jobs on the TDAQ HLT resources. The pool of available resources changes dynamically, so we need to be very effective in exploiting the available computing cycles. We present our experience with using the Event Service, which provides event-level granularity of computations. We show the design decisions and the overhead time related to the usage of the Event Service. The improved utilization of the resources is also presented, together with recent developments in monitoring, automatic alerting, and the deployment GUI.

10.1088/1742-6596/898/8/082012 article EN Journal of Physics Conference Series 2017-10-01
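
Event-level granularity is what makes a dynamically shrinking resource pool exploitable: a preempted node loses at most the event in flight. A minimal sketch of this checkpoint-per-event pattern follows; the checkpoint file and the `simulate_event` stub are assumptions for illustration, not the Event Service implementation.

```python
# Toy sketch of event-level granularity: process one event at a time and
# record each finished event, so a preempted opportunistic node loses at
# most the event currently being processed.

import json
from pathlib import Path

CHECKPOINT = Path("done_events.json")  # hypothetical output-tracking file

def load_done() -> set[int]:
    return set(json.loads(CHECKPOINT.read_text())) if CHECKPOINT.exists() else set()

def simulate_event(event_id: int) -> None:
    pass  # stands in for an expensive detector-simulation step

def run(events: range) -> None:
    done = load_done()
    for event_id in events:
        if event_id in done:
            continue                 # already produced before preemption
        simulate_event(event_id)
        done.add(event_id)
        CHECKPOINT.write_text(json.dumps(sorted(done)))  # commit per event

run(range(10))
print(f"{len(load_done())} events completed")
```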

The automation of ATLAS Distributed Computing (ADC) operations is essential to reduce manpower costs and allow performance-enhancing actions, which improve the reliability of the system. In this perspective a crucial case is the automatic handling of outages of the computing sites' storage resources, which are continuously exploited at the edge of their capabilities. It is challenging to adopt unambiguous decision criteria for resources of non-homogeneous types, sizes and roles. The recently developed Storage Area Automatic Blacklisting (SAAB)...

10.1088/1742-6596/513/3/032098 article EN Journal of Physics Conference Series 2014-06-11
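
The SAAB idea of an unambiguous decision criterion that works across heterogeneous resources can be sketched as a sliding window over recent test results. The window length and failure threshold below are illustrative assumptions.

```python
# Toy sketch of SAAB-like automatic blacklisting: look at a sliding window
# of recent functional-test results for a storage area and switch it
# offline when failures dominate. Window size and threshold are assumed.

from collections import deque

WINDOW = 10          # last N test results considered
MAX_FAILURES = 6     # blacklist when more than this many of them failed

class StorageArea:
    def __init__(self, name: str):
        self.name = name
        self.recent: deque[bool] = deque(maxlen=WINDOW)
        self.blacklisted = False

    def record_test(self, success: bool) -> None:
        self.recent.append(success)
        failures = self.recent.count(False)
        # An unambiguous criterion regardless of the site's size or role:
        # only the recent failure fraction matters.
        self.blacklisted = len(self.recent) == WINDOW and failures > MAX_FAILURES

sa = StorageArea("SITE_B_DATADISK")  # hypothetical storage area name
for ok in [True, False, False, False, False, True, False, False, False, False]:
    sa.record_test(ok)
print(f"{sa.name} blacklisted: {sa.blacklisted}")
```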

The Simulation at Point1 (Sim@P1) project was built in 2013 to take advantage of the ATLAS Trigger and Data Acquisition High Level Trigger (HLT) farm. The HLT farm provides around 100,000 cores, which are critical to ATLAS during data taking. When ATLAS is not recording data, such as during the long shutdowns of the LHC, this large compute resource is used to generate and process simulation data for the experiment. At the beginning of the second long shutdown of the Large Hadron Collider, the infrastructure, including that of Sim@P1, was upgraded. Previous papers emphasised the need for simple, reliable,...

10.1051/epjconf/202024507044 article EN cc-by EPJ Web of Conferences 2020-01-01

After two years of LHC data taking, processing and analysis, and with numerous changes in computing technology, a number of aspects of the experiments' computing, as well as WLCG deployment and operations, need to evolve. As part of the activities of the Experiment Support group in CERN's IT department, reinforced by effort from the EGI-InSPIRE project, we present work aimed at common solutions across all LHC experiments. Such solutions allow us not only to optimize development manpower but also to lower long-term maintenance and support costs....

10.1088/1742-6596/396/3/032048 article EN Journal of Physics Conference Series 2012-12-13

As a joint effort from various communities involved in the Worldwide LHC Computing Grid, the Operational Intelligence project aims at increasing the level of automation in computing operations and reducing human interventions. The distributed computing systems currently deployed by the LHC experiments have proven to be mature and capable of meeting the experimental goals, by allowing the timely delivery of scientific results. However, a substantial number of interventions from software developers, shifters, and operational teams is needed to efficiently...

10.3389/fdata.2021.753409 article EN cc-by Frontiers in Big Data 2022-01-07
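
One concrete Operational Intelligence pattern is mining operations data to flag anomalies automatically instead of paging a shifter. The sketch below uses a simple z-score on invented daily failure counts; the data and the 3-sigma threshold are assumptions, not the project's actual models.

```python
# Toy sketch of anomaly detection on operations data: flag a day whose
# failure count is far above the recent baseline. Data are illustrative.

import statistics

daily_failures = [12, 9, 14, 11, 10, 13, 95]  # hypothetical last 7 days

baseline = daily_failures[:-1]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)
z = (daily_failures[-1] - mean) / stdev

if z > 3.0:  # assumed alerting threshold
    print(f"anomaly: today's failures are {z:.1f} sigma above baseline; "
          "open a ticket automatically instead of paging a shifter")
```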