E. Ryabinkin

ORCID: 0009-0006-8982-9510
Research Areas
  • High-Energy Particle Collisions Research
  • Particle physics theoretical and experimental studies
  • Quantum Chromodynamics and Particle Interactions
  • Particle Detector Development and Performance
  • Nuclear reactor physics and engineering
  • Distributed and Parallel Computing Systems
  • Dark Matter and Cosmic Phenomena
  • Scientific Computing and Data Management
  • Advanced Data Storage Technologies
  • Cosmology and Gravitation Theories
  • Statistical Methods and Bayesian Inference
  • Pulsars and Gravitational Waves Research
  • Cloud Computing and Resource Management
  • Nuclear physics research studies
  • Computational Physics and Python Applications
  • Aquatic and Environmental Studies
  • Advanced Database Systems and Queries
  • Superconducting Materials and Applications
  • Fluid Dynamics Simulations and Interactions
  • Scientific Research and Philosophical Inquiry
  • Advanced mathematical theories
  • Mathematical Control Systems and Analysis
  • Nonlinear Dynamics and Pattern Formation
  • Ocean Waves and Remote Sensing
  • Advanced Numerical Methods in Computational Mathematics

European Organization for Nuclear Research
2012-2025

Hospital Universitário da Universidade de São Paulo
2025

A. Alikhanyan National Laboratory
2015-2024

University of Bergen
2015-2024

University of Houston
2023-2024

Technical University of Košice
2024

Sungkyunkwan University
2024

Institute of Nuclear Physics, Polish Academy of Sciences
2023-2024

Suranaree University of Technology
2023-2024

IMT Atlantique
2024

A review of the distributed grid computing infrastructure for the LHC experiments in Russia is given, with emphasis placed on the construction of the Tier-1 sites at the National Research Centre "Kurchatov Institute" (Moscow) and the Joint Institute for Nuclear Research (Dubna).

10.1088/1742-6596/513/6/062041 article EN Journal of Physics Conference Series 2014-06-11

Simulation of the water flow around a ship hull and of marine propeller operation is considered in this paper; these are popular problems in ship propulsion that are now frequently investigated with a CFD approach. CFD technologies are used for the determination of hull resistance as well as propeller open-water curves according to the usual design methods. The FlowVision software is used for simulations based on solving the RANS equations. The work was carried out together with the supercomputer "HPC 2" of the National Research Center "Kurchatov Institute". The original features of the numerical models...

10.1109/ispras.2017.00027 article EN 2017-11-01

The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. It currently distributes jobs among more than 100,000 cores at well over 120 Grid sites, supercomputing centers, and commercial and academic clouds. ATLAS physicists submit about 1.5 M data processing, simulation and analysis jobs per day, and PanDA keeps all meta-information about job submissions and execution events in an Oracle RDBMS. The above information is used...

10.1016/j.procs.2015.11.051 article EN Procedia Computer Science 2015-01-01
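
The abstract above mentions that PanDA keeps meta-information about job submissions and execution events in an Oracle RDBMS and uses it for monitoring and analysis. The sketch below only illustrates that kind of per-site aggregation over a toy, hypothetical schema (sqlite3 stands in for Oracle); it does not reproduce the real PanDA schema or API.

import sqlite3

# Toy stand-in for a job meta-information store of the kind described above.
# Table and column names (jobs, site, jobstatus) are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (pandaid INTEGER, site TEXT, jobstatus TEXT)")
conn.executemany(
    "INSERT INTO jobs VALUES (?, ?, ?)",
    [
        (1, "SITE_A", "finished"),
        (2, "SITE_A", "failed"),
        (3, "SITE_B", "finished"),
        (4, "SITE_B", "running"),
    ],
)

# The kind of per-site job-state summary a monitoring service would derive
# from such metadata.
for site, status, count in conn.execute(
    "SELECT site, jobstatus, COUNT(*) FROM jobs GROUP BY site, jobstatus"
):
    print(f"{site:8s} {status:9s} {count}")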

In recent years the concepts of Big Data have become well established in IT. Systems managing large data volumes produce metadata that describe the data and the workflows. These metadata are used to obtain information about the current system state and for statistical and trend analysis of the processes these systems drive. Over time the amount of stored metadata can grow dramatically. In this article we present our studies that demonstrate how metadata storage scalability and performance can be improved by using a hybrid RDBMS/NoSQL architecture.

10.1088/1742-6596/664/4/042023 article EN Journal of Physics Conference Series 2015-12-23
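
As a rough illustration of the hybrid RDBMS/NoSQL idea summarized above, the sketch below keeps recent ("hot") metadata in a relational store and archives older records into a document store. sqlite3 and a plain Python dict stand in for the real backends, and all table and field names are hypothetical.

import json
import sqlite3
import time

rdbms = sqlite3.connect(":memory:")
rdbms.execute("CREATE TABLE task_meta (taskid INTEGER, created REAL, payload TEXT)")

now = time.time()
rdbms.executemany(
    "INSERT INTO task_meta VALUES (?, ?, ?)",
    [
        (1, now - 90 * 86400, '{"status": "done"}'),   # cold record
        (2, now - 3600, '{"status": "running"}'),      # hot record
    ],
)

document_store = {}          # stand-in for a NoSQL document archive
cutoff = now - 30 * 86400    # archive everything older than ~30 days

# Move cold rows into the document store, then drop them from the RDBMS so
# the relational side stays small and fast for operational queries.
for taskid, created, payload in rdbms.execute(
    "SELECT taskid, created, payload FROM task_meta WHERE created < ?", (cutoff,)
).fetchall():
    document_store[taskid] = {"created": created, **json.loads(payload)}
rdbms.execute("DELETE FROM task_meta WHERE created < ?", (cutoff,))

print("hot rows :", rdbms.execute("SELECT COUNT(*) FROM task_meta").fetchone()[0])
print("archived :", document_store)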

Large-scale scientific experiments produce vast volumes of data. These data are stored, processed and analyzed in a distributed computing environment. The life cycle of experimental data is managed by specialized software such as Distributed Data Management and Workload Management Systems. In order to be interpreted and mined, experimental data must be accompanied by auxiliary metadata, which are recorded at each data processing step. Metadata describes and represents the objects or results of experiments, allowing them to be shared by various applications,...

10.1088/1742-6596/762/1/012017 article EN Journal of Physics Conference Series 2016-10-01

The rapid increase of the data volume from the experiments running at the Large Hadron Collider (LHC) prompted the physics computing community to evaluate new data handling and processing solutions. Russian grid sites and universities' clusters scattered over a large area aim at the task of uniting their resources for future productive work, at the same time giving an opportunity to support the collaborations. In our project we address the fundamental problem of designing an architecture to integrate distributed storage for the LHC and other data-intensive science...

10.1088/1742-6596/898/6/062016 article EN Journal of Physics Conference Series 2017-10-01

The paper presents some results of the implementation of a quasi-hydrodynamic (QHD) approach as the finite volume method (FVM) solver mulesQHDFoam on the basis of OpenFOAM. The application of the QHD numerical algorithm to the simulation of an attractor of internal gravity waves is considered. A comparison of the FVM with the spectral element method (SEM) implemented in Nek5000 is given. Convergence of the QHD model to the SEM results is shown. A Big Data analysis technique (Proper Orthogonal Decomposition) is used as a tool for comparing the calculation results between QHDFoam and Nek5000.

10.1109/ispras.2018.00027 article EN 2018-11-01
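
The comparison tool named in the abstract, Proper Orthogonal Decomposition, reduces a set of flow snapshots to a few energetic spatial modes that can then be compared between codes. The sketch below is a generic snapshot POD via the thin SVD on synthetic data; it assumes both solvers' fields have been sampled onto a common grid and is not taken from the paper itself.

import numpy as np

rng = np.random.default_rng(0)
n_points, n_snapshots = 500, 40
x = np.linspace(0.0, 2.0 * np.pi, n_points)
t = np.linspace(0.0, 1.0, n_snapshots)

# A travelling-wave-like field plus small solver-dependent noise; in practice
# the columns would be interpolated snapshots from the two solvers.
base = np.sin(np.outer(x, 2.0 * np.pi * t))
field_a = base + 0.01 * rng.standard_normal(base.shape)   # "solver A"
field_b = base + 0.01 * rng.standard_normal(base.shape)   # "solver B"

def pod_modes(snapshots):
    """Return spatial POD modes and normalized modal energies via thin SVD."""
    fluct = snapshots - snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(fluct, full_matrices=False)
    return U, s**2 / np.sum(s**2)

modes_a, energy_a = pod_modes(field_a)
modes_b, energy_b = pod_modes(field_b)

# Compare the two calculations through their leading modes and energies.
overlap = abs(np.dot(modes_a[:, 0], modes_b[:, 0]))
print("energy in first mode:", energy_a[0], energy_b[0])
print("overlap of leading modes:", overlap)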

The LHC experiments are preparing for the precision measurements and further discoveries that will be made possible by the higher energies available from April 2015 (LHC Run2). The need for simulation, data processing and analysis would overwhelm the expected capacity of the grid infrastructure and computing facilities deployed as the Worldwide LHC Computing Grid (WLCG). To meet this challenge, the integration of opportunistic resources into the computing model is highly important. The Tier-1 facility at the Kurchatov Institute (NRC-KI) in Moscow is a part of the WLCG and it...

10.1088/1742-6596/664/9/092018 article EN Journal of Physics Conference Series 2015-12-23

There is a pronounced interest in cloud computing in the scientific community. However, current offerings are rarely suitable for high-performance computing, in large part due to the overhead of the underlying virtualization components. The purpose of this paper is to propose a design and implementation of a system whose overhead is small enough to allow it to be practically used for a wide range of workloads. First, we describe the requirements for the desired system, classify workloads, and identify those for which transfer to the cloud is practical. Then, we review related...

10.15514/ispras-2013-24-1 article EN cc-by Proceedings of the Institute for System Programming of RAS 2013-01-01

The creation of a global e-Infrastructure involves an integration of isolated local resources into a common heterogeneous computing environment. In 2014 pioneering work to develop a large-scale data- and task-management system for the federated infrastructure was started at the National Research Centre "Kurchatov Institute" (NRC KI, Moscow). As part of this work, we have designed, developed and deployed a portal to submit payloads to this infrastructure. It combines the Tier-1 site, the cloud infrastructure and the supercomputer of the Kurchatov Institute. This...

10.1016/j.procs.2015.11.064 article EN Procedia Computer Science 2015-01-01

The major subject of this paper is a distributed computing status report for the ALICE experiment at the Russian sites just before data taking at the Large Hadron Collider at CERN. We present the usage of the application software, AliEn [1], on top of the modern EGEE middleware called gLite for simulation and analysis at Tier-2 sites in accordance with the computing model [2]. We outline the results on CPU and disk space usage at RDIG for the first LHC data from the exposition of the detector.

10.1088/1742-6596/219/7/072054 article EN Journal of Physics Conference Series 2010-04-01

Computationally intensive applications (HPC) and data analytics within Big Data (BDA) constitute a combined workflow for scientific discovery. Yet the development and execution environments of HPC and BDA have different origins, and even within a single discipline there are large differences in the requirements for the provisioning of libraries and tools on the Linux operating system. Traditionally, library versioning is addressed with software Modules and other multi-root building frameworks managed by system administration. This does not necessarily...

10.1109/monetec.2018.8572238 article EN 2018-10-01

Creating an effective pump that is able to maintain blood circulation in a heart with appropriate medical indications is undoubtedly a crucial task. The first versions of such devices are currently being created and tested on the basis of IFPM SB RAS, Novosibirsk. This work is devoted to numerical simulations in order to optimize the pump parameters. Equations of viscous incompressible fluid flow are used for this purpose. The implemented algorithm is based on a regularization of the initial equations, which applies the finite volume method and avoids...

10.1109/ispras47671.2019.00026 article EN 2019-12-01
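
The abstract states that equations of viscous incompressible fluid flow are used and that the algorithm rests on a regularization of these equations discretized with the finite volume method. For reference only, the unregularized starting point in standard notation is written below (as LaTeX); the specific regularization terms used in the paper are not reproduced here.

% Viscous incompressible flow: \mathbf{u} is the velocity, p the pressure,
% \rho the constant density, \nu the kinematic viscosity.
\begin{aligned}
  \nabla \cdot \mathbf{u} &= 0, \\
  \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\,\mathbf{u}
    &= -\frac{1}{\rho}\,\nabla p + \nu\,\Delta \mathbf{u}.
\end{aligned}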

A Virtual Basin project and some features of its implementation for High Performance Computing (HPC) are presented in this paper. There are many attempts around the world to create a virtual basin approach to ship hydrodynamic simulations, and the current project is well aligned with this activity. The main features described in the paper, in contrast to analogues, are: firstly, developing a simple tool for designers without deep expertise in numerical methods, and, secondly, implementing access to remote HPC resources. The problems solved on these resources are shown by the example...

10.1016/j.procs.2015.11.016 article EN Procedia Computer Science 2015-01-01

The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. ATLAS, one of the largest collaborations ever assembled in the history of science, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. To manage the workflow across all of its hundreds of computing centers, it uses the PanDA (Production and Distributed...

10.1051/epjconf/201610801003 article EN cc-by EPJ Web of Conferences 2016-01-01

A review of the distributed computing infrastructure of the WLCG collaboration resource centres for the LHC experiments is presented. Particular attention is paid to the description of the tasks being solved and the main services of the new Tier-1 resource centre created at the National Research Centre "Kurchatov Institute" (Moscow) to serve ALICE, ATLAS and LHCb.

10.20537/2076-7633-2015-7-3-621-630 article RU cc-by-nd Computer Research and Modeling 2015-06-01

On the threshold of LHC data taking there were intensive tests and upgrades of the GRID application software for all experiments on top of the modern LCG middleware (gLite). The update of such software for the ALICE experiment at the LHC, AliEn [1], had provided stable and secure operation of the sites developing data. The activity of the Russian RDIG (Russian Data Intensive GRID) computer federation, which is a distributed Tier-2 centre, is devoted to simulation and analysis in accordance with the computing model [2]. Eight sites of this federation are interesting in terms of their middleware requirements...

10.1088/1742-6596/331/7/072066 article EN Journal of Physics Conference Series 2011-12-23

The paper describes a high-performance computer network optimized for the transmission of the experimental data generated at CERN and its potential use by the Large Hadron Collider (LHC) community, including the Russian segment of this network. The Worldwide LHC Computing Grid (WLCG) is a global computing infrastructure whose mission is to provide resources for storing, distributing and analyzing the data produced at CERN, making them equally accessible to all participants, regardless of their physical location. WLCG is a multi-layer distributed infrastructure...

10.1109/monetec55448.2022.9960772 article EN 2022-10-27