- Distributed and Parallel Computing Systems
- Cloud Computing and Resource Management
- Estrogen and related hormone effects
- Scientific Computing and Data Management
- Steroid Chemistry and Biochemistry
- Parallel Computing and Optimization Techniques
- Advanced Data Storage Technologies
- Blockchain Technology Applications and Security
- Software-Defined Networks and 5G
- IoT and Edge/Fog Computing
- Natural product bioactivities and synthesis
- Distributed systems and fault tolerance
- Organic Chemistry Cycloaddition Reactions
- Research Data Management Practices
- Functional Brain Connectivity Studies
- Peer-to-Peer Network Technologies
- Reservoir Engineering and Simulation Methods
- Economic and Technological Systems Analysis
- Privacy-Preserving Technologies in Data
- Economic and Technological Developments in Russia
- Bioactive Compounds and Antitumor Agents
- Cryptography and Data Security
- Big Data Technologies and Applications
- Mitochondrial Function and Pathology
- Interconnection Networks and Systems
St Petersburg University
2015-2024
Plekhanov Russian University of Economics
2020-2021
Saint Peter's University
2016
RWTH Aachen University
2015
University of Amsterdam
2003-2014
Academic Medical Center
2012
Vrije Universiteit Amsterdam
2010
Amsterdam University of the Arts
2007
Peter the Great St. Petersburg Polytechnic University
2006-2007
Research Institute of Obstetrics and Gynecology named after D.O. Ott
1992-2005
Recent advances in Internet and grid technologies have greatly enhanced the life cycle of scientific experiments. In addition to compute- and data-intensive tasks, large-scale collaborations involving geographically distributed scientists on e-infrastructures are now possible. Scientific workflows, which encode the logic of experiments, are becoming valuable resources. Sharing these resources and letting scientists worldwide work together on one experiment is essential for promoting knowledge transfer and speeding up...
The Grid-based Virtual Laboratory AMsterdam (VLAM-G) provides a science portal for distributed analysis in applied scientific research. It offers scientists remote experiment control, data management facilities, and access to resources by providing cross-institutional integration of information and resources in a familiar environment. Its main goal is to provide a unique integration of existing standards and software packages. This paper describes the design and prototype implementation of the VLAM-G platform. In this testbed we applied several recent...
Patients with mild cognitive impairment (MCI) do not always convert to dementia. In such cases, abnormal neuropsychological test results may not validly reflect symptoms due to brain disease, and the usual brain-behavior relationships may be absent. This study examined symptom validity in a memory clinic sample and its effect on associations between hippocampal volume and memory performance. Eleven of 170 consecutive patients (6.5%; 13% of those younger than 65 years) referred to the clinics showed noncredible performance on validity tests...
Blockchain is a developing technology that can provide users with such advantages as decentralization, data security, and transparency of transactions. It has many applications, one of them being the decentralized finance (DeFi) industry. DeFi is a huge aggregator of various financial blockchain protocols. At the moment, the total value locked in these protocols reaches USD 82 billion. Every day more new users come to place their investments. The concept involves the creation of a single ecosystem in which blockchains interact with each other. The problem...
Large-scale scientific applications require extensive support from middleware and frameworks that provide the capabilities for distributed execution in a Grid environment. In particular, one example of such middleware is a Grid-enabled workflow management system. In this paper we present the WS-VLAM system, describe its current design, and report on developments targeting efficient and scalable execution of large workflows on the Grid.
A virtual private supercomputer is an efficient way of conducting experiments in a high-performance computing environment, and the main role in this approach is played by virtualization and data consolidation. During an experiment, virtualization is used to abstract the application from the underlying hardware and operating system, offering a consistent API for distributed computations. In turn, data consolidation is used to store initial data and results in a storage system that offers convenient processing. Combined, these APIs form a solid basis for shifting user focus...
The Grid brings the power of many computers to scientists. However, the development of Grid-enabled applications requires knowledge about the infrastructure and low-level API services. In turn, workflow management systems provide a high-level environment for rapid prototyping of experimental computing systems. Coupling both paradigms is important for the scientific community: it makes the Grid easily available to the end user. The paradigm of data-driven execution is one of the ways to enable distributed execution on the Grid. The work presented in this paper was carried out...
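Data-driven execution means a workflow component fires as soon as all of its input data are available, rather than following a fixed control-flow schedule. A minimal illustrative sketch, with hypothetical names not taken from the paper:

```python
# Minimal sketch of data-driven workflow execution: each task runs as soon
# as all of its input dependencies exist in the shared data space.
# The dict-based task format and function names are assumptions.

def run_dataflow(tasks, data):
    """tasks: name -> (input_names, fn); fires a task once all inputs exist."""
    pending = dict(tasks)
    while pending:
        ready = [n for n, (ins, _) in pending.items()
                 if all(i in data for i in ins)]
        if not ready:
            raise RuntimeError("deadlock: unsatisfied dependencies")
        for name in ready:
            ins, fn = pending.pop(name)
            data[name] = fn(*(data[i] for i in ins))  # produce this task's output
    return data

result = run_dataflow(
    {"clean": (["raw"], lambda r: r.strip()),
     "upper": (["clean"], str.upper)},
    {"raw": "  grid data  "},
)
print(result["upper"])  # GRID DATA
```

In a real Grid setting the `ready` tasks would be dispatched to remote resources in parallel instead of being run in-process; the firing rule is the same.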
This paper presents a hybrid resource management environment, operating on both the application and system levels, developed for minimizing the execution time of parallel applications with a divisible workload on heterogeneous grid resources. The environment is based on an adaptive workload balancing algorithm (AWLB) incorporated into the DIstributed ANalysis Environment (DIANE) user-level scheduling (ULS) environment. The AWLB ensures optimal workload distribution based on discovered application requirements and measured resource parameters. The ULS maintains the resource pool, enables...
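The core idea of balancing a divisible workload is to give each worker a chunk proportional to its measured performance. A minimal sketch of that proportional split, assuming a simple per-worker speed benchmark (this is not the paper's AWLB implementation):

```python
# Hypothetical sketch of divisible-workload balancing: chunk sizes are
# proportional to each worker's measured speed, so all workers finish
# at roughly the same time. Names and the speed model are assumptions.

def balance_workload(total_units: int, speeds: list[float]) -> list[int]:
    """Split `total_units` across workers proportionally to `speeds`."""
    total_speed = sum(speeds)
    shares = [int(total_units * s / total_speed) for s in speeds]
    # Hand the rounding remainder to the fastest workers first.
    remainder = total_units - sum(shares)
    for i in sorted(range(len(speeds)), key=lambda i: -speeds[i])[:remainder]:
        shares[i] += 1
    return shares

# A fast, a medium, and a slow node sharing 100 work units:
print(balance_workload(100, [4.0, 2.0, 1.0]))  # → [58, 28, 14]
```

The adaptive part of an AWLB-style scheme would re-measure `speeds` during execution and rebalance the remaining units accordingly.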
Science gateways often rely on workflow engines to execute applications on distributed infrastructures. We investigate six software architectures commonly used to integrate workflow engines into science gateways. In tight integration, the workflow engine shares components with the gateway. In service invocation, the engine is isolated and invoked through a specific interface. In task encapsulation, the engine is wrapped as a computing task and executed on the infrastructure. In the pool model, engines are bundled in an agent that connects to a central pool to fetch workflows. In nested workflows, the engine is integrated...
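Of the architectures listed, the pool model is the easiest to sketch: an agent that bundles a workflow engine polls a central pool and executes whatever workflows it finds. A minimal illustrative sketch with an in-process queue standing in for the central pool (the queue, the polling policy, and all names are assumptions, not the paper's design):

```python
# Illustrative sketch of the "pool model": an agent bundled with a workflow
# engine repeatedly fetches workflows from a central pool and runs them.
# A local queue stands in for the remote pool service.

from queue import Queue, Empty

def agent_loop(pool: Queue, engine, max_idle_polls: int = 3) -> list:
    """Fetch workflows from `pool` and run each with `engine` until idle."""
    results, idle = [], 0
    while idle < max_idle_polls:
        try:
            workflow = pool.get_nowait()
        except Empty:
            idle += 1          # pool empty; give up after a few idle polls
            continue
        results.append(engine(workflow))  # engine executes one workflow
        idle = 0
    return results

pool = Queue()
for wf in ["align.cwl", "segment.cwl"]:
    pool.put(wf)
print(agent_loop(pool, engine=lambda wf: f"done:{wf}"))
# → ['done:align.cwl', 'done:segment.cwl']
```

The appeal of this model is that agents can run behind firewalls on arbitrary resources, since they initiate all connections to the pool themselves.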
Workflow management has been widely adopted by scientific communities as a valuable tool to carry out complex experiments. It offers the possibility to perform computations for data analysis and simulations while hiding the details of the infrastructures underneath. There are many workflow management systems that offer a large variety of generic services to coordinate the execution of workflows. Nowadays, there is a trend to extend this functionality to cover all possible requirements that may arise from a user community. However, multiple...
Neuroimaging is a field that benefits from distributed computing infrastructures (DCIs) to perform data processing and analysis, which is often achieved using grid workflow systems. Collaborative research in neuroimaging requires ways to facilitate exchange between different groups, in particular to enable the sharing, re-use, and interoperability of applications implemented as workflows. The SHIWA project provides solutions for sharing workflows across workflow systems and DCI resources. In this paper we present and analyse how the...
Neuroimaging is a field that benefits from distributed computing infrastructures (DCIs) to perform data- and compute-intensive processing and analysis. Using grid workflow systems not only automates the processing pipelines, but also enables domain researchers to implement their expertise on how best to process neuroimaging data. To share this expertise and promote collaborative research in the neurosciences, ways to facilitate the exchange, re-use, and interoperability of workflow applications between different groups are required. The SHIWA...