- Cloud Computing and Resource Management
- Distributed and Parallel Computing Systems
- IoT and Edge/Fog Computing
- Graph Theory and Algorithms
- Real-time simulation and control systems
- Scientific Computing and Data Management
- Blockchain Technology Applications and Security
- Software System Performance and Reliability
- Advanced Data Storage Technologies
- Parallel Computing and Optimization Techniques
- Data Stream Mining Techniques
- Cloud Data Security Solutions
- Advanced Database Systems and Queries
- Embedded Systems and FPGA Design
- Data Management and Algorithms
- Stochastic Gradient Optimization Techniques
- Material Dynamics and Properties
- Smart Grid Energy Management
- Anomaly Detection Techniques and Applications
- Ionic liquids properties and applications
- Force Microscopy Techniques and Applications
- Advanced Software Engineering Methodologies
- Business Process Modeling and Analysis
University of Tartu
2012-2024
Emerging serverless computing technologies, such as function as a service (FaaS), enable developers to virtualize the internal logic of an application, simplifying the management of cloud-native services and allowing cost savings through billing and scaling at the level of individual functions. Serverless is therefore rapidly shifting the attention of software vendors to the challenge of developing cloud applications deployable on FaaS platforms. In this vision paper, we present the research agenda of the RADON project (...
In this study, we examined the thickness of the electrical double layer (EDL) in ionic liquids using density functional theory (DFT) calculations and molecular dynamics (MD) simulations. We focused on BF4- anion adsorption from 1-ethyl-3-methylimidazolium tetrafluoroborate (EMImBF4) liquid at the Au(111) surface. At both the DFT and MD levels, we evaluated the capacitance-potential dependence for the Helmholtz model of the interface. Using MD simulations, we also explored a more realistic, multilayer EDL accounting for ion layering....
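For context, the Helmholtz model referenced in the abstract treats the EDL as a simple parallel-plate capacitor, so its capacitance follows directly from the layer thickness; a standard form of this relation (symbols here are the usual textbook ones, not taken from the paper) is:

$$
C_H = \frac{\varepsilon_0 \varepsilon_r}{d}
$$

where $d$ is the thickness of the Helmholtz layer, $\varepsilon_r$ the effective relative permittivity of the interfacial layer, and $\varepsilon_0$ the vacuum permittivity. This is why an estimate of EDL thickness from DFT or MD translates directly into a capacitance-potential prediction under the Helmholtz picture.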
Cloud computing, with its promise of virtually infinite resources, seems well suited to solving resource-greedy scientific computing problems. To study the effects of moving parallel applications onto the cloud, we deployed several benchmarks, like matrix–vector operations and the NAS benchmarks, and DOUG (Domain decomposition On Unstructured Grids) on the cloud. DOUG is an open source software package for the iterative solution of very large sparse systems of linear equations. The detailed analysis of the cloud showed that the benefit...
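To illustrate the class of problem DOUG targets, here is a minimal sketch of an iterative solver for a sparse linear system. This is only a toy Jacobi iteration for illustration; DOUG itself uses domain decomposition with preconditioned Krylov-subspace methods, which this sketch does not reproduce.

```python
# Toy sketch: Jacobi iteration for a sparse system A x = b.
# A is stored as a dict-of-dicts holding only the nonzero entries,
# mimicking a sparse-matrix format. Convergence is guaranteed here
# because the example matrix is strictly diagonally dominant.

def jacobi(A, b, iters=200):
    """Iteratively solve A x = b; A maps row -> {col: value}."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        x_new = [0.0] * n
        for i in range(n):
            row = A[i]
            # sum of off-diagonal contributions with the previous iterate
            s = sum(v * x[j] for j, v in row.items() if j != i)
            x_new[i] = (b[i] - s) / row[i]
        x = x_new
    return x

# Small diagonally dominant tridiagonal system with solution x = [1, 1, 1]
A = {0: {0: 4.0, 1: 1.0},
     1: {0: 1.0, 1: 4.0, 2: 1.0},
     2: {1: 1.0, 2: 4.0}}
b = [5.0, 6.0, 5.0]
x = jacobi(A, b)  # converges toward [1.0, 1.0, 1.0]
```

Because each sweep touches only the nonzeros, the per-iteration cost scales with the number of nonzero entries rather than n², which is what makes iterative methods attractive for very large sparse systems.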
Scientific computing deals with solving complex scientific problems by applying resource-hungry computer simulation and modeling tasks on top of supercomputers, grids and clusters. Typical scientific computing applications can take months to create and debug with de facto parallelization solutions like the Message Passing Interface (MPI), in which the bulk of the details have to be handled by the users. Frameworks based on the MapReduce model, like Hadoop, greatly simplify creating distributed applications by handling most of the fault recovery automatically for...
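The MapReduce model mentioned above can be sketched in a few lines of plain Python: a map phase emits key-value pairs, a shuffle groups them by key, and a reduce phase aggregates each group. Hadoop runs these same three phases, but distributed across machines and with automatic fault recovery; the word-count example below is only a single-process illustration of the programming model.

```python
# Single-process sketch of the MapReduce programming model (word count).
from itertools import groupby

def map_phase(record):
    # emit (word, 1) for every word in the input record
    for word in record.split():
        yield (word, 1)

def reduce_phase(key, values):
    # aggregate all counts emitted for one key
    return (key, sum(values))

records = ["grid cloud cluster", "cloud cloud grid"]

# map: apply map_phase to every record
pairs = [kv for r in records for kv in map_phase(r)]

# shuffle: group the emitted pairs by key
pairs.sort(key=lambda kv: kv[0])
result = dict(reduce_phase(k, (v for _, v in grp))
              for k, grp in groupby(pairs, key=lambda kv: kv[0]))
# result == {"cloud": 3, "cluster": 1, "grid": 2}
```

The appeal over MPI is that the user writes only the two pure functions; partitioning, data movement and recovery from failed workers are the framework's job.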
Solving systems of linear algebraic equations (SLAE) is a problem often encountered in fields like engineering, physics, computer science and economics. As the number of unknowns in the system grows, the runtime and memory requirement of solving the SLAE increase dramatically. To manage this, the execution of the solver should be parallelizable and performed in distributed environments like the cloud. However, to fully take advantage of the cloud infrastructure, one must adapt frameworks that can successfully exploit its resources, like the MapReduce framework, which...
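The computational kernel that an iterative SLAE solver repeats every iteration, a sparse matrix-vector product, maps naturally onto the MapReduce model: each nonzero entry emits a partial product keyed by its row, and the reduce step sums per row. The sketch below is a hypothetical single-process illustration of that decomposition, not the adaptation used in the paper.

```python
# Sketch: sparse matrix-vector product y = A x expressed as map + reduce.
from collections import defaultdict

def spmv_mapreduce(entries, x):
    """entries: iterable of (i, j, a_ij) nonzeros; x: dense vector."""
    # map: each nonzero emits its partial product, keyed by row index
    emitted = [(i, a * x[j]) for i, j, a in entries]
    # reduce: sum the partial products belonging to each row
    y = defaultdict(float)
    for i, partial in emitted:
        y[i] += partial
    return dict(y)

entries = [(0, 0, 2.0), (0, 2, 1.0), (1, 1, 3.0)]
x = [1.0, 2.0, 4.0]
y = spmv_mapreduce(entries, x)  # {0: 6.0, 1: 6.0}
```

Because the map step has no cross-record dependencies, the nonzeros can be split across any number of workers; the per-row reduction is the only synchronization point per iteration.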
The cloud is a promising source of computing resources for solving scientific problems. Several large companies have entered the cloud market in recent times and are providing services like Infrastructure as a Service (IaaS) that provide virtual machines on demand. These can easily be used for high performance needs by applying distributed computing methods and frameworks. The SciCloud project at the University of Tartu has studied adapting algorithms to different frameworks like MapReduce and its optimizations....
In recent years, cloud computing has emerged as an alternative to classical HPC resources like supercomputers, computer clusters and grids. As it provides convenient real-time access to dynamic resources, it appears ideal for solving large scale scientific problems in domains like physics, chemistry, astrophysics, climatology, etc. However, using a large number of machines means that applications must be able to distribute the work between them in a fault tolerant manner, as commodity machines are bound to fail at regular intervals....
Scientific computing applications usually need huge amounts of computational power. The cloud provides interesting high-performance computing solutions, with its promise of virtually infinite resources. However, migrating scientific problems to clouds and the re-creation of a software environment on vendor-supplied OS instances is often a laborious task. It is also assumed that the scientist who is performing the experiments has significant knowledge of computer science and of the migration procedure. Most often, this is not the case....
Scientific computing is a field that applies computer science to solve scientific problems from domains like genetics, biology, material science, chemistry, etc. It is strongly associated with the high performance computing (HPC) and parallel programming fields, as it typically utilizes large scale modeling and simulation and thus requires large amounts of resources. Public clouds seem to be very suitable for solving such problems, but they are often built on commodity hardware and it is not simple to design applications that can efficiently...
Cloud computing, with its promise of virtually limitless resources, seems well suited to solving resource intensive problems from the machine learning and data mining domains, by allowing any distributed application to scale with little difficulty. However, to be able to run these applications on the cloud infrastructure, they must be reduced to frameworks that can successfully exploit it, like Hadoop MapReduce, which offers both automatic parallelization and fault tolerance on commodity hardware. Yet it is not trivial to adapt complex...
The importance of fault tolerance for parallel computing is ever increasing. The mean time between failures (MTBF) is predicted to decrease significantly in future highly parallel systems. At the same time, the current trend of using commodity hardware to reduce the cost of clusters puts pressure on users to ensure the fault tolerance of their applications. Cloud-based resources are one of the environments where the latter holds true. When it comes to embarrassingly parallel data-intensive algorithms, MapReduce has gone a long way in ensuring users can easily utilize these resources without...
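The MTBF argument above can be made concrete with a back-of-the-envelope calculation: assuming independent, exponentially distributed node failures, the expected time to the first failure anywhere in the system is the per-node MTBF divided by the node count. The numbers below are illustrative, not taken from the abstract.

```python
# Why system-level MTBF shrinks at scale: with n independent nodes whose
# failures are exponentially distributed, the time to the *first* failure
# in the whole system has mean node_mtbf / n.

def system_mtbf(node_mtbf_hours, n_nodes):
    return node_mtbf_hours / n_nodes

# A node failing once every ~5 years (43800 h) looks very reliable,
# but a 10000-node cluster then sees some failure every few hours.
single = system_mtbf(43800, 1)      # 43800.0 hours
cluster = system_mtbf(43800, 10000)  # 4.38 hours
```

This is why applications spanning many commodity machines must treat node failure as a routine event rather than an exception.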
Scientific computing applications usually need huge amounts of computational power. The cloud provides interesting high-performance computing solutions, with its promise of virtually infinite resources on demand. However, migrating scientific problems to clouds and the re-creation of a software environment on vendor-supplied OS instances is often a laborious task. It is also assumed that the scientist who is performing the experiments has significant knowledge of computer science and of the migration procedure, which is often not true. Considering...
The importance of fault tolerance for the parallel computing field is ever increasing, as the mean time between failures is predicted to decrease significantly in future highly parallel systems. The current trend of using commodity hardware to reduce the cost of clusters forces users to ensure that their applications are fault tolerant. When it comes to embarrassingly parallel data-intensive algorithms, MapReduce has gone a long way in simplifying the creation of such applications. However, this does not apply to iterative communication-intensive algorithms...
Despite today's fast and rapid modeling and deployment capabilities to meet customer requirements in an agile manner, testing is still of utmost importance to avoid outages, unsatisfied customers, and performance problems. To tackle such issues, (load) testing is one of several approaches. In this paper, we introduce the Continuous Testing Tool (CTT), which enables modeling tests and test infrastructures along with the cloud system under test, as well as deploying and executing tests against a fully deployed system in an automated manner. CTT employs the OASIS...