- Distributed and Parallel Computing Systems
- Cloud Computing and Resource Management
- Scientific Computing and Data Management
- IoT and Edge/Fog Computing
- Advanced Data Storage Technologies
- Blockchain Technology Applications and Security
- Peer-to-Peer Network Technologies
- Software System Performance and Reliability
- Parallel Computing and Optimization Techniques
- Caching and Content Delivery
- Data Stream Mining Techniques
- Computer Graphics and Visualization Techniques
- Distributed systems and fault tolerance
- Research Data Management Practices
- Big Data and Business Intelligence
- Atmospheric and Environmental Gas Dynamics
- 3D Shape Modeling and Analysis
- Islanding Detection in Power Systems
- Time Series Analysis and Forecasting
- Advanced Data Processing Techniques
- Neural Networks and Applications
- Real-time simulation and control systems
- Educational Technology in Learning
- Cyclone Separators and Fluid Dynamics
- Online Learning and Analytics
Instituto de Instrumentación para Imagen Molecular
2012-2023
Universitat Politècnica de València
2013-2023
Parc Científic de la Universitat de València
2017
Universitat Politècnica de Catalunya
2016
Eli Lilly (United States)
1992
In this paper we propose a distributed architecture to provide machine learning practitioners with a set of tools and cloud services that cover the whole development cycle: ranging from model creation, training, validation and testing to serving the model as a service, sharing and publication. In this respect, the DEEP-Hybrid-DataCloud framework allows transparent access to existing e-Infrastructures, effectively exploiting the resources for the most compute-intensive tasks of the cycle. Moreover, it provides scientists...
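As a hedged illustration of the "serving as a service" step of the development cycle mentioned above (not the framework's actual implementation), a trained model can be exposed behind a minimal HTTP prediction endpoint; the file name and route below are assumptions for the sketch.

```python
# Minimal sketch: serve a previously trained model as an HTTP prediction service.
# "model.joblib" and the /predict route are illustrative placeholders.
from flask import Flask, request, jsonify
import joblib

app = Flask(__name__)
model = joblib.load("model.joblib")          # model produced by earlier training/validation steps

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]
    prediction = model.predict([features]).tolist()
    return jsonify({"prediction": prediction})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```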
This paper addresses the impact of vertical elasticity for applications with dynamic memory requirements when running in a virtualized environment. Vertical elasticity is the ability to scale up and down the capabilities of a Virtual Machine (VM). In particular, we focus on memory management to automatically fit, at runtime, the underlying computing infrastructure to the application, thus adapting the memory size of the VM to the consumption pattern of the application. An architecture is described, together with a proof-of-concept implementation, that dynamically adapts the VM memory to prevent...
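The following is a minimal sketch of the vertical-elasticity idea, assuming a KVM/libvirt setup with a memory balloon driver in the guest; the VM name, headroom margin and polling interval are illustrative, not the paper's implementation.

```python
# Sketch of a vertical-elasticity control loop: poll the memory actually used
# inside the guest and resize the VM's memory allocation to track consumption
# plus a safety margin.
import time
import libvirt

MARGIN_KIB = 512 * 1024        # keep 512 MiB of headroom (assumed value)
POLL_SECONDS = 30

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("app-vm")          # hypothetical VM name

while True:
    stats = dom.memoryStats()              # needs the balloon driver in the guest
    used_kib = stats["available"] - stats["unused"]
    target_kib = used_kib + MARGIN_KIB
    # Never exceed the maximum memory configured for the domain.
    target_kib = min(target_kib, dom.maxMemory())
    dom.setMemory(target_kib)              # scale the VM's memory up or down
    time.sleep(POLL_SECONDS)
```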
The advent of open-source serverless computing frameworks has introduced the ability to bring the Functions-as-a-Service (FaaS) paradigm to applications executed on-premises. In particular, data-driven scientific applications can benefit from these frameworks to trigger scalable computation in response to incoming workloads of files to be processed. This paper introduces a framework to achieve on-premises event-driven data processing that features: i) automated provisioning of an elastic Kubernetes cluster that can grow and shrink, in terms...
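A minimal sketch of the event-driven pattern described above: a function is invoked once per incoming file, processes it, and writes the result to an output location. The event fields and the placeholder computation are assumptions; the actual invocation mechanism (e.g. object-storage bucket notifications) is provided by the framework, not shown here.

```python
# Hypothetical per-file handler for event-driven data processing.
import pathlib

def handle_event(event: dict) -> None:
    """Process one file referenced by a storage-event notification."""
    input_path = pathlib.Path(event["input_file"])        # assumed event field
    output_dir = pathlib.Path(event.get("output_dir", "/out"))
    output_dir.mkdir(parents=True, exist_ok=True)

    data = input_path.read_text()
    result = data.upper()                                  # placeholder computation

    (output_dir / f"{input_path.stem}.processed").write_text(result)

if __name__ == "__main__":
    handle_event({"input_file": "/in/sample.txt", "output_dir": "/out"})
```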
This paper presents a general energy management system for HPC clusters and cloud infrastructures that powers off cluster nodes when they are not being used and, conversely, powers them on when they are needed. It can be integrated with different middleware, such as Batch-Queuing Systems or Cloud Management Systems, by using a set of connectors, and it is also able to deal with different mechanisms for powering the computing nodes on and off (such as Wake-on-LAN, Power Device Units, the Intelligent Platform Management Interface and other infrastructure-specific mechanisms). While some...
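For reference, the Wake-on-LAN mechanism mentioned above boils down to broadcasting a "magic packet" (six 0xFF bytes followed by the node's MAC address repeated 16 times) over UDP. The sketch below illustrates that mechanism only; the MAC address and port are example values, not part of the system described in the paper.

```python
# Send a Wake-on-LAN magic packet to power on an idle node.
import socket

def wake_on_lan(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    magic_packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(magic_packet, (broadcast, port))

wake_on_lan("00:11:22:33:44:55")   # example MAC of a powered-off worker node
```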
With the advent of cloud technologies, scientists have access to different infrastructures in order to deploy all the virtual machines they need to perform the computations required by their research work. This paper describes a software architecture and a description language to simplify the creation of the needed resources and the elastic evolution of the computing infrastructure depending on the application requirements and some QoS features.
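To illustrate the concept of a declarative resource description driving elastic deployment (this is not the paper's description language, only a hedged sketch), the required resources and elasticity bounds can be expressed as data and handed to a deployment client, which here is a hypothetical placeholder.

```python
# Conceptual sketch: resources and QoS-driven elasticity bounds as data.
infrastructure = {
    "frontend": {"count": 1, "cpu": 4, "memory_gb": 8, "os": "linux"},
    "worker":   {"count": 2, "cpu": 8, "memory_gb": 16, "os": "linux",
                 "elastic": {"min": 1, "max": 20}},    # bounds for elastic evolution
}

def deploy(description: dict) -> None:
    """Placeholder for a client that provisions the described resources."""
    for role, spec in description.items():
        print(f"provisioning {spec['count']} x {role}: {spec}")

deploy(infrastructure)
```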
Serverless computing has introduced unprecedented levels of scalability and parallelism for the execution of High Throughput Computing tasks. This represents both a challenge and an opportunity for different scientific workloads to be adapted to upcoming programming models that simplify the usage of such platforms. In this paper we introduce a serverless model for highly-parallel file-processing applications. We also describe a middleware implementation that supports customized execution environments based on Docker images on AWS Lambda,...
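A sketch of the highly-parallel file-processing model follows: one Lambda invocation per uploaded file, triggered by an S3 object-creation event. The bucket name and the per-file computation are placeholders; in the middleware described in the paper the real application runs inside a Docker-based runtime.

```python
# Hypothetical Lambda handler: each S3 "ObjectCreated" record is processed
# by an independent, parallel invocation.
import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        local_path = f"/tmp/{key.split('/')[-1]}"
        s3.download_file(bucket, key, local_path)

        # Placeholder for the per-file computation (e.g. running the
        # containerised application on the downloaded file).
        processed_path = local_path + ".out"
        with open(local_path, "rb") as src, open(processed_path, "wb") as dst:
            dst.write(src.read())

        s3.upload_file(processed_path, "output-bucket-example", key + ".out")
```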
This paper proposes an auto-profiling tool for OSCAR, an open-source platform able to support serverless computing in cloud and edge environments. The tool, named OSCAR-P, is designed to automatically test a specified application workflow on different hardware and node combinations, obtaining relevant information on the execution time of the individual components. It then uses the collected data to build performance models using machine learning, making it possible to predict the performance of unseen configurations. Preliminary...
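A minimal sketch of the performance-modelling step described above, under the assumption that profiling runs are summarised as (hardware features, measured runtime) pairs: fit a regression model and query it for an untested configuration. The feature names, data values and choice of random forest are illustrative, not OSCAR-P's actual pipeline.

```python
# Fit a regression model on profiling data and predict runtime for an
# unseen hardware/node combination.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Columns: [cpu_cores_per_node, node_count, input_size_mb]  (synthetic data)
X = np.array([
    [2, 1, 100], [2, 2, 100], [4, 1, 100],
    [4, 2, 200], [8, 2, 200], [8, 4, 400],
])
y = np.array([120.0, 70.0, 65.0, 80.0, 45.0, 50.0])   # measured runtimes (s)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X, y)

unseen = np.array([[8, 3, 300]])
print(f"predicted runtime: {model.predict(unseen)[0]:.1f} s")
```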
Serverless computing was a breakthrough in the Cloud due to the high elasticity capabilities and the fine-grained pay-per-use model offered by the main public providers. Meanwhile, open-source serverless platforms supporting FaaS (Functions as a Service) allow users to take advantage of many of these benefits while operating on-premises within their organizations. This opens the possibility to deploy and exploit them across the different layers of the cloud-to-edge continuum, either on IoT (Internet of Things) devices located at the Edge (i.e. next to the data...
This paper presents a software platform to dynamically deploy complex scientific virtual computing infrastructures on top of Infrastructure as a Service Clouds. The platform orchestrates different services to provision the resources. It installs the appropriate software to satisfy the requirements of the researcher, on both public and on-premises clouds, and provides a web interface that enables users to easily manage the life cycle of the infrastructures. It enables users to define infrastructures and share them with other users, relinquish them, add or remove resources dynamically, create...
There is an opportunity for Distributed Computing Infrastructures (DCIs) to embrace container-based virtualisation to support the efficient execution of scientific applications without the performance penalty commonly introduced by Virtual Machines (VMs). However, containers (e.g. Docker) and VMs feature different image formats and disparate procedures for deployment and management, thus hindering their adoption in hybrid DCIs (HDCIs) comprised of both kinds of resources. This paper describes a workflow based on open-source...
A case study on climate model intercomparison data analysis addressing several classes of multi-model experiments is being implemented in the context of the EU H2020 INDIGO-DataCloud project. Such experiments require the availability of a large amount (multi-terabyte order) of data related to the output of the simulations, as well as the exploitation of scientific data management tools for large-scale analytics. More specifically, the paper discusses in detail a precipitation trend analysis use case in terms of requirements, architectural design of the solution, and infrastructural...
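As a hedged illustration of the kind of computation the precipitation trend use case targets (not the project's actual analytics workflow), a linear trend can be fitted to an annual precipitation series with a least-squares regression; the series below is synthetic placeholder data.

```python
# Fit a linear trend to a synthetic annual precipitation series.
import numpy as np

years = np.arange(1980, 2021)
rng = np.random.default_rng(42)
# Synthetic annual mean precipitation (mm/year) for one region.
precip = 800 + 0.5 * (years - years[0]) + rng.normal(0, 30, years.size)

slope, intercept = np.polyfit(years, precip, deg=1)
print(f"estimated trend: {slope:.2f} mm/year per year")
```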