- Cloud Computing and Resource Management
- Parallel Computing and Optimization Techniques
- Interconnection Networks and Systems
- Advanced Data Storage Technologies
- Distributed and Parallel Computing Systems
- Embedded Systems Design Techniques
- Caching and Content Delivery
- IoT and Edge/Fog Computing
- Stochastic Gradient Optimization Techniques
- Radiation Effects in Electronics
- Low-power high-performance VLSI design
- Software-Defined Networks and 5G
- Distributed systems and fault tolerance
- Advanced Memory and Neural Computing
University of Thessaly
2016-2022
IBM Research - Ireland
2020-2021
With cloud providers constantly seeking the best infrastructure trade-off between the performance delivered to customers and the overall energy/utilization efficiency of their data-centres, hardware disaggregation emerges as a new paradigm for dynamically adapting data-centre characteristics to the running workloads. Such an adaptation enables unprecedented levels of efficiency, both from the standpoint of energy and of the utilization of system resources. In this paper, we present ThymesisFlow, the first, to our knowledge, full-stack prototype...
CPUs typically operate at a voltage which is higher than what is strictly required, using margins to account for process variability and to anticipate any combination of adverse operating conditions. However, these worst-case scenarios occur rarely, if ever, so the overly pessimistic margins result in excessive power dissipation, which in turn leads to decreased performance under power capping. In this paper, we investigate the impact of reducing the voltage beyond the nominal level on the efficiency of CPU power-capping mechanisms on three commercial systems,...
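As a back-of-the-envelope illustration of why excess voltage margin hurts performance under a power cap, the sketch below uses the classic dynamic-power model P ≈ C·V²·f with purely made-up numbers (the capacitance constant, voltages, and cap are not taken from the paper):

```python
# Illustrative sketch: excess voltage margin costs frequency under a power cap.
# Assumes the simple dynamic-power model P ~ C * V^2 * f; all numbers are made up.

C_EFF = 20.0          # arbitrary effective-capacitance constant
POWER_CAP_W = 90.0    # power cap enforced by the capping mechanism
V_NOMINAL = 1.10      # nominal voltage, including the worst-case margin
V_REDUCED = 1.00      # voltage with part of the margin removed

def max_freq_under_cap(voltage):
    """Highest frequency (GHz) that fits under the cap at the given voltage."""
    return POWER_CAP_W / (C_EFF * voltage ** 2)

f_nominal = max_freq_under_cap(V_NOMINAL)
f_reduced = max_freq_under_cap(V_REDUCED)
print(f"frequency under cap at nominal voltage: {f_nominal:.2f} GHz")
print(f"frequency under cap at reduced voltage: {f_reduced:.2f} GHz "
      f"(+{100 * (f_reduced / f_nominal - 1):.0f}%)")
```

With these toy numbers, shaving roughly 10% of the voltage leaves about 21% more frequency headroom under the same cap, since power scales with the square of the voltage.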
The explosive growth of Internet-connected devices will soon result in a flood of generated data, which will increase the demand for network bandwidth as well as for compute power to process the data. Consequently, there is a need for more energy-efficient servers to empower both traditional centralized Cloud data-centers and the emerging decentralized data-centers at the Edges of the Cloud. In this paper, we present our approach, which aims at developing a new class of micro-servers - UniServer - that exceed conservative energy and performance scaling boundaries by introducing...
To improve power efficiency, researchers are experimenting with dynamically adjusting the voltage and frequency margins of systems to just above the minimum required for reliable operation. Traditionally, manufacturers did not allow reducing these margins. Consequently, existing studies use system simulators or software fault-injection methodologies, which are slow, inaccurate, and cannot be applied to realistic workloads. However, recent CPUs allow operation outside the nominal voltage/frequency envelope. We...
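For context, a minimal example of what a software fault-injection experiment looks like: flip one bit in the data of a computation and compare the result against the fault-free ("golden") output. This only illustrates the general methodology the abstract contrasts with real hardware operation; it is not the setup used in the paper.

```python
# Minimal software fault-injection sketch: flip one random bit in the input and
# check whether the workload's output still matches the fault-free run.
import random

def workload(values, threshold=500):
    """The workload under test: count how many values exceed a threshold."""
    return sum(1 for v in values if v > threshold)

def inject_bit_flip(values):
    """Return a copy of the input with one random bit flipped in one element."""
    faulty = list(values)
    idx = random.randrange(len(faulty))
    faulty[idx] ^= 1 << random.randrange(16)
    return faulty

data = list(range(1, 1001))
golden = workload(data)

trials = 1000
corrupted = sum(1 for _ in range(trials)
                if workload(inject_bit_flip(data)) != golden)
print(f"{corrupted}/{trials} injected faults changed the output; "
      f"the rest were masked")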
Energy efficiency is a major concern for cloud computing, with CPUs accounting for a significant fraction of the power consumption of datacenter nodes. CPU manufacturers introduce voltage margins to guarantee correct operation. However, these margins are unnecessarily wide for real-world execution scenarios and translate into increased power consumption. In this paper, we investigate how such margins can be exploited by infrastructure operators, by selectively undervolting nodes at the controlled risk of inducing failures that activate service-level...
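As a toy illustration of the kind of operator policy this line of work studies (not the paper's actual algorithm), the sketch below lowers a node's voltage offset in small steps and backs off as soon as errors are reported; `apply_voltage_offset` and `observed_error_count` are hypothetical stand-ins for platform-specific mechanisms.

```python
# Toy sketch of a selective-undervolting policy: push the voltage offset down in
# small steps and retreat by one step once errors appear. The two platform hooks
# below are hypothetical placeholders, not a real API.
import random

def apply_voltage_offset(node, offset_mv):
    """Hypothetical hook that would program a voltage offset on the node."""
    print(f"[{node}] applying offset {offset_mv} mV")

def observed_error_count(node):
    """Hypothetical hook returning errors seen since the last check (simulated)."""
    return random.choice([0, 0, 0, 0, 1])

def undervolt_node(node, step_mv=10, floor_mv=-100):
    offset = 0
    while offset > floor_mv:
        candidate = offset - step_mv
        apply_voltage_offset(node, candidate)
        if observed_error_count(node) > 0:
            # Errors observed: let service-level fault tolerance recover, back off.
            apply_voltage_offset(node, offset)
            break
        offset = candidate
    return offset

print("settled offset:", undervolt_node("node-42"), "mV")
```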
Chip manufacturers introduce redundancy at various levels of CPU design to guarantee correct operation, even for worst-case combinations of non-idealities in process variation and system operating conditions. This redundancy is implemented partly in the form of voltage margins. However, in a wide range of real-world execution scenarios these margins are excessive and merely translate into increased power consumption, hindering the effort towards higher energy efficiency in both HPC and general-purpose computing. Our study on x86-64...
Modern cloud computing workloads are becoming more demanding by the day in terms of computational resources, as they feature multiple complex components, utilize heterogeneous hardware, and require tremendous amounts of memory. Such attributes of the emerging workloads disrupt the traditional design of infrastructures, which is bound to decisions made at infrastructure build time, and mandate dynamic composability for next-generation infrastructures. Scaling beyond the physical boundaries of server trays and minimizing the over-provisioning of nodes...
Cloud providers offer a variety of storage solutions for hosting data, differing both in price and in performance. For analytics and machine learning applications, object storage services are the go-to solution for datasets that exceed tens of gigabytes in size. However, such a choice results in performance degradation for these applications and requires extra engineering effort in the form of code changes to access data on remote storage. In this paper, we present a generic end-to-end solution that offers seamless access to object storage services, with transparent caching within the compute...
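To make the idea of transparent caching concrete, here is a minimal sketch of a read path that keeps a local on-disk copy of each object and only contacts the object store (via boto3's `get_object`, used here as an example client) on a cache miss. The bucket, key, and cache directory are arbitrary; this illustrates the general pattern, not the system presented in the paper.

```python
# Minimal read-through cache for object storage: serve repeated reads from local
# disk and only fetch from the remote object store on a miss. Illustrative only;
# bucket/key names and the cache location are arbitrary.
import hashlib
import os

import boto3

CACHE_DIR = "/tmp/objcache"
s3 = boto3.client("s3")

def get_object_cached(bucket, key):
    """Return the object's bytes, fetching from S3 only if not cached locally."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    cache_path = os.path.join(
        CACHE_DIR, hashlib.sha256(f"{bucket}/{key}".encode()).hexdigest()
    )
    if os.path.exists(cache_path):          # cache hit: no network access
        with open(cache_path, "rb") as f:
            return f.read()
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()  # cache miss
    with open(cache_path, "wb") as f:       # populate the cache for later reads
        f.write(body)
    return body

data = get_object_cached("my-dataset-bucket", "train/part-0000.parquet")
print(f"read {len(data)} bytes")
```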