- Advanced Data Storage Technologies
- Parallel Computing and Optimization Techniques
- Distributed and Parallel Computing Systems
- Cloud Computing and Resource Management
- Distributed systems and fault tolerance
- Interconnection Networks and Systems
- Software System Performance and Reliability
- Caching and Content Delivery
- Service-Oriented Architecture and Web Services
- Scientific Computing and Data Management
- Business Process Modeling and Analysis
- Cloud Data Security Solutions
- Peer-to-Peer Network Technologies
- Geophysics and Gravity Measurements
- Astronomy and Astrophysical Research
- Software-Defined Networks and 5G
- Big Data Technologies and Applications
- Auction Theory and Applications
- Optimization and Search Problems
- Embedded Systems Design Techniques
- Algorithms and Data Compression
- Radio Astronomy Observations and Technology
- Software Testing and Debugging Techniques
- ERP Systems Implementation and Impact
- Evolutionary Algorithms and Applications
Foundation for Research and Technology Hellas
2014-2025
Trieste Astronomical Observatory
2024
FORTH Institute of Electronic Structure and Laser
1998-2023
FORTH Institute of Computer Science
1997-2022
Czech Academy of Sciences, Institute of Computer Science
2020
University of Crete
1995-2003
Crete University Press
1997
Flash-based solid state drives (SSDs) offer superior performance over hard disks for many workloads. A prominent use of SSDs in modern storage systems is to employ these devices as a cache in the I/O path. In this work, we examine how transparent, online compression can be used to increase the capacity of SSD-based caches, thus increasing the cost-effectiveness of the system. We present FlaZ, an I/O system that operates at the block level and is transparent to existing file-systems. To achieve transparency in the I/O path and maintain high performance, FlaZ provides...
We present and evaluate the ExaNeSt Prototype, which compactly packages 128 Xilinx ZU9EG MPSoCs, 2 TBytes of DRAM, and 8 SSDs into a liquid-cooled rack, using custom interconnection hardware based on 10 Gbps links. We developed this testbed in 2016-2019 in order to leverage the flexibility of FPGAs for experimenting with efficient support of HPC communication among tens of thousands of processors and accelerators in the quest towards Exascale systems and beyond. In the years since then, we have carefully studied the system, and our key design...
EUROSERVER is a collaborative project that aims to dramatically improve data centre energy-efficiency, cost, and software efficiency. It addresses these important challenges through the coordinated application of several key recent innovations: 64-bit ARM cores, 3D heterogeneous silicon-on-silicon integration, and fully-depleted silicon-on-insulator (FD-SOI) process technology, together with new techniques for efficient resource management, including sharing and workload isolation. We are...
ExaNeSt is one of three European projects that support a ground-breaking computing architecture for exascale-class systems built upon power-efficient 64-bit ARM processors. These projects share an "everything-close" and "share-anything" paradigm, which trims down power consumption -- by shortening the distance that signals travel for most data transfers -- as well as cost, footprint area, and installation effort, by reducing the number of devices needed to meet performance targets. In ExaNeSt, we will design and implement: (i) a physical rack...
Power consumption and high compute density are the key factors to be considered when building a node for the upcoming Exascale revolution. Current architectural designs and manufacturing technologies are not able to provide the requested level of power efficiency to realise an operational machine. A disruptive change in the hardware integration process is needed in order to cope with the requirements of this forthcoming computing target. This paper presents ExaNoDe, an H2020 research project aiming at a highly energy efficient integrated...
A Smart City based on data acquisition, handling, and intelligent analysis requires the efficient design and implementation of the respective AI technologies and the underlying infrastructure for seamlessly analyzing large amounts of data in real time. The EU project MARVEL will research solutions that can improve the integration of multiple data sources in a smart-city environment, harnessing the advantages rooted in multimodal perception of the surrounding environment.
Flash-based solid state drives (SSDs) exhibit potential for solving I/O bottlenecks by offering superior performance over hard disks for several workloads. In this work we design Azor, an SSD-based cache that operates at the block level and is transparent to existing applications, such as databases. Our design provides various choices for associativity, write policies, and cache-line size, while maintaining a high degree of I/O concurrency. Our main contribution is to explore the differentiation of HDD blocks according to their expected...
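The cache organization described in this abstract can be illustrated with a minimal set-associative, block-level lookup; the class and parameter names below are illustrative sketches, not taken from Azor itself:

```python
class BlockCache:
    """Toy set-associative block cache. Python dicts preserve insertion
    order, which we reuse as the LRU order within each cache set."""

    def __init__(self, num_sets=4, ways=2):
        self.num_sets = num_sets
        self.ways = ways
        self.sets = [{} for _ in range(num_sets)]   # HDD block number -> data

    def read(self, block_no, hdd_read):
        """Return (data, hit); hdd_read(block_no) stands in for the slow HDD."""
        s = self.sets[block_no % self.num_sets]
        if block_no in s:
            data = s.pop(block_no)
            s[block_no] = data            # hit: move line to MRU position
            return data, True
        data = hdd_read(block_no)         # miss: fetch from the HDD
        if len(s) >= self.ways:
            s.pop(next(iter(s)))          # evict the LRU line of this set
        s[block_no] = data                # admit the block into the SSD cache
        return data, False
```

A write-through or write-back policy, as well as larger cache lines, would extend this skeleton; the abstract indicates Azor exposes such choices as configuration options.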
This paper provides a snapshot summary of the trends in the area of micro-server development and their application to the broader enterprise and cloud markets. Focusing on technology aspects, we provide an understanding of these trends and, specifically, of the differentiation and uniqueness of the approach being adopted by the EUROSERVER FP7 project. The unique technical contributions range from fundamental system and compute unit design and architecture, through to implementation both at the chiplet and nanotechnological integration levels, and the everything-close physical...
Volunteer computing is becoming a new paradigm not only for the computational grid, but also for institutions using production-level data grids, because of the enormous storage potential that may be achieved at low cost with commodity hardware within their own premises. However, this novel "Desktop Data Grid" depends on a set of widely distributed and untrusted nodes, and therefore offers no guarantees about either availability or protection of the stored data. These security challenges must be carefully managed...
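One standard way to manage the data-protection side of such untrusted nodes is client-side integrity verification: the client keeps a digest locally and checks every blob it fetches back. The sketch below assumes a hypothetical `store`/`fetch` node API; it is not the protection scheme of the paper itself:

```python
import hashlib

class UntrustedStore:
    """Stands in for a remote volunteer node, which may corrupt data."""
    def __init__(self):
        self._blobs = {}
    def store(self, key, blob):
        self._blobs[key] = blob
    def fetch(self, key):
        return self._blobs[key]

def put(store, key, data):
    """Upload data; return the digest the client keeps in trusted storage."""
    store.store(key, data)
    return hashlib.sha256(data).hexdigest()

def get(store, key, expected_digest):
    """Download data and verify it against the locally held digest."""
    blob = store.fetch(key)
    if hashlib.sha256(blob).hexdigest() != expected_digest:
        raise ValueError("integrity check failed: node returned tampered data")
    return blob
```

Availability, the other concern the abstract names, would additionally require replication or erasure coding across several nodes.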
Efficient prototyping of a large and complex system can be significantly facilitated by the use of a flexible and versatile physical platform where both new hardware and software components can be readily implemented and tightly integrated in a timely manner. Towards this end, we have developed the 120 × 130 mm QFDB board and its associated firmware, including a development environment. We based the design on an advanced, dense, and modular building block. The board features 4 interconnected Xilinx Zynq Ultrascale+ devices, each one consisting of an ARM-based subsystem...
A primary concern for cloud operators is to increase resource utilization while maintaining good performance for applications. This is particularly difficult to achieve for three reasons: users tend to overprovision their applications, applications are diverse and dynamic, and their performance depends on multiple resources. In this paper, we present Skynet, an automated and adaptive resource management approach that addresses all three concerns. Skynet uses performance level objectives (PLOs) to capture user intentions about required performance more accurately and remove the...
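A PLO-driven manager of this kind is, at its core, a feedback loop: compare a measured metric against the stated objective and grow or shrink the resource share. The function below is a minimal sketch under that assumption; the names (`plo_target_ms`, `step`) and the concrete policy are illustrative, not Skynet's actual algorithm:

```python
def adjust_share(share, measured_latency_ms, plo_target_ms,
                 step=0.1, min_share=0.1, max_share=1.0):
    """Return a new resource share in [min_share, max_share].

    Grows the share when the PLO (a latency target) is violated, and
    reclaims resources when there is comfortable slack below the target.
    """
    if measured_latency_ms > plo_target_ms:          # PLO violated: give more
        share = min(max_share, share + step)
    elif measured_latency_ms < 0.8 * plo_target_ms:  # slack: reclaim for others
        share = max(min_share, share - step)
    return share                                     # within band: unchanged
```

Reclaimed share is what lets the operator raise utilization: resources freed from overprovisioned applications can be handed to co-located ones.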
Building scalable back-end infrastructures for data-centric applications is becoming increasingly important. Applications used in data-centres have complex, multilayer software stacks and are required to scale to a large number of nodes. Today, there is increased interest in improving the efficiency of such stacks. In this paper, we examine the software stack of distributed stream processing, an important application domain. We use a specific streaming system, Borealis [10], and extensively hand-tune its end-to-end data path. We focus on parts...
Server consolidation via virtualization is an essential technique for improving infrastructure cost in modern datacenters. From the viewpoint of datacenter operators, consolidation offers compelling advantages by reducing the number of physical servers, as well as operational costs such as energy consumption. However, performance interference between co-located workloads can be crippling. Conservatively, and at significant cost, operators are forced to keep servers at low utilization levels (typically below 20%) to minimize...
The Internet of Things (IoT) is an emerging field characterized by constrained resources, Internet-based communication, arbitrary topologies, geographical distance, and variable operational conditions. Additionally, IoT architectures typically exhibit at least three tiers: devices, Edge gateways, and Cloud servers. On top of challenging the design of networked systems, these multiple tiers create a web of complexity that makes systems evaluation a demanding endeavor. This paper presents a framework for transforming a cluster lab...
This paper demonstrates the combined use of three simulation tools in support of a co-design methodology for an HPC-focused System-on-a-Chip (SoC) design. The tools make different trade-offs between speed, accuracy, and model abstraction level, and are shown to be complementary. We apply the MUSA trace-based simulator to the initial sizing of vector register length, system-level cache (SLC) size, and memory bandwidth. It has proven very efficient at pruning the design space, as its models enable sufficient accuracy without having to resort...
In this work we examine how transparent compression in the I/O path can improve space efficiency for online storage. We extend the block layer with the ability to compress and decompress data as they flow between the file-system and the disk. Achieving this requires extensive metadata management for dealing with variable block sizes, dynamic mapping, block allocation, explicit work scheduling, and optimizations to mitigate the impact of the additional overheads. Preliminary results show that transparent compression is a viable option for improving effective storage capacity, and it...
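The metadata problem this abstract names, that compressed blocks have variable sizes and therefore need a dynamic logical-to-physical mapping, can be sketched with a toy in-memory "disk"; a real block-layer implementation does this under the file system, on the device itself, and the names below are illustrative:

```python
import zlib

BLOCK_SIZE = 4096   # fixed logical block size seen by the file system

class CompressedDisk:
    """Toy log-structured store: each logical block compresses to a
    variable-size extent, tracked by a per-block (offset, length) map."""

    def __init__(self):
        self.log = bytearray()    # physical log of compressed extents
        self.mapping = {}         # logical block number -> (offset, length)

    def write_block(self, lbn, data):
        assert len(data) == BLOCK_SIZE
        comp = zlib.compress(data)
        off = len(self.log)       # append-only (log-structured) allocation
        self.log += comp
        self.mapping[lbn] = (off, len(comp))

    def read_block(self, lbn):
        off, length = self.mapping[lbn]
        return zlib.decompress(bytes(self.log[off:off + length]))
```

Overwrites in this scheme leave stale extents behind, which is why a real system also needs allocation and cleaning machinery, part of the metadata-management cost the abstract refers to.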
System software overheads in the I/O path, including VFS and file-system code, become more pronounced with emerging low-latency storage devices. Currently, these overheads constitute the main bottleneck in the I/O path and they limit the efficiency of modern storage systems. In this paper we present a taxonomy of current state-of-the-art systems for accelerating accesses to fast storage devices. Furthermore, we present Iris, a new I/O path for applications, that minimizes overheads from the common I/O path. The main idea is the separation of the control and data planes. The control plane consists of an unmodified Linux kernel...
Achieving high I/O throughput on modern servers presents significant challenges. With increasing core counts, server memory architectures become less uniform, both in terms of latency as well as bandwidth. In particular, the bandwidth of the interconnect among NUMA nodes is limited compared to local memory bandwidth. Moreover, congestion and contention introduce additional overheads for remote accesses. These challenges severely limit the maximum achievable storage IOPS rate. Therefore, data and thread placement are critical for...
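The placement decision the abstract motivates can be modeled with a simple heuristic: pin each I/O worker thread to the NUMA node that holds its buffers, and spill to the least-loaded node only when the local one is full. The topology, capacities, and function names below are hypothetical, a sketch of the idea rather than the paper's policy:

```python
def place_threads(threads, node_capacity):
    """threads: list of (thread_id, buffer_node) pairs.
    node_capacity: dict mapping NUMA node -> max threads it can host.
    Returns {thread_id: node}, preferring buffer-local placement so that
    reads avoid crossing the inter-node interconnect."""
    load = {n: 0 for n in node_capacity}
    placement = {}
    for tid, buf_node in threads:
        if load[buf_node] < node_capacity[buf_node]:
            node = buf_node                  # local access: no interconnect hop
        else:
            # spill to the node with the lowest relative load
            node = min(load, key=lambda n: load[n] / node_capacity[n])
        load[node] += 1
        placement[tid] = node
    return placement
```

On Linux, such a decision would typically be enforced with CPU affinity and NUMA memory-binding interfaces; the sketch only captures the decision itself.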