- Parallel Computing and Optimization Techniques
- Distributed Systems and Fault Tolerance
- Advanced Data Storage Technologies
- Security and Verification in Computing
- Cloud Computing and Resource Management
- Distributed and Parallel Computing Systems
- Network Security and Intrusion Detection
- Interconnection Networks and Systems
- Real-Time Systems Scheduling
- Software-Defined Networks and 5G
- Advanced Malware Detection Techniques
- Advanced Data Processing Techniques
- Robotics and Automated Systems
Kitware (United States), 2021
ETH Zurich, 2012-2016
Hardware disaggregation has emerged as one of the most fundamental shifts in how we build computer systems over the past decades. While disaggregation has been successful for several types of resources (storage, power, and others), memory disaggregation has yet to happen. We make the case that the time for memory disaggregation has arrived. We look at past disaggregation success stories and learn that their success depended on two requirements: addressing a burning issue and being technically feasible. We examine memory disaggregation through this lens and find that both requirements are finally met. Once available, memory disaggregation will require software support to be...
We present Barrelfish/DC, an extension to the Barrelfish OS which decouples physical cores from a native kernel, and furthermore the kernel itself from the rest of the OS and application state. In Barrelfish/DC, kernel code on any core can be quickly replaced, kernel state can be moved between cores, and cores can be added and removed from the system transparently to applications and OS processes, which continue to execute. Barrelfish/DC is a multikernel with two novel ideas: the use of boot drivers to abstract cores as regular devices, and a partitioned capability system for memory management which externalizes core-local kernel state. We...
Memory-centric computing demands careful organization of the virtual address space, but traditional methods for doing so are inflexible and inefficient. If an application wishes to address larger physical memory than virtual address bits allow, if it wishes to maintain pointer-based data structures beyond process lifetimes, or if it wishes to share large amounts of memory across simultaneously executing processes, legacy interfaces for managing the address space are cumbersome and often incur excessive overheads. We propose a new operating system design that promotes virtual address spaces...
Many modern workloads are a heterogeneous mix of parallel applications running on machines with complex processors and memory systems. These applications often interfere with each other by contending for the limited set of resources in a machine, leading to performance degradation.
For decades, database engines have found the generic interfaces offered by operating systems at odds with the need for efficient utilization of hardware resources. As a result, most engines circumvent the OS and manage hardware resources directly. With the growing complexity and heterogeneity of modern hardware, database engines are now facing a steep increase in the complexity they must absorb to achieve good performance. Taking advantage of recent proposals in operating system design, such as multi-kernels, in this paper we explore the development of a lightweight kernel tailored to data...
Building persistent memory (PM) data structures is difficult because crashes interrupt operations, leaving data in an inconsistent state. Solving this requires augmenting code that modifies PM state to ensure that interrupted operations can be completed or undone. Today, this is done using careful, hand-crafted code, a compiler pass, or page faults. We propose a new, easy way to transform a volatile data structure to work with PM that uses a cache-coherent accelerator to do the augmentation, and we show it may outperform existing approaches...
In a multi-CPU server, memory modules are local to the CPU to which they are connected, forming a nonuniform memory access (NUMA) architecture. Because non-local accesses are slower than local accesses, the NUMA architecture might degrade application performance. Similar slowdowns occur when an I/O device issues nonuniform DMA (NUDMA) operations, as the device is connected to memory via a single CPU. NUDMA effects therefore degrade performance similarly to NUMA effects.
Verified systems software has generally had to assume the correctness of the operating system and its provided services (like the networking and file system stacks). Even though there exist verified systems, the specifications for these components do not compose with applications to produce a fully verified high-performance stack.
In this paper, we argue that an operating system structured as a distributed system needs a coordination and name service to make OS services work correctly. While the distributed structure allows applying algorithms from the distributed-computing field, it also suffers from similar problems, such as synchronization, naming, and locking of service instances.
Effective coordination and synchronization between processes remains a challenge that becomes even more important with the rise of multicore hardware. This thesis introduces Octopus, a coordination service for the Barrelfish operating system. Octopus addresses the problem of coordinating concurrent activities in Barrelfish. The design is influenced by ideas from distributed computing. We show that these ideas are transferrable to operating systems. We used a declarative, logic programming engine to implement parts of Octopus and evaluate the benefits and drawbacks of this...
Rust is the first practical programming language that has the potential to provide fine-grained isolation of untrusted computations at the language level. A combination of zero-overhead safety, i.e., safety without a managed runtime and garbage collection, and a unique ownership discipline enables isolation in systems with tight performance budgets, e.g., databases, network processing frameworks, browsers, and even operating system kernels.
High performance rack-scale offerings package disaggregated pools of compute, memory, and storage hardware in a single rack to run diverse workloads with varying requirements, including applications that need low and predictable latency. The intra-rack network is typically high speed Ethernet, which can suffer from congestion leading to packet drops and may not satisfy the stringent tail latency requirements of some workloads (including remote memory/storage accesses). In this paper, we design Predictable Low...
In this paper, we rethink how an OS supports virtual memory. Classical VM is an opaque abstraction of RAM, backed by demand paging. However, most systems today (from phones to data-centers) do not page, and indeed may require the performance benefits of non-paged physical memory, precise NUMA allocation, etc. Moreover, MMU hardware is now useful for other purposes, such as detecting page access or providing large page translation. Accordingly, the venerable VM interface in OSes like Windows and Linux has acquired a plethora...