
The Concurrent and Resilient Systems Lab (CORES) is a research lab, led by Prof. Michal Friedman, in the Systems Group of the Department of Computer Science (D-INFK) at ETH Zurich. Our research is shaped by the new hardware era: we aim to drive the next generation of computing by developing fundamental building blocks for modern systems. By leveraging software and hardware co-design, we build efficient, correct, and resilient concurrent algorithms, data structures, and systems that unlock the full potential of emerging technologies.
Projects
Architectural Primitives Design
Our research explores extending the instruction set architecture (ISA) with new primitives designed for modern concurrent systems and workloads that access both local and remote data. We study how specialized instructions can simplify synchronization, coordination, and data movement across distributed and heterogeneous memory environments. Building on these capabilities, we redesign existing algorithms and system software to take full advantage of them. The goal is to make concurrent algorithms easier to implement, more efficient, and simpler to reason about in terms of correctness.
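To illustrate why richer primitives can simplify concurrent code, the sketch below models two classic "hardware" instructions in Python (each made atomic with a lock): compare-and-swap, which forces the programmer to write a retry loop, and fetch-and-add, where a single call suffices. This is a generic teaching model, not any specific primitive proposed by the lab.

```python
import threading

class Word:
    """A shared memory word exposing two 'hardware' primitives,
    modeled with a lock so that each call executes atomically."""
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def compare_and_swap(self, expected, new):
        # Classic CAS: succeeds only if the word still holds `expected`.
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

    def fetch_and_add(self, delta):
        # A richer single-instruction primitive: no retry loop needed.
        with self._lock:
            old = self._value
            self._value += delta
            return old

def increment_with_cas(word):
    # With only CAS, a correct increment requires read-validate-retry.
    while True:
        old = word._value  # racy read, validated by the CAS below
        if word.compare_and_swap(old, old + 1):
            return

counter = Word()
workers = (
    [threading.Thread(target=lambda: [increment_with_cas(counter)
                                      for _ in range(1000)]) for _ in range(2)]
  + [threading.Thread(target=lambda: [counter.fetch_and_add(1)
                                      for _ in range(1000)]) for _ in range(2)]
)
for t in workers: t.start()
for t in workers: t.join()
print(counter._value)  # 4000
```

Both halves of the workload produce the same result, but the fetch-and-add path is a single call with no failure case to reason about, which is the kind of simplification a well-chosen ISA primitive buys.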
Memory Management Across Complex Memory Hierarchies
Modern systems increasingly rely on complex, multi-tier memory hierarchies that include both local and disaggregated memory enabled by technologies such as Compute Express Link (CXL). Our research studies how to efficiently manage and allocate data across these heterogeneous memory resources while addressing challenges in performance, consistency, and remote memory access. We investigate programming abstractions, data structures, and runtime techniques that simplify memory management across emerging memory technologies.
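A minimal sketch of the placement problem described above: a toy two-tier heap in which newly allocated data lands in a capacity-rich remote tier (e.g. CXL-attached memory) and is promoted to a small fast local tier once it becomes hot. The capacities, the access-count promotion policy, and all names here are illustrative assumptions, not a real allocator.

```python
class TieredHeap:
    """Toy model of a two-tier memory hierarchy: a small fast local
    tier and a larger remote tier. Objects are promoted to the fast
    tier once their access count crosses a threshold."""
    def __init__(self, fast_capacity=2, hot_threshold=3):
        self.fast_capacity = fast_capacity
        self.hot_threshold = hot_threshold
        self.tier = {}   # key -> "fast" | "remote"
        self.hits = {}   # key -> access count

    def allocate(self, key):
        # New data starts in the capacity-rich remote tier.
        self.tier[key] = "remote"
        self.hits[key] = 0

    def access(self, key):
        self.hits[key] += 1
        fast_used = sum(t == "fast" for t in self.tier.values())
        if (self.tier[key] == "remote"
                and self.hits[key] >= self.hot_threshold
                and fast_used < self.fast_capacity):
            self.tier[key] = "fast"  # promote hot data to local memory
        return self.tier[key]

heap = TieredHeap()
heap.allocate("a"); heap.allocate("b")
for _ in range(3):
    heap.access("a")
print(heap.tier["a"], heap.tier["b"])  # fast remote
```

Real systems must additionally handle demotion, migration cost, and consistency across tiers; the sketch only captures the basic hot/cold placement decision.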
Fault Tolerance for Large-Scale Applications
Our group investigates fault tolerance for large-scale applications, focusing on how new memory technologies and cheaper data persistence affect data management. Our work centers on modeling emerging memory and storage technologies to better understand their behavior under different workloads and usage scenarios, and on introducing new instructions, hardware, and software techniques that support durable, consistent, and performant systems. This allows us to design and implement solutions with varying levels of persistence guarantees, ensuring that data remains reliably stored across crashes and unexpected failures.
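The core invariant behind most durability schemes can be shown with a small redo-logging sketch: an update is first appended to a log and persisted (modeling a cache-line flush plus fence) before the in-place state is touched, so that replaying the durable log after a crash reconstructs a consistent state. All class and method names here are illustrative, not an API from the lab's work.

```python
class DurableStore:
    """Toy redo-logging scheme. `stable_log` models persistent media
    that survives a crash; `volatile` and `log` model state that is
    lost when power fails."""
    def __init__(self):
        self.volatile = {}    # in-place state (lost on crash)
        self.log = []         # log buffer (lost on crash)
        self.stable_log = []  # survives crashes

    def persist_log(self):
        # Models a cache-line writeback plus store fence.
        self.stable_log = list(self.log)

    def put(self, key, value, crash_before_apply=False):
        self.log.append((key, value))
        self.persist_log()        # durability point for this update
        if crash_before_apply:
            return                # simulated power failure
        self.volatile[key] = value

    def recover(self):
        # Replay the durable log to rebuild a consistent state.
        self.volatile = {}
        for key, value in self.stable_log:
            self.volatile[key] = value
        self.log = list(self.stable_log)

store = DurableStore()
store.put("x", 1)
store.put("y", 2, crash_before_apply=True)  # crash after logging
store.recover()
print(store.volatile)  # {'x': 1, 'y': 2}
```

Weaker or stronger persistence guarantees correspond to moving the durability point (the flush-and-fence) earlier or later relative to the in-place update, which is exactly the design space the paragraph above refers to.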
Hardware-Accelerated Algorithms
We investigate algorithmic redesign for modern heterogeneous hardware, exploring how components such as GPUs and FPGAs can accelerate core system operations. Rather than relying solely on traditional CPU-centric approaches, we rethink algorithms to better exploit parallelism, specialized hardware capabilities, and modern system architectures. Our work studies how these redesigns can reduce overheads and improve performance for demanding real-world workloads, including system-level tasks such as memory management and data processing.
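One concrete example of such a redesign is reduction: a sequential left-to-right fold has a dependency chain as long as the input, while a pairwise tree reduction has logarithmic depth and maps naturally onto the wide parallelism of GPUs and FPGAs. The sketch below models the tree pattern sequentially; on real hardware each round would execute in parallel.

```python
def tree_reduce(values, op):
    """Log-depth pairwise reduction. Each pass over `level` models one
    parallel round in which adjacent pairs are combined at once."""
    level = list(values)
    while len(level) > 1:
        nxt = [op(level[i], level[i + 1])
               for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:          # odd element carries over
            nxt.append(level[-1])
        level = nxt
    return level[0]

print(tree_reduce(range(1, 9), lambda a, b: a + b))  # 36
```

For 8 inputs the tree needs 3 rounds instead of 7 dependent steps; at the scales of real workloads this depth reduction is what lets accelerators pay off.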
Energy-Friendly Systems
Our research focuses on energy-efficient computing across the full system stack, from hardware and computing platforms to software and cloud services. We study how architectural choices, data structures, and system designs influence the energy consumption and carbon footprint of real-world workloads. By measuring and analyzing energy usage across system layers, we aim to identify the components and design decisions that have the greatest impact on efficiency. Our goal is to develop design principles for building more sustainable, resource-efficient computing systems.
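The accounting step in such measurements often reduces to integrating sampled power over time. The sketch below does this with trapezoidal integration over (timestamp, watts) pairs; the sampling source (e.g. hardware energy counters) is outside the sketch, and the sample values are made up for illustration.

```python
def energy_joules(samples):
    """Integrate (timestamp_s, power_w) samples into energy in joules
    using the trapezoidal rule between consecutive samples."""
    total = 0.0
    for (t0, p0), (t1, p1) in zip(samples, samples[1:]):
        total += (t1 - t0) * (p0 + p1) / 2
    return total

# Two seconds of hypothetical power readings for one component.
samples = [(0.0, 10.0), (1.0, 14.0), (2.0, 12.0)]
print(energy_joules(samples))  # 25.0
```

Running this per component (CPU package, memory, accelerator) and comparing the integrals is the simplest form of the layer-by-layer attribution described above.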