Nayana Prasad Nagendra

ORCID: 0000-0003-3972-0497
Research Areas
  • Parallel Computing and Optimization Techniques
  • Advanced Data Storage Technologies
  • Distributed Systems and Fault Tolerance
  • Cloud Computing and Resource Management
  • Security and Verification in Computing
  • Physical Unclonable Functions (PUFs) and Hardware Security
  • Advanced Memory and Neural Computing

American Rock Mechanics Association
2023

Princeton University
2018-2020

The large instruction working sets of private and public cloud workloads lead to frequent instruction cache misses and costs in the millions of dollars. While prior work has identified the growing importance of this problem, to date, there has been little analysis of where these misses come from and what opportunities exist to improve them. To address this challenge, this paper makes three contributions. First, we present the design and deployment of a new, always-on, fleet-wide monitoring system, AsmDB, that tracks front-end bottlenecks. AsmDB uses hardware support...

10.1145/3307650.3322234 article EN 2019-06-14

For decades, architects have designed cache replacement policies to reduce cache misses. Since not all misses affect processor performance equally, researchers have also proposed policies focused on reducing the total miss cost rather than the total miss count. However, prior cost-aware policies were designed specifically for data caching and are either inappropriate or unnecessarily complex for instruction caching. This paper presents EMISSARY, the first family of cost-aware replacement policies designed for instruction caching. Observing that modern architectures entirely tolerate many instruction cache misses, EMISSARY resists...

10.1145/3579371.3589097 article EN 2023-06-16
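
The core idea above, protecting lines whose misses are costly while letting cheap misses happen, can be illustrated with a toy replacement policy. This is a hedged sketch under invented names (`CostAwareCache`, the per-line priority bit), not the actual EMISSARY policy from the paper: high-cost lines simply resist eviction until no low-cost victim remains.

```python
from collections import OrderedDict

class CostAwareCache:
    """Toy LRU cache in which 'high-cost' lines resist eviction.

    Illustrative sketch only: one priority bit per line, and eviction
    prefers low-cost victims. Not the actual EMISSARY policy.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()  # addr -> high_cost flag, kept in LRU order
        self.misses = 0

    def access(self, addr, high_cost=False):
        if addr in self.lines:
            self.lines.move_to_end(addr)  # hit: refresh LRU position
            return True
        self.misses += 1
        if len(self.lines) >= self.capacity:
            # Evict the least-recently-used low-cost line if one exists;
            # fall back to true LRU when every resident line is high-cost.
            victim = next((a for a, hc in self.lines.items() if not hc),
                          next(iter(self.lines)))
            del self.lines[victim]
        self.lines[addr] = high_cost
        return False
```

With capacity 2, inserting line 1 as high-cost and then streaming lines 2 and 3 evicts only the low-cost line, so a later access to line 1 still hits; plain LRU would have evicted line 1 instead.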

Software security techniques rely on correct execution by the hardware. Securing hardware components has been challenging due to their complexity and the proportionate attack surface they present during design, manufacture, deployment, and operation. Recognizing that external communication represents one of the greatest threats to a system's security, this paper introduces the TrustGuard containment architecture. TrustGuard contains malicious and erroneous behavior using a relatively simple, pluggable gatekeeping component...

10.1145/3297858.3304020 article EN 2019-04-04
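
The containment principle described above, a small trusted gatekeeper vetting everything that leaves the system, can be sketched at the application level. The names and checking scheme here are invented for illustration; TrustGuard itself gates external communication based on hardware-level checks of execution, not application outputs.

```python
from collections import Counter

def untrusted_sort(data):
    """Stand-in for a complex, possibly compromised component."""
    return sorted(data)

def gatekeeper_send(inputs, result):
    """Release `result` externally only if a simple trusted check passes.

    Hedged sketch of the containment idea: the checker is far simpler
    than the component it guards. Here, verifying sortedness and
    element counts is linear work, cheaper than redoing the sort.
    """
    in_order = all(a <= b for a, b in zip(result, result[1:]))
    is_permutation = Counter(inputs) == Counter(result)
    if in_order and is_permutation:
        return result  # allowed out of the system
    raise RuntimeError("gatekeeper: untrusted output rejected")
```

The design point this mirrors is that checking a result can be much simpler, and therefore much easier to trust, than producing it.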

It is well known that the datacenters hosting today's cloud services waste a significant number of cycles on front-end stalls. However, prior work has provided little insight into the sources of these stalls and how to address them. This article analyzes the causes of instruction cache misses at fleet-wide scale and proposes a new compiler-driven software code prefetching strategy that reduces instruction cache misses by 90%.

10.1109/mm.2020.2986212 article EN IEEE Micro 2020-04-16
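
Why fetching ahead of the demand stream removes sequential instruction misses can be shown with a deliberately simple model. This sketch assumes an infinite cache (cold misses only), 64-byte lines, and a next-N-line prefetcher; the function name and parameters are invented, and the paper's compiler-driven scheme is considerably more sophisticated.

```python
def simulate_misses(trace, prefetch_distance=0):
    """Count cold misses on a byte-address trace, optionally pulling in
    the next `prefetch_distance` cache lines on every access.

    Toy model: infinite cache, 64-byte lines, cold misses only.
    """
    LINE = 64
    resident = set()
    misses = 0
    for addr in trace:
        line = addr // LINE
        if line not in resident:
            misses += 1
            resident.add(line)
        for d in range(1, prefetch_distance + 1):
            resident.add(line + d)  # prefetch upcoming lines
    return misses
```

On a purely sequential trace covering 100 lines, the demand stream alone takes 100 cold misses, while any positive prefetch distance reduces them to a single miss. Real instruction streams branch, which is exactly what makes the compiler's knowledge of likely control flow valuable for deciding what to prefetch.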

Speculation with transactional memory systems helps programmers and compilers produce profitable thread-level parallel programs. Prior work shows that supporting transactions that can span multiple threads, rather than requiring transactions to be contained within a single thread, enables new types of speculative parallelization techniques for both programmers and parallelizing compilers. Unfortunately, software support for multi-threaded transactions (MTXs) comes with significant additional inter-thread communication overhead...

10.1145/3173162.3173172 article EN 2018-03-19

Speculation with transactional memory systems helps programmers and compilers produce profitable thread-level parallel programs. Prior work shows that supporting transactions that can span multiple threads, rather than requiring transactions to be contained within a single thread, enables new types of speculative parallelization techniques for both programmers and parallelizing compilers. Unfortunately, software support for multi-threaded transactions (MTXs) comes with significant additional inter-thread communication overhead...

10.1145/3296957.3173172 article EN ACM SIGPLAN Notices 2018-03-19