Nils Blach

ORCID: 0009-0003-0401-0388
Research Areas
  • Interconnection Networks and Systems
  • Advanced Graph Neural Networks
  • Graph Theory and Algorithms
  • Topic Modeling
  • Natural Language Processing Techniques
  • Cloud Computing and Resource Management
  • Parallel Computing and Optimization Techniques
  • Software-Defined Networks and 5G
  • Network Packet Processing and Optimization
  • Embedded Systems Design Techniques
  • Data Quality and Management
  • Machine Learning in Healthcare
  • Semantic Web and Ontologies
  • Advanced Memory and Neural Computing
  • DNA and Biological Computing

ETH Zurich
2020-2024

We introduce Graph of Thoughts (GoT): a framework that advances prompting capabilities in large language models (LLMs) beyond those offered by paradigms such as Chain-of-Thought or Tree of Thoughts (ToT). The key idea and primary advantage of GoT is the ability to model the information generated by an LLM as an arbitrary graph, where units of information ("LLM thoughts") are vertices, and edges correspond to dependencies between these vertices. This approach enables combining arbitrary LLM thoughts into synergistic outcomes, distilling the essence of whole...

10.1609/aaai.v38i16.29720 article EN Proceedings of the AAAI Conference on Artificial Intelligence 2024-03-24
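
As a reading aid for the entry above, here is a minimal sketch of the thought-graph abstraction the GoT abstract describes: thoughts are vertices, dependency edges record which earlier thoughts a new thought was derived from, and several thoughts can be merged into one. The names (Thought, ThoughtGraph, merge) are illustrative assumptions, not the actual GoT framework API.

```python
# Minimal sketch of a "graph of thoughts": vertices are LLM thoughts,
# edges are dependencies. Names are illustrative, not the GoT API.
from dataclasses import dataclass, field


@dataclass
class Thought:
    id: int
    text: str                                     # the LLM output forming this thought
    parents: list = field(default_factory=list)   # ids of thoughts it depends on


class ThoughtGraph:
    def __init__(self):
        self.thoughts = {}
        self.next_id = 0

    def add(self, text, parents=()):
        t = Thought(self.next_id, text, list(parents))
        self.thoughts[t.id] = t
        self.next_id += 1
        return t

    def merge(self, parent_ids, combine):
        """Combine several thoughts into one new thought -- an aggregation
        pattern that chain- or tree-shaped prompting cannot express."""
        texts = [self.thoughts[i].text for i in parent_ids]
        return self.add(combine(texts), parents=list(parent_ids))


# Usage: three partial solutions are distilled into one synergistic thought.
g = ThoughtGraph()
a = g.add("partial sort of chunk 1")
b = g.add("partial sort of chunk 2")
c = g.add("partial sort of chunk 3")
merged = g.merge([a.id, b.id, c.id], combine=lambda ts: " + ".join(ts))
print(merged.parents)  # [0, 1, 2]
```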

Simple graph algorithms such as PageRank have been the target of numerous hardware accelerators. Yet, there also exist much more complex graph mining algorithms for problems such as clustering or maximal clique listing. These algorithms are memory-bound and thus could be accelerated by techniques such as Processing-in-Memory (PIM). However, they come with non-straightforward parallelism and complicated memory access patterns. In this work, we address this problem with a simple yet surprisingly powerful observation: operations on sets of vertices,...

10.1145/3466752.3480133 article EN 2021-10-17
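
To illustrate the set-centric view the abstract hints at (a sketch only, not the paper's actual instruction set): kernels such as triangle counting spend most of their time intersecting vertex neighborhoods, which is exactly the kind of set operation that could be offloaded to processing-in-memory.

```python
# Sketch: triangle counting expressed almost entirely as set intersections
# over vertex neighborhoods. Illustrative only, not the paper's design.
from collections import defaultdict


def triangle_count(edges):
    nbrs = defaultdict(set)
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    count = 0
    for u, v in edges:
        # |N(u) ∩ N(v)| common neighbours close a triangle over edge (u, v)
        count += len(nbrs[u] & nbrs[v])
    return count // 3  # each triangle is counted once per edge


print(triangle_count([(0, 1), (1, 2), (0, 2), (2, 3)]))  # -> 1
```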

We introduce Graph of Thoughts (GoT): a framework that advances prompting capabilities in large language models (LLMs) beyond those offered by paradigms such as Chain-of-Thought or Tree of Thoughts (ToT). The key idea and primary advantage of GoT is the ability to model the information generated by an LLM as an arbitrary graph, where units of information ("LLM thoughts") are vertices, and edges correspond to dependencies between these vertices. This approach enables combining arbitrary LLM thoughts into synergistic outcomes, distilling the essence of whole...

10.48550/arxiv.2308.09687 preprint EN other-oa arXiv (Cornell University) 2023-01-01

The field of natural language processing (NLP) has witnessed significant progress in recent years, with a notable focus on improving large language models' (LLM) performance through innovative prompting techniques. Among these, prompt engineering coupled with structures has emerged as a promising paradigm, with designs such as Chain-of-Thought, Tree of Thoughts, or Graph of Thoughts, in which the overall LLM reasoning is guided by a structure such as a graph. As illustrated with numerous examples, this paradigm significantly enhances the LLM's capability to...

10.48550/arxiv.2401.14295 preprint EN other-oa arXiv (Cornell University) 2024-01-01
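
The structures this survey discusses can be viewed as constraints on the same underlying dependency relation between reasoning steps; the small sketch below makes that hierarchy concrete. The classification rule is an illustrative simplification (it assumes a connected DAG of steps), not taken from the paper.

```python
# Sketch: Chain-of-Thought, Tree of Thoughts, and Graph of Thoughts differ
# only in which dependency edges between reasoning steps are allowed.
def classify(num_steps, edges):
    """edges: (parent, child) pairs over reasoning steps 0..num_steps-1,
    assumed to form a connected DAG."""
    indeg = [0] * num_steps
    outdeg = [0] * num_steps
    for p, c in edges:
        indeg[c] += 1
        outdeg[p] += 1
    if all(d <= 1 for d in indeg) and all(d <= 1 for d in outdeg):
        return "chain"   # one linear sequence of thoughts
    if all(d <= 1 for d in indeg):
        return "tree"    # branching, but no thought aggregates several parents
    return "graph"       # aggregation / feedback edges allowed


print(classify(3, [(0, 1), (1, 2)]))   # chain
print(classify(3, [(0, 1), (0, 2)]))   # tree
print(classify(3, [(0, 2), (1, 2)]))   # graph (step 2 merges steps 0 and 1)
```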

Graph databases (GDBs) are crucial in academic and industry applications. The key challenges in developing GDBs are achieving high performance, scalability, programmability, and portability. To tackle these challenges, we harness established practices from the HPC landscape to build a system that outperforms all past systems presented in the literature by orders of magnitude, for both OLTP and OLAP workloads. For this, we first identify and crystallize performance-critical building blocks of GDB design, and abstract them into a portable...

10.1145/3581784.3607068 article EN 2023-10-30
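
To give a flavour of what "abstracting performance-critical building blocks behind a portable interface" can look like, here is a heavily simplified sketch under assumed names (NeighborhoodStore, two_hop); it is not the API of the system described above.

```python
# Hedged sketch: one GDB building block (neighborhood access) behind a
# portable interface that both transactional updates (OLTP) and analytical
# traversals (OLAP) go through. Names are hypothetical.
from abc import ABC, abstractmethod


class NeighborhoodStore(ABC):
    @abstractmethod
    def add_edge(self, u, v):   # OLTP path: update
        ...

    @abstractmethod
    def neighbors(self, u):     # OLAP path: traversal
        ...


class InMemoryStore(NeighborhoodStore):
    def __init__(self):
        self.adj = {}

    def add_edge(self, u, v):
        self.adj.setdefault(u, set()).add(v)

    def neighbors(self, u):
        return self.adj.get(u, set())


def two_hop(store, u):
    """An analytical kernel written only against the portable interface."""
    return {w for v in store.neighbors(u) for w in store.neighbors(v)}


s = InMemoryStore()
s.add_edge("a", "b")
s.add_edge("b", "c")
print(two_hop(s, "a"))  # {'c'}
```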

Professionals in modern healthcare systems are increasingly burdened by documentation workloads. Documentation of the initial patient anamnesis is particularly relevant, forming the basis of successful further diagnostic measures. However, manually prepared notes are inherently unstructured and often incomplete. In this paper, we investigate the potential of NLP techniques to support doctors in this matter. We present a dataset of German monologues, formulate a well-defined information extraction task under constraints...

10.48550/arxiv.2011.01696 preprint EN other-oa arXiv (Cornell University) 2020-01-01
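
For intuition about the information extraction task described above, a target record might map a free-text German anamnesis monologue to structured fields; the example transcript and field names below are invented for illustration and are not taken from the dataset.

```python
# Hypothetical illustration of an anamnesis information-extraction target.
# The transcript and the field names are invented, not from the dataset.
transcript = (
    "Der Patient klagt seit drei Tagen über starke Kopfschmerzen und Übelkeit."
)

extracted = {
    "symptoms": ["Kopfschmerzen", "Übelkeit"],  # headache, nausea
    "severity": "stark",                        # strong
    "duration": "seit drei Tagen",              # for three days
}
print(extracted)
```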

With the rapid growth of machine learning (ML) workloads in datacenters, existing congestion control (CC) algorithms fail to deliver the required performance at scale. ML traffic is bursty and bulk-synchronous and thus requires quick reaction and strong fairness. We show that CC algorithms that use delay as a main signal react too slowly and are not always fair. We design SMaRTT, a simple sender-based algorithm that combines delay, ECN, and optional packet trimming for fast and precise window adjustments. At the core of SMaRTT lies a novel...

10.48550/arxiv.2404.01630 preprint EN arXiv (Cornell University) 2024-04-02
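
The following is a hedged sketch of a sender-side window update that reacts to both a delay signal and ECN marks, in the spirit of the description above but not the actual SMaRTT algorithm; the constants, gains, and update rules are assumptions chosen only to make the example runnable.

```python
# Hedged sketch of a sender-based congestion window update combining delay
# and ECN signals. Illustrative only; not the SMaRTT algorithm.
def update_cwnd(cwnd, rtt_us, base_rtt_us, ecn_marked,
                target_queuing_us=20.0, gain=0.8, mss=1.0):
    queuing = rtt_us - base_rtt_us             # delay signal: estimated queuing
    if ecn_marked or queuing > target_queuing_us:
        # multiplicative decrease proportional to how far we are over target
        over = max(queuing - target_queuing_us, 0.0) / max(rtt_us, 1.0)
        return max(mss, cwnd * (1.0 - gain * over) - (mss if ecn_marked else 0.0))
    # additive increase when both signals say the path is uncongested
    return cwnd + mss


cwnd = 10.0
for rtt, ecn in [(105.0, False), (160.0, True), (110.0, False)]:
    cwnd = update_cwnd(cwnd, rtt_us=rtt, base_rtt_us=100.0, ecn_marked=ecn)
    print(round(cwnd, 2))
```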

In this paper, we present PolarStar, a novel family of diameter-3 network topologies derived from the star product of two low-diameter factor graphs. The proposed PolarStar construction gives the largest known diameter-3 topologies for almost all radixes. When compared to state-of-the-art networks, PolarStar achieves a 31% geometric-mean increase in scale over Bundlefly, 91% over Dragonfly, and 690% over 3-D HyperX. PolarStar has many other desirable properties, including a modular layout, large bisection, high resilience to link failures, and a large number of feasible...

10.48550/arxiv.2302.07217 preprint EN other-oa arXiv (Cornell University) 2023-01-01
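
For intuition about the star product used above: it combines two factor graphs over the pair set V1 × V2, applying a bijection per structure edge; with the identity bijection it reduces to the familiar Cartesian product. The sketch below shows this generic construction; the concrete factor graphs and bijections used by PolarStar are not reproduced here.

```python
# Sketch of a generic star-product construction over two factor graphs.
# With the identity bijection this is the plain Cartesian product.
def star_product(V1, E1, V2, E2, bijection=None):
    """Vertices are pairs (a, x); (a, x) ~ (b, y) iff a == b and {x, y} in E2,
    or {a, b} in E1 and y == f_{(a,b)}(x) for an edge-associated bijection f."""
    if bijection is None:
        bijection = lambda a, b, x: x          # identity -> Cartesian product
    E1s = {frozenset(e) for e in E1}
    E2s = {frozenset(e) for e in E2}
    vertices = [(a, x) for a in V1 for x in V2]
    edges = set()
    for a, x in vertices:
        for b, y in vertices:
            if (a, x) == (b, y):
                continue
            intra = a == b and frozenset((x, y)) in E2s
            inter = frozenset((a, b)) in E1s and y == bijection(a, b, x)
            if intra or inter:
                edges.add(frozenset(((a, x), (b, y))))
    return vertices, edges


# Example: star product of a 3-cycle with a single edge (identity bijections).
V, E = star_product([0, 1, 2], [(0, 1), (1, 2), (0, 2)], ["p", "q"], [("p", "q")])
print(len(V), len(E))  # 6 vertices, 9 edges
```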

Graph databases (GDBs) are crucial in academic and industry applications. The key challenges in developing GDBs are achieving high performance, scalability, programmability, and portability. To tackle these challenges, we harness established practices from the HPC landscape to build a system that outperforms all past systems presented in the literature by orders of magnitude, for both OLTP and OLAP workloads. For this, we first identify and crystallize performance-critical building blocks of GDB design, and abstract them into a portable...

10.48550/arxiv.2305.11162 preprint EN other-oa arXiv (Cornell University) 2023-01-01

Novel low-diameter network topologies such as Slim Fly (SF) offer significant cost and power advantages over the established Fat Tree, Clos, or Dragonfly. To spearhead the adoption of low-diameter networks, we design, implement, deploy, and evaluate the first real-world SF installation. We focus on deployment, management, and operational aspects of our test cluster with 200 servers and carefully analyze its performance. We demonstrate techniques for simple cabling and cabling validation as well as a novel high-performance routing architecture...

10.48550/arxiv.2310.03742 preprint EN other-oa arXiv (Cornell University) 2023-01-01

Many graph representation learning (GRL) problems are dynamic, with millions of edges added or removed per second. A fundamental workload in this setting is dynamic link prediction: using a history of graph updates to predict whether a given pair of vertices will become connected. Recent schemes for link prediction in such settings employ Transformers, modeling individual graph updates as single tokens. In this work, we propose HOT: a model that enhances this line of works by harnessing higher-order (HO) graph structures; specifically, k-hop...

10.48550/arxiv.2311.18526 preprint EN other-oa arXiv (Cornell University) 2023-01-01
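
As a rough illustration of the tokenization idea mentioned above, the sketch below turns a stream of edge updates into simple per-update feature tokens for a queried vertex pair; a higher-order variant would additionally tokenize k-hop structures around the pair. The function name and features are assumptions, not the HOT model.

```python
# Hedged sketch: tokenizing a history of graph updates for dynamic link
# prediction with a Transformer-style encoder. Illustrative only.
def build_token_sequence(updates, u, v, history=16):
    """updates: chronological (timestamp, src, dst, op) tuples, op in {'add', 'del'}.
    Returns simple feature tokens for a hypothetical Transformer encoder."""
    relevant = [up for up in updates if u in up[1:3] or v in up[1:3]]
    tokens = []
    for ts, src, dst, op in relevant[-history:]:
        tokens.append({
            "ts": ts,                               # time of the update
            "op": 1 if op == "add" else 0,          # edge insertion vs deletion
            "touches_both": int({src, dst} == {u, v}),
        })
    return tokens


updates = [(1, 0, 1, "add"), (2, 1, 2, "add"), (3, 0, 2, "add"), (4, 1, 2, "del")]
print(build_token_sequence(updates, 1, 2))
```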

Simple graph algorithms such as PageRank have been the target of numerous hardware accelerators. Yet, there also exist much more complex graph mining algorithms for problems such as clustering or maximal clique listing. These algorithms are memory-bound and thus could be accelerated by techniques such as Processing-in-Memory (PIM). However, they come with non-straightforward parallelism and complicated memory access patterns. In this work, we address this problem with a simple yet surprisingly powerful observation: operations on sets of vertices,...

10.48550/arxiv.2104.07582 preprint EN other-oa arXiv (Cornell University) 2021-01-01