Stefan Abi-Karam

ORCID: 0000-0002-6697-8517
Research Areas
  • Advanced Graph Neural Networks
  • Machine Learning in Materials Science
  • Parallel Computing and Optimization Techniques
  • Advanced Neural Network Applications
  • Scientific Computing and Data Management
  • Advanced Memory and Neural Computing
  • Graph Theory and Algorithms
  • Distributed and Parallel Computing Systems
  • Natural Language Processing Techniques
  • Domain Adaptation and Few-Shot Learning
  • Embedded Systems Design Techniques
  • Advanced Data Storage Technologies
  • Topic Modeling
  • CCD and CMOS Imaging Sensors
  • Software Testing and Debugging Techniques
  • Neural Networks and Applications
  • Human Pose and Action Recognition

Georgia Institute of Technology
2023-2024

Georgia Tech Research Institute
2023-2024

Graph neural networks (GNNs) have recently exploded in popularity thanks to their broad applicability to graph-related problems such as quantum chemistry, drug discovery, and high energy physics. However, meeting the demand for novel GNN models and fast inference simultaneously is challenging due to the gap between developing efficient accelerators and the rapid creation of new models. Prior art focuses on accelerating specific classes of GNNs, such as Graph Convolutional Networks (GCN), but lacks the generality to support a wide range...

10.1109/hpca56546.2023.10071015 article EN 2023-02-01
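For context on what these GNN accelerators compute: the core operation is one round of message passing, where each node aggregates features from its neighbors and applies a learned transform. The sketch below is a deliberately minimal pure-Python illustration (mean aggregation, a toy scalar weight instead of a weight matrix); it is not the architecture from the paper.

```python
def gnn_layer(adj, feats, weight):
    """One round of mean-aggregate message passing.

    adj:    adjacency list, adj[v] = list of neighbors of node v
    feats:  feats[v] = feature vector (list of floats) for node v
    weight: scalar stand-in for the learned weight matrix of a real layer
    """
    out = []
    for v, nbrs in enumerate(adj):
        # Aggregate: average the node's own features with its neighbors'.
        group = [feats[v]] + [feats[u] for u in nbrs]
        agg = [sum(col) / len(group) for col in zip(*group)]
        # Update: apply the (toy) weight to the aggregated message.
        out.append([weight * x for x in agg])
    return out
```

A real accelerator pipelines the gather (aggregate) and apply (update) stages per node or per edge; the functional result is the same as this loop.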

Graph neural networks (GNNs) have recently exploded in popularity thanks to their broad applicability to ubiquitous graph-related problems such as quantum chemistry, drug discovery, and high energy physics. However, meeting the demand for novel GNN models and fast inference simultaneously is challenging because of the gap between the difficulty of developing efficient FPGA accelerators and the rapid pace of creation of new models. Prior art focuses on the acceleration of specific classes of GNNs but lacks the generality to work across...

10.48550/arxiv.2201.08475 preprint EN other-oa arXiv (Cornell University) 2022-01-01

Plenty of graph neural network (GNN) accelerators have been proposed. However, they rely heavily on users' hardware expertise and are usually optimized for one specific GNN model, making them challenging for practical use. Therefore, in this work, we propose GNNBuilder, the first automated, generic, end-to-end GNN accelerator generation framework. It features four advantages: (1) GNNBuilder can automatically generate a wide range of GNN models arbitrarily defined by users; (2) it takes standard PyTorch...

10.1109/fpl60245.2023.00037 article EN 2023-09-04
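The general shape of an "automated, end-to-end accelerator generation framework" is a code generator that walks a model description and emits synthesizable source per layer. The sketch below illustrates only that idea; the function names, layer set, and emitted template are hypothetical and not GNNBuilder's actual API or output.

```python
def generate_kernel(layers):
    """Emit an HLS-style C++ kernel body for a list of (op, width) layers.

    Illustrative only: a real generator would also emit buffer
    declarations, pragmas, and testbench code.
    """
    lines = ["void gnn_kernel(const float *in, float *out) {"]
    for i, (op, width) in enumerate(layers):
        if op == "linear":
            lines.append(f"    linear_layer<{width}>(buf{i}, buf{i + 1});")
        elif op == "relu":
            lines.append(f"    relu<{width}>(buf{i}, buf{i + 1});")
        else:
            raise ValueError(f"unsupported op: {op}")
    lines.append("}")
    return "\n".join(lines)
```

The point is that once models are expressed as a structured description (e.g., a PyTorch module graph), hardware generation becomes a mechanical traversal rather than a manual design effort.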

An increasing number of researchers are finding use for nth-order gradient computations in a wide variety of applications, including graphics, meta-learning (MAML), scientific computing, and, most recently, implicit neural representations (INRs). Recent work shows that the gradient of an INR can be used to edit the data it represents directly, without needing to convert it back to a discrete representation. However, given a function represented as a computation graph, traditional architectures face challenges in efficiently...

10.1109/iccad57390.2023.10323650 article EN IEEE/ACM International Conference on Computer-Aided Design (ICCAD) 2023-10-28
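One standard way to obtain the nth-order gradients mentioned above is forward-mode automatic differentiation with nested dual numbers: each nesting level differentiates once more. The sketch below is a minimal, self-contained illustration of that technique in plain Python, not the hardware approach of the paper.

```python
class Dual:
    """Forward-mode AD number: value + eps * derivative, with eps^2 = 0."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def _lift(self, x):
        # Promote plain numbers so mixed Dual/scalar arithmetic works.
        return x if isinstance(x, Dual) else Dual(x)

    def __add__(self, o):
        o = self._lift(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__

    def __mul__(self, o):
        o = self._lift(o)
        # Product rule on the eps component.
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__


def nth_derivative(f, x, n):
    """Evaluate the n-th derivative of f at x by nesting dual numbers."""
    if n == 0:
        v = f(x)
        while isinstance(v, Dual):  # unwrap any leftover nesting
            v = v.val
        return v
    # Differentiate once, then recurse on the resulting function.
    g = lambda y: f(Dual(y, 1.0)).dot
    return nth_derivative(g, x, n - 1)
```

For f(x) = x^3, this yields 3x^2, 6x, and 6 at successive orders. Each extra order multiplies the work, which is exactly why dedicated architectures for high-order gradients are of interest.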

Machine learning (ML) techniques have been applied to high-level synthesis (HLS) flows for quality-of-result (QoR) prediction and design space exploration (DSE). Nevertheless, the scarcity of accessible, high-quality HLS datasets and the complexity of building such datasets present challenges. Existing datasets have limitations in terms of benchmark coverage, design space enumeration, vendor extensibility, or reproducible and extensible software for dataset construction. Many works also lack user-friendly ways to add more designs, limiting wider...

10.48550/arxiv.2405.00820 preprint EN arXiv (Cornell University) 2024-05-01

Recent machine learning (ML) models have advanced from single-modality single-task to multi-modality multi-task (MMMT). MMMT models typically contain multiple backbones of different sizes along with complicated connections, exposing great challenges for hardware deployment. For scalable and energy-efficient implementations, multi-FPGA systems are emerging as the ideal design choice. However, finding the optimal solution for mapping MMMT models onto multiple FPGAs is non-trivial. Existing mapping algorithms focus on either streamlined linear...

10.23919/date56975.2023.10136962 article EN Design, Automation & Test in Europe Conference & Exhibition (DATE) 2023-04-01
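The "streamlined linear" mapping the abstract contrasts against can be made concrete: splitting a linear chain of model stages across k devices so the busiest device is as lightly loaded as possible is the classic chain-partitioning problem, solvable by binary search over the load cap plus a greedy feasibility check. The stage costs and device count below are illustrative, not taken from the paper.

```python
def partition_chain(costs, k):
    """Split `costs` into at most k contiguous groups, minimizing the
    maximum group sum (i.e., the load on the busiest device)."""
    def feasible(cap):
        groups, cur = 1, 0
        for c in costs:
            if c > cap:               # a single stage exceeds the cap
                return False
            if cur + c > cap:         # open a new device for this stage
                groups, cur = groups + 1, c
            else:
                cur += c
        return groups <= k

    lo, hi = max(costs), sum(costs)   # bounds on the optimal cap
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo
```

MMMT models break the assumption behind this formulation: their backbones form a DAG rather than a chain, which is why mapping them onto multiple FPGAs requires more general algorithms.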

The increasing complexity of modern hardware designs makes it difficult for developers to easily design, validate, and test ideas while creating RTL (Register Transfer Level) logic. HLS (High-Level Synthesis) has been introduced as a tool for designing complex specialized hardware, such as machine learning accelerators, processors, and other FPGA designs, at the behavioral level, significantly reducing the size and time typically necessary for such undertakings. However, the compilation process and the semantic gap between the source code and the synthesized design make...

10.1109/orss58323.2023.10161946 article EN 2023-04-23

Graph neural networks (GNNs) have recently exploded in popularity thanks to their broad applicability to graph-related problems such as quantum chemistry, drug discovery, and high energy physics. However, meeting the demand for novel GNN models and fast inference simultaneously is challenging due to the gap between developing efficient accelerators and the rapid creation of new models. Prior art focuses on accelerating specific classes of GNNs, such as Graph Convolutional Networks (GCN), but lacks the generality to support a wide range...

10.48550/arxiv.2204.13103 preprint EN other-oa arXiv (Cornell University) 2022-01-01

Plenty of graph neural network (GNN) accelerators have been proposed. However, they rely heavily on users' hardware expertise and are usually optimized for one specific GNN model, making them challenging for practical use. Therefore, in this work, we propose GNNBuilder, the first automated, generic, end-to-end GNN accelerator generation framework. It features four advantages: (1) GNNBuilder can automatically generate a wide range of GNN models arbitrarily defined by users; (2) it takes standard PyTorch...

10.48550/arxiv.2303.16459 preprint EN other-oa arXiv (Cornell University) 2023-01-01

An increasing number of researchers are finding use for nth-order gradient computations in a wide variety of applications, including graphics, meta-learning (MAML), scientific computing, and, most recently, implicit neural representations (INRs). Recent work shows that the gradient of an INR can be used to edit the data it represents directly, without needing to convert it back to a discrete representation. However, given a function represented as a computation graph, traditional architectures face challenges in efficiently...

10.48550/arxiv.2308.05930 preprint EN other-oa arXiv (Cornell University) 2023-01-01