- Advanced Graph Neural Networks
- Parallel Computing and Optimization Techniques
- Scientific Computing and Data Management
- Advanced Memory and Neural Computing
- Advanced Data Storage Technologies
- Advanced Neural Network Applications
- Algorithms and Data Compression
- Distributed and Parallel Computing Systems
- Human Pose and Action Recognition
- Graph Theory and Algorithms
- Ferroelectric and Negative Capacitance Devices
- Advanced Image and Video Retrieval Techniques
- Caching and Content Delivery
- Groundwater flow and contamination studies
- Dam Engineering and Safety
- Neural Networks and Applications
- Adversarial Robustness in Machine Learning
- Software Testing and Debugging Techniques
- Landslides and related hazards
- Neural Networks and Reservoir Computing
- Molecular Communication and Nanonetworks
- VLSI and Analog Circuit Testing
- Generative Adversarial Networks and Image Synthesis
- Video Surveillance and Tracking Methods
- IoT and Edge/Fog Computing
Chongqing Jiaotong University
2025
Georgia Institute of Technology
2023-2024
With the ever-increasing hardware design complexity comes the realization that the effort required for verification increases at an even faster rate. Driven by the push of a desired productivity boost and the pull of the leap-ahead capabilities of machine learning (ML), recent years have witnessed the emergence of ML-based techniques to improve the efficiency of verification. In this article, we present a panoramic view of how ML is embraced in verification, both formal and simulation-based, in academia and industry, and the current progress...
The objective of seepage pressure monitoring in earth and rock dams is to predict seepage pressure in order to avoid potential risks. However, existing models for predicting seepage pressure do not account for the numerous nonlinearities among the factors that influence it. These models lack the accuracy and generalizability required for effective risk management. To address this issue, this paper puts forth a methodology for seepage pressure prediction in dams, based on the use of Convolutional Neural Networks (CNN), Long Short-Term Memory (LSTM), and an attention mechanism. The method...
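Below is a minimal sketch of how a CNN, an LSTM, and an attention mechanism can be combined for seepage pressure prediction; the layer sizes, window length, and feature count are illustrative assumptions, not the configuration from the paper.

```python
# Sketch: CNN + LSTM + attention regressor for seepage pressure prediction.
# All sizes (8 influencing factors, 30-step window, 64 hidden units) are
# illustrative assumptions, not the paper's settings.
import torch
import torch.nn as nn


class CNNLSTMAttention(nn.Module):
    def __init__(self, n_features=8, hidden=64):
        super().__init__()
        # 1-D convolution extracts local patterns across the time window
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # LSTM captures longer-range temporal dependencies
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        # Additive attention scores each time step of the LSTM output
        self.attn = nn.Linear(hidden, 1)
        self.head = nn.Linear(hidden, 1)  # predicted seepage pressure

    def forward(self, x):
        # x: (batch, time, features); Conv1d expects (batch, channels, time)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)
        out, _ = self.lstm(h)                        # (batch, time, hidden)
        weights = torch.softmax(self.attn(out), 1)   # (batch, time, 1)
        context = (weights * out).sum(dim=1)         # attention-weighted sum
        return self.head(context).squeeze(-1)


model = CNNLSTMAttention()
window = torch.randn(16, 30, 8)  # 16 samples, 30 time steps, 8 factors
print(model(window).shape)       # torch.Size([16])
```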
Dynamic Graph Neural Networks (DGNNs) are becoming increasingly popular due to their effectiveness in analyzing and predicting the evolution of complex interconnected graph-based systems. However, the hardware deployment of DGNNs still remains a challenge. First, they do not fully utilize hardware resources because temporal data dependencies cause low parallelism. Additionally, there is currently a lack of generic DGNN accelerator frameworks, and existing GNN accelerator frameworks have limited ability to handle dynamic graphs with...
Machine learning (ML) techniques have been applied to high-level synthesis (HLS) flows for quality-of-result (QoR) prediction and design space exploration (DSE). Nevertheless, the scarcity of accessible, high-quality HLS datasets and the complexity of building such datasets present challenges. Existing datasets have limitations in terms of benchmark coverage, design space enumeration, vendor extensibility, or lack reproducible and extensible software for dataset construction. Many works also lack user-friendly ways to add more designs, limiting wider...
Edge computing is a distributed paradigm that collects and processes data at or near the source of generation. On-device learning at the edge relies on device-to-device wireless communication to facilitate real-time data sharing and collaborative decision-making among multiple devices. This significantly improves the adaptability of the system to changing environments. However, as the scale gets larger, communication among devices becomes a bottleneck, because limited bandwidth leads to large data transfer latency. To reduce the amount of data transmission...
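The snippet above is truncated before the proposed technique is named. As a generic, stand-in illustration of shrinking device-to-device payloads, the sketch below applies top-k sparsification to a tensor before transmission; this is an assumption for illustration, not the paper's method.

```python
# Generic illustration: send only the k largest-magnitude entries of a tensor
# (plus their indices) instead of the full dense payload.
# NOT the paper's method -- the abstract is truncated before naming it.
import torch


def sparsify_topk(tensor, ratio=0.1):
    """Keep the largest `ratio` fraction of entries; return (indices, values)."""
    flat = tensor.flatten()
    k = max(1, int(flat.numel() * ratio))
    _, indices = torch.topk(flat.abs(), k)
    return indices, flat[indices]  # keep the original signed values


def densify(indices, values, shape):
    """Rebuild a dense tensor from the sparse payload on the receiving device."""
    flat = torch.zeros(shape).flatten()
    flat[indices] = values
    return flat.reshape(shape)


update = torch.randn(256, 256)                 # e.g. a local model update
idx, val = sparsify_topk(update, ratio=0.05)   # ~5% of the original payload
restored = densify(idx, val, update.shape)
```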
Compute Express Link (CXL) emerges as a solution to the wide gap between computational speed and data communication rates among a host and multiple devices. It fosters a unified, coherent memory space between the host and CXL storage devices such as Solid-State Drives (SSDs) for memory expansion, with corresponding DRAM implemented as the device cache. However, this introduces challenges of substantial cache miss penalties, sub-optimal caching due to the access granularity mismatch between the DRAM "cache" and the SSD "memory", and inefficient hardware cache management. To address...
Implicit Neural Representation (INR) is an innovative approach for representing complex shapes or objects without explicitly defining their geometry or surface structure. Instead, INR represents objects as continuous functions. Previous research has demonstrated the effectiveness of using neural networks as INR for image compression, showcasing performance comparable to traditional methods such as JPEG. However, INR holds potential for various applications beyond compression. This paper introduces Rapid-INR, a novel approach that...
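A minimal sketch of the INR idea for images: a small MLP maps normalized (x, y) coordinates to RGB values, so the network weights act as the compressed representation. The architecture and training loop below are illustrative assumptions, not Rapid-INR itself.

```python
# Sketch: fit an MLP to a single image as an implicit neural representation.
# Image size, layer widths, and step count are illustrative assumptions.
import torch
import torch.nn as nn

H, W = 64, 64
image = torch.rand(H, W, 3)  # stand-in for a real training image

# Coordinate grid in [-1, 1] x [-1, 1], one row per pixel
ys, xs = torch.meshgrid(
    torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij"
)
coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
targets = image.reshape(-1, 3)

inr = nn.Sequential(
    nn.Linear(2, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 3), nn.Sigmoid(),
)
opt = torch.optim.Adam(inr.parameters(), lr=1e-3)

for step in range(500):  # overfit the single image
    opt.zero_grad()
    loss = nn.functional.mse_loss(inr(coords), targets)
    loss.backward()
    opt.step()

reconstruction = inr(coords).reshape(H, W, 3)  # decode by re-querying the MLP
```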
Dynamic graph neural networks (DGNNs) are becoming increasingly popular because of their widespread use in capturing the dynamic features of the real world. A variety of networks designed from algorithmic perspectives have succeeded in incorporating temporal information into graph processing. Despite their promising performance, deploying DGNNs on hardware presents additional challenges due to model complexity, diversity, and the nature of the time dependency. Meanwhile, the differences between DGNNs and static GNNs make hardware-related...
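A minimal sketch of a snapshot-based dynamic GNN, assuming a simple normalized-adjacency propagation per snapshot and a GRU carrying node states across snapshots; the sequential loop over time also illustrates the temporal data dependency that limits parallelism mentioned above. Shapes and layers are illustrative, not a specific model from these papers.

```python
# Sketch: shared graph convolution per snapshot + GRU across snapshots.
# The A_hat @ X @ W propagation and all sizes are illustrative assumptions.
import torch
import torch.nn as nn


class SnapshotDGNN(nn.Module):
    def __init__(self, in_dim=16, hidden=32):
        super().__init__()
        self.gcn_weight = nn.Linear(in_dim, hidden, bias=False)  # spatial step
        self.gru = nn.GRUCell(hidden, hidden)                    # temporal step

    def forward(self, adj_seq, feat_seq):
        # adj_seq:  list of (N, N) normalized adjacency matrices, one per snapshot
        # feat_seq: list of (N, in_dim) node feature matrices, one per snapshot
        n_nodes = feat_seq[0].shape[0]
        state = torch.zeros(n_nodes, self.gru.hidden_size)
        for adj, feat in zip(adj_seq, feat_seq):
            spatial = torch.relu(adj @ self.gcn_weight(feat))  # aggregate neighbors
            state = self.gru(spatial, state)                   # update temporal state
        return state  # node embeddings after the last snapshot


model = SnapshotDGNN()
adjs = [torch.eye(10) for _ in range(5)]   # 5 snapshots, 10 nodes each
feats = [torch.randn(10, 16) for _ in range(5)]
print(model(adjs, feats).shape)            # torch.Size([10, 32])
```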