Circuit Representation Learning with Masked Gate Modeling and Verilog-AIG Alignment

Keywords: Representation, Verilog
DOI: 10.48550/arXiv.2502.12732 | Publication Date: 2025-02-18
ABSTRACT
Understanding the structure and function of circuits is crucial for electronic design automation (EDA). Circuits can be formulated as And-Inverter graphs (AIGs), enabling efficient implementation of representation learning through graph neural networks (GNNs). Masked modeling paradigms have been proven effective in graph representation learning. However, applying masking augmentation to original circuits destroys their logical equivalence, which makes it unsuitable for circuit representation learning. Moreover, existing masked modeling paradigms often prioritize structural information at the expense of abstract information such as circuit function. To address these limitations, we introduce MGVGA, a novel constrained masked modeling paradigm incorporating masked gate modeling (MGM) and Verilog-AIG alignment (VGA). Specifically, MGM preserves logical equivalence by masking gates in the latent space rather than in the original circuits, subsequently reconstructing the attributes of the masked gates. Meanwhile, large language models (LLMs) have demonstrated an excellent understanding of Verilog code functionality. Building upon this capability, VGA performs masking operations on original circuits and reconstructs masked gates under the constraints of equivalent Verilog codes, enabling GNNs to learn circuit functions from LLMs. We evaluate MGVGA on various logic synthesis tasks for EDA and show its superior performance compared to previous state-of-the-art methods. Our code is available at https://github.com/wuhy68/MGVGA.
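The abstract describes two pre-training objectives: MGM masks gate embeddings in the latent space (leaving the AIG itself, and hence its logical equivalence, intact) and reconstructs the masked gates' attributes, while VGA masks the original circuit and reconstructs it under the constraint of an LLM's understanding of equivalent Verilog code. The PyTorch sketch below is a minimal illustration of how such losses could be wired together; it is not the authors' implementation (see the GitHub repository for that), and the module names, dimensions, mean pooling, and cosine-similarity alignment objective are all assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MGVGASketch(nn.Module):
    """Illustrative sketch (NOT the authors' code) of the two objectives
    named in the abstract: masked gate modeling (MGM) in latent space and
    Verilog-AIG alignment (VGA) against LLM code embeddings."""

    def __init__(self, gnn, num_gate_types=4, hidden=128, llm_dim=768):
        super().__init__()
        self.gnn = gnn                      # any GNN mapping an AIG to per-gate embeddings
        self.mask_token = nn.Parameter(torch.zeros(hidden))
        self.gate_decoder = nn.Linear(hidden, num_gate_types)  # reconstructs gate attributes
        self.align_proj = nn.Linear(hidden, llm_dim)           # maps circuit emb. to LLM space

    def mgm_loss(self, aig, gate_types, mask_ratio=0.15):
        # Encode the UNMASKED circuit first, then mask in latent space:
        # the AIG itself is never edited, so logical equivalence is preserved.
        h = self.gnn(aig)                                      # [num_gates, hidden]
        mask = torch.rand(h.size(0), device=h.device) < mask_ratio
        h = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(h), h)
        logits = self.gate_decoder(h[mask])                    # predict masked gates' types
        return F.cross_entropy(logits, gate_types[mask])       # gate_types: LongTensor [num_gates]

    def vga_loss(self, masked_aig, llm_code_emb):
        # Simplified alignment stand-in for VGA: pull the embedding of a
        # masked circuit toward the LLM's embedding of logically equivalent
        # Verilog, so the GNN absorbs functional knowledge from the LLM.
        h = self.gnn(masked_aig).mean(dim=0)                   # mean pooling (assumption)
        z = self.align_proj(h)
        return 1.0 - F.cosine_similarity(z, llm_code_emb, dim=-1)
```

In a training loop one would sum the two losses over batches of AIGs paired with precomputed LLM embeddings of their equivalent Verilog; the hyperparameters above (mask ratio, hidden sizes, pooling choice) are placeholders, not values from the paper.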