This installment of the "至善芯语" integrated circuit lecture series features Prof. Nan Wu of George Washington University, who will give a talk entitled "Graph Representation Learning for Circuit Designs: Predictors, Optimizers, and Beyond". Industry colleagues and graduate students are warmly invited to attend and exchange ideas.
Lecture Information
Speaker: Nan Wu, George Washington University
Host: Qiang Xu, The Chinese University of Hong Kong
Topic: Graph Representation Learning for Circuit Designs: Predictors, Optimizers, and Beyond
Time: 10:30–12:00, Tuesday, December 17, 2024
Venue: Conference Room 402, National Center of Technology Innovation for EDA
(Building B, Chuangzhi Building, 17 Xinghuo Road, Jiangbei New Area, Nanjing)
Online: Tencent Meeting ID: 259-530-103
About the Speaker
Nan Wu
George Washington University
Nan Wu is an Assistant Professor in the ECE department at George Washington University. Her research focuses on the intersection of computer architecture, EDA, and ML, aiming to facilitate agile hardware development empowered by ML. She earned her Ph.D. in ECE from the University of California, Santa Barbara, under the supervision of Dr. Yuan Xie, and holds dual B.S. degrees in Electronic Engineering and Economics from Tsinghua University. She received the 2024 EDAA Outstanding Dissertation Award, Best Paper Awards at DAC 2023 and GLSVLSI 2021, and a Best Paper Nomination at ASAP 2022.
Abstract
As applications proliferate, the effort and complexity required to develop hardware that keeps pace with their compute demands are growing at an even faster rate. With the cadence of Moore's law already slipping, more of the burden falls on design methodology to achieve "equivalent scaling". Given that circuits can naturally be represented as graphs, this talk explores the transformative potential of graph representation learning in circuit design, including (1) fast and accurate design evaluation, (2) efficient and scalable design optimization, and (3) high-quality and productive design verification. Moving beyond conventional graph learning applications, we delve into directed graph representation learning (DGRL), benchmarking its performance and providing insights into effectively leveraging bi-directional message passing and stable positional encoding to enhance the expressiveness of DGRL models tailored for circuit graphs. The accompanying benchmark, built on a modular codebase, streamlines the evaluation process, making it accessible and valuable to both hardware and machine learning practitioners.