EXPLORING THE DESIGN SPACE: HIGH-SPEED INVESTIGATION WITH GRAPH NEURAL PROCESSES
Abstract
Adders are a crucial component of a microprocessor's datapath logic, so their design has long been at the forefront of VLSI research. Although EDA flows help designers approach an optimal adder architecture, they are not always sufficient, because the design space is enormous. Earlier studies proposed machine learning-based strategies for exploring this design space, but weak feature representations and an inefficient two-stage learning loop cause the resulting prefix adder structures to underperform. This work therefore introduces the graph neural process: a multi-branch framework that combines a variational graph autoencoder with a neural process (NP).
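To make the framework concrete, below is a minimal, self-contained PyTorch sketch of the kind of model the abstract describes: a variational-graph-autoencoder branch that embeds a prefix adder graph, feeding an NP-style decoder head that predicts quality-of-result metrics. All layer sizes, the hand-rolled GCN layer, the mean-pooling step, and the two-metric output (delay, area) are illustrative assumptions, not the paper's exact design.

```python
# Hypothetical sketch of a graph neural process surrogate for adder QoR.
# Assumptions (not from the paper): layer widths, GCN message passing,
# mean pooling, and a (delay, area) output head.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph convolution: H' = ReLU(A_hat @ H @ W), where A_hat is the
    symmetrically normalized adjacency with self-loops added."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        a = adj + torch.eye(adj.size(0))           # add self-loops
        d = a.sum(dim=1).rsqrt()                   # D^{-1/2}
        a_hat = d[:, None] * a * d[None, :]        # D^{-1/2} A D^{-1/2}
        return torch.relu(self.lin(a_hat @ x))

class GraphNeuralProcess(nn.Module):
    def __init__(self, feat_dim, hid_dim=64, lat_dim=16, out_dim=2):
        super().__init__()
        self.enc = GCNLayer(feat_dim, hid_dim)
        self.mu = nn.Linear(hid_dim, lat_dim)      # VGAE mean branch
        self.logvar = nn.Linear(hid_dim, lat_dim)  # VGAE log-variance branch
        self.head = nn.Sequential(                 # NP-style decoder head
            nn.Linear(lat_dim, hid_dim), nn.ReLU(),
            nn.Linear(hid_dim, out_dim))

    def forward(self, x, adj):
        h = self.enc(x, adj)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        g = z.mean(dim=0)                          # pool node latents -> graph latent
        return self.head(g), mu, logvar            # QoR prediction + VGAE stats

# Toy usage: an 8-node prefix-structure graph with 4 node features.
x = torch.randn(8, 4)
adj = (torch.rand(8, 8) < 0.3).float()
adj = ((adj + adj.T) > 0).float()                  # symmetrize
pred, mu, logvar = GraphNeuralProcess(feat_dim=4)(x, adj)
print(pred)                                        # surrogate (delay, area) estimate
```

A full neural process would additionally condition the decoder on a context set of already-evaluated designs; that conditioning is omitted here to keep the sketch short.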
Article Details
This work is licensed under a Creative Commons Attribution 4.0 International License.