EXPLORING THE DESIGN SPACE: HIGH-SPEED INVESTIGATION WITH GRAPH NEURAL PROCEDURES

Y. Prakashrao
Dr. Venkatesan Selvaraj
P. Jyothi Prakash Reddy
Arigela Naga Akhila

Abstract

Adders are a crucial component of a microprocessor's datapath logic, so their design has long been at the forefront of VLSI research. Although the EDA flow helps designers approach an optimal adder architecture, it is often insufficient on its own because the design space is enormous. Earlier studies proposed machine learning-based strategies for exploring this design space, but they underperform on prefix adder structures owing to weak feature representations and an inefficient two-stage learning flow. This work first presents the graph neural process (GNP), a multi-branch framework that combines a variational graph autoencoder with a neural process (NP).
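To make the framework concrete, the sketch below shows one way a graph neural process can couple the two branches: a variational graph-encoder branch that maps a prefix adder graph to a latent code (as in a variational graph autoencoder), and an NP branch that summarizes a context set of already-evaluated designs, with a decoder predicting a quality-of-results metric such as delay or power. This is a minimal illustration in PyTorch; the framework choice, module names, dimensions, and the mean-pooling readout are assumptions, not the authors' exact architecture.

import torch
import torch.nn as nn

class GraphEncoder(nn.Module):
    """One round of mean-aggregation message passing over a prefix graph."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin_self = nn.Linear(in_dim, hid_dim)
        self.lin_nbr = nn.Linear(in_dim, hid_dim)

    def forward(self, x, adj):
        # x: (n_nodes, in_dim) node features; adj: (n_nodes, n_nodes), row-normalized
        return torch.relu(self.lin_self(x) + self.lin_nbr(adj @ x))

class GraphNeuralProcess(nn.Module):
    """Variational branch: target graph -> latent Gaussian z (VGAE-style).
    NP branch: context set of (graph, QoR) pairs -> deterministic summary r.
    Decoder: [z, r] -> predicted QoR (e.g., delay or power) for the target."""
    def __init__(self, in_dim=8, hid_dim=64, z_dim=16):
        super().__init__()
        self.enc = GraphEncoder(in_dim, hid_dim)
        self.to_mu = nn.Linear(hid_dim, z_dim)
        self.to_logvar = nn.Linear(hid_dim, z_dim)
        self.ctx = nn.Linear(hid_dim + 1, z_dim)  # embeds one (graph, QoR) pair
        self.dec = nn.Sequential(nn.Linear(2 * z_dim, hid_dim),
                                 nn.ReLU(),
                                 nn.Linear(hid_dim, 1))

    def embed(self, x, adj):
        # Mean-pool node embeddings into a fixed-size graph embedding.
        return self.enc(x, adj).mean(dim=0)

    def forward(self, target, context):
        x, adj = target
        h = self.embed(x, adj)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        # Permutation-invariant NP summary of the labeled context designs.
        r = torch.stack([self.ctx(torch.cat([self.embed(cx, cadj), y]))
                         for cx, cadj, y in context]).mean(dim=0)
        return self.dec(torch.cat([z, r])), mu, logvar

# Toy usage: 4-node graphs with 8-dim node features; QoR labels are scalars.
gnp = GraphNeuralProcess()
target = (torch.randn(4, 8), torch.eye(4))
context = [(torch.randn(4, 8), torch.eye(4), torch.tensor([1.23])),
           (torch.randn(4, 8), torch.eye(4), torch.tensor([0.87]))]
pred, mu, logvar = gnp(target, context)

Trained on synthesized designs (with the usual reconstruction and KL terms for the variational branch and a regression loss on the QoR prediction), such a surrogate could rank unseen prefix adder structures before any of them are pushed through the full EDA flow.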

How to Cite
Prakashrao, Y., Selvaraj, V., Reddy, P. J. P., & Akhila, A. N. (2020). EXPLORING THE DESIGN SPACE: HIGH-SPEED INVESTIGATION WITH GRAPH NEURAL PROCEDURES. Turkish Journal of Computer and Mathematics Education (TURCOMAT), 11(2), 1249–1257. https://doi.org/10.61841/turcomat.v11i2.14534

References

[1] Q. Guo, T. Chen, Y. Chen, Z.-H. Zhou, W. Hu, and Z. Xu, "Effective and efficient microprocessor design space exploration using unlabeled design configurations," in Proc. Int. Joint Conf. Artif. Intell. (IJCAI), 2011, pp. 1671–.
[2] D. Li, S. Yao, Y.-H. Liu, S. Wang, and X.-H. Sun, "Efficient design space exploration via statistical sampling and AdaBoost learning," in Proc. ACM/IEEE Design Autom. Conf. (DAC), 2016, pp. 1–6.
[3] S. Roy, Y. Ma, J. Miao, and B. Yu, "A learning bridge from architectural synthesis to physical design for exploring power-efficient high-performance adders," in Proc. IEEE Int. Symp. Low Power Electron. Design (ISLPED), 2017, pp. 1–6.
[4] C. Lo and P. Chow, "Multi-fidelity optimization for high-level synthesis directives," in Proc. Int. Conf. Field Programmable Logic Appl. (FPL), 2018, pp. 272–279.
[5] W. Lyu, F. Yang, C. Yan, D. Zhou, and X. Zeng, "Batch Bayesian optimization via multi-objective acquisition ensemble for automated analog circuit design," in Proc. Int. Conf. Machine Learning (ICML), 2018, pp. 3312–3320.
[6] G. Murali Krishna, G. Karthick, and N. Umapathi, "Design of dynamic comparator for low-power and high-speed applications," in ICCCE 2020, A. Kumar and S. Mozar, Eds., Lecture Notes in Electrical Engineering, vol. 698. Springer, Singapore, 2021.
[7] M. Gori, G. Monfardini, and F. Scarselli, "A new model for learning in graph domains," in Proc. IJCNN, vol. 2, 2005, pp. 729–734.
[8] F. Scarselli, S. L. Yong, M. Gori, M. Hagenbuchner, A. C. Tsoi, and M. Maggini, "Graph neural networks for ranking web pages," in Proc. IEEE/WIC/ACM Int. Conf. Web Intelligence, 2005, pp. 666–672.
[9] Y. Li, D. Tarlow, M. Brockschmidt, and R. Zemel, "Gated graph sequence neural networks," in Proc. Int. Conf. Learning Representations (ICLR), 2016.
[10] J. Zhou, G. Cui, Z. Zhang, C. Yang, Z. Liu, and M. Sun, "Graph neural networks: A review of methods and applications."
[11] P. W. Battaglia, J. B. Hamrick, V. Bapst, A. Sanchez-Gonzalez, V. Zambaldi, M. Malinowski, A. Tacchetti, D. Raposo, A. Santoro, R. Faulkner, C. Gulcehre, F. Song, A. Ballard, J. Gilmer, G. Dahl, A. Vaswani, K. Allen, C. Nash, V. Langston, C. Dyer, N. Heess, D. Wierstra, P. Kohli, M. Botvinick, O. Vinyals, Y. Li, and R. Pascanu, "Relational inductive biases, deep learning, and graph networks," https://arxiv.org/abs/1806.01261, 2018.
[12] M. M. Bronstein, J. Bruna, Y. LeCun, A. Szlam, and P. Vandergheynst, "Geometric deep learning: Going beyond Euclidean data," IEEE Signal Processing Magazine, vol. 34, no. 4, pp. 18–42, 2017.
[13] E. Aksan and O. Hilliges, "STCN: Stochastic temporal convolutional networks," in Proc. Int. Conf. Learning Representations (ICLR), 2019.
[14] G. Appleby, L. Liu, and L.-P. Liu, "Kriging convolutional networks," in Proc. AAAI Conf. Artif. Intell., vol. 34, 2020, pp. 3187–3194.
[15] D. Saikrishna, N. Umapathi, and S. Mothe, "Delays in the generation of test patterns and in the selection of critical paths," Specialusis Ugdymas, vol. 2, no. 43, pp. 2986–2997, 2022.
[16] D. Chai, L. Wang, and Q. Yang, "Bike flow prediction with multi-graph convolutional networks," in Proc. Int. Conf. Advances in Geographic Information Systems (SIGSPATIAL), 2018, pp. 397–400.
[17] W. Cheng, Y. Shen, Y. Zhu, and L. Huang, "A neural attention model for urban air quality inference: Learning the weights of monitoring stations," in Proc. AAAI Conf. Artif. Intell., 2018, pp. 2151–2158.
[18] L. Srinivas and N. Umapathi, "New realization of low area and high-performance Wallace tree multipliers using Booth recoding unit," in AIP Conference Proceedings, vol. 2393, no. 1, 2022, p. 020221.
[19] Y. Ma, S. Roy, J. Miao, J. Chen, and B. Yu, "Cross-layer optimization for high speed adders: A Pareto-driven machine learning approach," IEEE Trans. Comput.-Aided Design Integr. Circuits Syst., vol. 38, no. 12, pp. 2298–, Dec. 2019.
[20] R. Prasad, N. Umapathi, and G. Karthick, "Error-tolerant computing using Booth squarer design and analysis," Specialusis Ugdymas, vol. 2, no. 43, pp. 2970–2985, 2022.
[21] C. M. Bishop and N. M. Nasrabadi, Pattern Recognition and Machine Learning. Springer, 2006.