DESIGN AND IMPLEMENTATION OF LOW AREA AND HIGH SPEED MODIFIED DLAU ARCHITECTURE ON FPGA
Main Article Content
Abstract
At present, artificial intelligence plays a prominent role in the digital world. Machine
learning and deep learning are used to solve the complex problems encountered in
artificial intelligence, and neural networks are the basic building blocks on which these
systems operate. Hence, high-speed and energy-efficient deep learning neural networks
are needed. To achieve this, a scalable deep learning accelerator unit (DLAU) is implemented
for large-scale architectures. The proposed DLAU uses a carry save adder in the
calculation process, and its performance is analysed. The experimental results show
that the proposed design achieves higher speed and better performance than other
architectures.
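The speed advantage the abstract attributes to the carry save adder comes from deferring carry propagation: three operands are reduced to a sum word and a carry word with no carry chain, and only one conventional addition is needed at the end. A minimal behavioral sketch in Python (the function name and operand values are illustrative, not from the paper's design):

```python
def carry_save_add(a: int, b: int, c: int) -> tuple[int, int]:
    """Reduce three operands to a (sum, carry) pair without carry propagation.

    Each bit position is computed independently, as a column of full adders
    would be in hardware, so no carry ripples across the word width.
    """
    partial_sum = a ^ b ^ c                      # bitwise sum, carries ignored
    carry = ((a & b) | (b & c) | (a & c)) << 1   # saved carries, shifted left
    return partial_sum, carry

# The pair collapses to the true total with a single ordinary addition:
s, c = carry_save_add(11, 7, 5)
assert s + c == 11 + 7 + 5
```

In an FPGA datapath this reduction maps to one LUT level per operand triple, which is why carry-save trees are a common way to speed up the multiply-accumulate operations that dominate neural-network workloads.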
Article Details
You are free to:
- Share — copy and redistribute the material in any medium or format for any purpose, even commercially.
- Adapt — remix, transform, and build upon the material for any purpose, even commercially.
- The licensor cannot revoke these freedoms as long as you follow the license terms.
Under the following terms:
- Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
- No additional restrictions — You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.
Notices:
You do not have to comply with the license for elements of the material in the public domain or where your use is permitted by an applicable exception or limitation.
No warranties are given. The license may not give you all of the permissions necessary for your intended use. For example, other rights such as publicity, privacy, or moral rights may limit how you use the material.