Ineffectual Neuron Free Deep Learning Accelerator
B. Nikhita Tanvi1, Maria Azees2, G. Suresh3

1B. Nikhita Tanvi, Department of Electronics and Communication Engineering, GMR Institute of Technology, GMR Nagar, Rajam, India.
2Maria Azees, Department of Electronics and Communication Engineering, GMR Institute of Technology, GMR Nagar, Rajam, India.
3G. Suresh, Department of Electronics and Communication Engineering, GMR Institute of Technology, GMR Nagar, Rajam, India.

Manuscript received on 29 June 2019 | Revised Manuscript received on 05 July 2019 | Manuscript published on 30 July 2019 | PP: 2819-2826 | Volume-8 Issue-9, July 2019 | Retrieval Number: I8664078919/19©BEIESP | DOI: 10.35940/ijitee.I8664.078919

© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: A convolutional neural network (CNN) is a deep neural network that plays an important role in image recognition. A CNN recognizes images in a manner similar to the visual cortex in the human brain. In this work, an accelerator is proposed for highly efficient convolutional computations. The main aim of the accelerator is to avoid ineffectual computations, improving performance and energy efficiency during image recognition without any loss of accuracy. Furthermore, the throughput of the accelerator is improved simply by adding a max-pooling function. Since a CNN involves multiple inputs and intermediate weights in its convolutional computations, its computational complexity grows enormously. Hence, to reduce the computational complexity of the CNN, a CNN accelerator is proposed in this paper. The accelerator design is simulated and synthesized with the Cadence RTL Compiler tool using a 90 nm technology library.
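The paper's design is RTL synthesized in hardware, but the two ideas the abstract names can be illustrated in software. The following Python/NumPy sketch is illustrative only, not the authors' implementation: it shows a convolution loop that skips ineffectual (zero-valued) multiply-accumulates, and a max-pooling stage. The function names and the 2x2 pooling window are assumptions for the example.

```python
import numpy as np

def conv2d_skip_zeros(activations, weights):
    """Valid 2-D convolution that skips multiply-accumulates whose
    input activation is zero (the 'ineffectual' computations)."""
    H, W = activations.shape
    K, _ = weights.shape
    out = np.zeros((H - K + 1, W - K + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            acc = 0.0
            for ki in range(K):
                for kj in range(K):
                    a = activations[i + ki, j + kj]
                    if a == 0.0:   # ineffectual term: skip the multiply
                        continue
                    acc += a * weights[ki, kj]
            out[i, j] = acc
    return out

def max_pool_2x2(feature_map):
    """Non-overlapping 2x2 max pooling (assumed window size)."""
    H, W = feature_map.shape
    trimmed = feature_map[:H - H % 2, :W - W % 2]
    return trimmed.reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))
```

In a hardware accelerator the same zero-skipping idea is realized by gating or bypassing multiplier units rather than by a software branch; the sketch only captures the arithmetic that is avoided.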
Keywords: Artificial Intelligence, Convolutional Neural Networks, Deep Neural Networks, Hardware Accelerator.

Scope of the Article: Artificial Intelligence