This paper is published in Volume-7, Issue-4, 2021
Area
Artificial Neural Networks
Author
Swathi N., Shreeraksha, Shruti A. P., Sparsha S. G., Farhana Kausar
Org/Univ
Atria Institute of Technology, Bengaluru, Karnataka, India
Pub. Date
02 July, 2021
Paper ID
V7I4-1145
Publisher
International Journal of Advance Research, Ideas and Innovations in Technology (IJARIIT)
Keywords
Neural Network, Overfitting, Regularization, Dropout

Citations

IEEE
Swathi N., Shreeraksha, Shruti A. P., Sparsha S. G., Farhana Kausar, "Analysis of Dropout in ANN using MNIST Dataset," International Journal of Advance Research, Ideas and Innovations in Technology, vol. 7, no. 4, 2021, www.IJARIIT.com.

APA
Swathi N., Shreeraksha, Shruti A. P., Sparsha S. G., Farhana Kausar (2021). Analysis of Dropout in ANN using MNIST Dataset. International Journal of Advance Research, Ideas and Innovations in Technology, 7(4). www.IJARIIT.com.

MLA
Swathi N., Shreeraksha, Shruti A. P., Sparsha S. G., Farhana Kausar. "Analysis of Dropout in ANN using MNIST Dataset." International Journal of Advance Research, Ideas and Innovations in Technology 7.4 (2021). www.IJARIIT.com.

Abstract

The concept of neural networks is inspired by the neurons of the human brain, and researchers have sought to build machines that imitate the same process. A Neural Network (NN) is a circuit of connected neurons; in the present-day sense, an Artificial Neural Network (ANN) is composed of artificial neurons constructed to solve artificial intelligence problems. In deep neural networks, overfitting is a severe issue. It can be caused by unbalanced datasets and poor initialization of model parameters, which make the model adhere too closely to the training data and reduce its ability to generalize to unseen data. To overcome such problems, regularization techniques are used; they modify the learning algorithm in a way that improves the model's generalization performance. Dropout is one such regularization technique for addressing the overfitting problem. During training, it randomly drops hidden units (neurons) to prevent them from co-adapting. This method significantly reduces overfitting and improves the performance of the neural network model. Dropout is especially preferred for large neural networks, where the added randomness is most beneficial.
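To illustrate the idea described in the abstract, the sketch below adds a dropout layer to a small fully connected network trained on MNIST using Keras/TensorFlow. This is a minimal illustrative example, not the authors' exact model: the layer sizes, dropout rate of 0.2, optimizer, and epoch count are assumptions chosen for demonstration.

```python
# A minimal sketch of dropout regularization on MNIST, assuming
# TensorFlow/Keras. The architecture and hyperparameters below are
# illustrative choices, not values reported in the paper.
import tensorflow as tf

# Load MNIST and scale pixel values to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation="relu"),
    # Dropout randomly zeroes 20% of the hidden activations on each
    # training step, discouraging co-adaptation of neurons; it is
    # automatically disabled at inference time.
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=5,
          validation_data=(x_test, y_test))
```

Comparing the training-set and validation-set accuracy curves of this model against the same network with the Dropout layer removed is one simple way to observe the reduction in overfitting the abstract describes.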