COMPARATIVE ANALYSIS OF REGULARIZATION METHODS IN DEEP NEURAL NETWORKS FOR OVERFITTING CONTROL
Keywords:
Overfitting, Deep Learning, Regularization, Dropout, Batch Normalization, Weight Decay, Structured Dropout
Synopsis
Purpose: Deep neural networks (DNNs) are highly expressive models, but that expressiveness makes them prone to overfitting, particularly when training data are limited relative to model complexity. This paper compares the major regularization techniques used to control overfitting.
Design/Methodology/Approach: We review seminal work on regularization methods, including dropout, weight penalties (L1/L2), batch normalization, and structured dropout variants, and evaluate their theoretical foundations and their impact on generalization.
Findings: Dropout and batch normalization consistently improve generalization across architectures, while structured dropout variants yield further gains in certain contexts.
Practical Implications: Understanding the differences between these methods guides practitioners in selecting a regularization strategy appropriate to dataset size and network depth.
Originality/Value: This comparative analysis synthesizes findings from key original research papers, offering consolidated insights for deep learning model optimization.
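
To make the compared techniques concrete, the following sketch shows how dropout, batch normalization, and L2 weight decay are typically combined in a single network. It is a minimal, hypothetical PyTorch example written for this synopsis, not code from the surveyed papers; the layer sizes, the dropout rate of 0.5, and the weight-decay coefficient of 1e-4 are illustrative assumptions.

    import torch
    import torch.nn as nn

    # Minimal illustrative classifier combining the three regularizers
    # discussed above. All hyperparameters are assumptions, not values
    # taken from the surveyed papers.
    model = nn.Sequential(
        nn.Linear(784, 256),
        nn.BatchNorm1d(256),   # batch normalization: normalizes activations per mini-batch
        nn.ReLU(),
        nn.Dropout(p=0.5),     # dropout: randomly zeroes units during training
        nn.Linear(256, 10),
    )

    # L2 weight decay (a weight penalty) is applied via the optimizer's
    # weight_decay term, which shrinks weights toward zero at each update.
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=1e-4)

    # Dropout and batch norm behave differently at train vs. eval time,
    # so the mode must be switched explicitly.
    model.train()  # dropout active; batch norm uses batch statistics
    x = torch.randn(32, 784)
    logits = model(x)
    loss = nn.functional.cross_entropy(logits, torch.randint(0, 10, (32,)))
    loss.backward()
    optimizer.step()

    model.eval()   # dropout disabled; batch norm uses running statistics

Note the division of labor: the weight penalty acts on parameters through the optimizer, while dropout and batch normalization act on activations and must be toggled between training and evaluation modes.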
License

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.