PREPRINT

This article is a preprint and has not been peer-reviewed.
Results reported in preprints should not be presented in the media as verified information.
Training Small Neural Networks on Limited Data: Regularization and Augmentation Methods
2025-12-12

Abstract

This work addresses the problem of training small neural networks with limited training data. The research focuses on applying various regularization and data augmentation methods to improve the generalization ability of models. Experiments were conducted using convolutional neural networks on datasets with a limited number of examples. Methods such as dropout, batch normalization, and weight decay, as well as various image augmentation techniques, are considered. Results show that combining regularization and augmentation methods yields a significant improvement in classification accuracy even when working with small datasets. The average accuracy improvement was about 12-15% compared to baseline models trained without these techniques.

Keywords: neural networks, machine learning, regularization, data augmentation, overfitting, small datasets
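For illustration, the sketch below shows how the techniques named in the abstract can be combined in a single training setup: dropout and batch normalization inside a small convolutional network, weight decay applied through the optimizer, and basic image augmentation in the input pipeline. This is a minimal PyTorch example under assumed settings (32x32 RGB inputs, 10 classes, specific hyperparameters), not the authors' exact configuration.

import torch
import torch.nn as nn
from torchvision import transforms

# Augmentation pipeline: random crops and horizontal flips are common choices
# for small image datasets; the parameters here are illustrative assumptions.
train_transforms = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

class SmallCNN(nn.Module):
    """Small convolutional classifier with batch normalization and dropout."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32),   # batch normalization after each convolution
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(p=0.5),    # dropout before the final dense layer
            nn.Linear(64 * 8 * 8, num_classes),  # 32x32 input -> 8x8 after two pools
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = SmallCNN()
# Weight decay (L2 regularization) is applied through the optimizer.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9,
                            weight_decay=5e-4)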

Citation:

Мусаев У. И. 2025. Training Small Neural Networks on Limited Data: Regularization and Augmentation Methods. PREPRINTS.RU. https://doi.org/10.24108/preprints-3114058
