PREPRINT
Results presented in preprints should not be reported in the media as verified information.
The efficiency of deep neural network training remains a critical challenge in modern machine learning applications. This paper proposes a novel approach to optimize neural network training through adaptive perturbation scheduling combined with automated hyperparameter tuning. We introduce a dynamic framework that adjusts perturbation magnitude based on training phase detection, leveraging insights from recent advances in perturbed equation optimization and automated code generation techniques. Our experimental results demonstrate a 23% reduction in training time and 15% improvement in model convergence stability across multiple benchmark datasets. The proposed method shows particular promise for large-scale models where traditional training approaches face diminishing returns.
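The abstract describes adjusting perturbation magnitude based on detected training phase. The paper itself is not reproduced here, so the following is only a minimal illustrative sketch of one way such a scheduler could look; the phase heuristic (loss slope over a sliding window), the class name `PerturbationScheduler`, and the per-phase scaling factors are hypothetical choices, not the authors' method.

```python
class PerturbationScheduler:
    """Sketch of phase-aware perturbation scheduling (illustrative only).

    Phase is inferred from the slope of recent losses; the thresholds
    and magnitudes below are assumptions, not values from the paper.
    """

    def __init__(self, base_sigma=0.01, window=5):
        self.base_sigma = base_sigma  # baseline perturbation magnitude
        self.window = window          # number of recent losses to track
        self.losses = []

    def update(self, loss):
        # Keep only the most recent `window` loss values.
        self.losses.append(loss)
        if len(self.losses) > self.window:
            self.losses.pop(0)

    def phase(self):
        # Crude phase detection from the average loss slope.
        if len(self.losses) < self.window:
            return "warmup"
        slope = (self.losses[-1] - self.losses[0]) / (self.window - 1)
        if slope < -0.01:
            return "rapid"    # loss still dropping quickly
        return "plateau"      # loss has flattened

    def sigma(self):
        # Larger perturbations early (to escape poor regions),
        # smaller ones near convergence (for stability).
        scale = {"warmup": 1.0, "rapid": 0.5, "plateau": 0.1}
        return scale[self.phase()] * self.base_sigma
```

In use, a training loop would call `update(loss)` each step and draw its parameter noise with magnitude `sigma()`; the reported gains presumably come from tuning the actual phase detector and schedule, which this sketch does not attempt to reproduce.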
Satybaldiev N. 2025. Adaptive Training Optimization for Deep Neural Networks Using Dynamic Perturbation Scheduling. PREPRINTS.RU. https://doi.org/10.24108/preprints-3114052