PREPRINT
Results reported in preprints should not be presented in the media as verified information.
This paper reviews current approaches to improving machine learning models, with a focus on perturbed equations and ablation techniques. We survey the work of Usupova and Khan (2025) and of Rakimbekuulu et al. (2024), and then propose a broader framework that applies these ideas to accelerate neural network training. We also discuss methods for automating the search for effective architectures and training hyperparameters. Our hybrid approach combines perturbation steps with ablation checks to improve model stability and reduce computational cost. Overall, this work offers practical guidance on integrating these tools into deep learning workflows.
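To make the combination of perturbation steps and ablation checks concrete, the following is a minimal illustrative sketch, not the authors' actual algorithm: a toy linear model is trained with noise-perturbed gradient updates, and a simple ablation pass then zeroes out each weight in turn to measure how much each contributes. All names, constants, and the choice of Gaussian gradient noise here are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: linear target where the third feature is irrelevant,
# so ablating its weight should barely change the loss.
X = rng.normal(size=(200, 4))
true_w = np.array([1.5, -2.0, 0.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=200)

def loss(w):
    """Mean squared error of the linear model with weights w."""
    return float(np.mean((X @ w - y) ** 2))

# Perturbation step: add small Gaussian noise to each gradient
# update (an assumed, commonly used form of perturbed training).
w = np.zeros(4)
lr, noise_scale = 0.05, 0.01
for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(y)
    w -= lr * (grad + noise_scale * rng.normal(size=4))

# Ablation check: zero out one weight at a time and record the
# loss increase; a near-zero increase flags a removable component.
base = loss(w)
deltas = []
for i in range(4):
    w_ablated = w.copy()
    w_ablated[i] = 0.0
    deltas.append(loss(w_ablated) - base)
    print(f"weight {i}: delta-loss = {deltas[i]:.4f}")
```

In this sketch the ablation pass should report a much smaller loss increase for the irrelevant third weight than for the others, which is the signal such a check uses to simplify a model and reduce compute.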
Манляза К. А., Маасалиев Э. Ж. 2025. Optimizing Neural Network Training Using Perturbed Equations and Ablation Techniques. PREPRINTS.RU. https://doi.org/10.24108/preprints-3114055