2025 Innovations in Intelligent Systems and Applications Conference, ASYU 2025, Bursa, Türkiye, 10–12 September 2025 (Full-Text Paper)
Deep neural networks are typically trained with gradient-based optimization algorithms, with the goal of reaching a minimum in the parameter space. In practice, however, the resulting solutions often converge to local rather than global minima. Recent studies have shown that non-linear interpolation paths between independently trained models in the same parameter space can preserve test performance. This study goes a step further and demonstrates that it is possible to find even better-performing points (models) along such interpolation curves. Experiments across several architectures and datasets show that Bézier curve interpolation can yield solutions with lower loss, especially near the endpoints of the curve, outperforming the original trained models at those points. Furthermore, iterative application of the proposed approach is shown to lead to progressively better models. These findings suggest that geometric search strategies defined by parametric curves may complement traditional gradient-based methods in neural network optimization, highlighting a promising direction for future research.
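To make the curve search concrete: given two trained parameter vectors θ0 and θ1 and a control point θc, a quadratic Bézier path is B(t) = (1−t)²·θ0 + 2(1−t)t·θc + t²·θ1 for t ∈ [0, 1], and the search amounts to evaluating the loss at sampled values of t. The following is a minimal NumPy sketch of that idea on a synthetic two-basin loss; the endpoint vectors, the control point, and the loss function are illustrative placeholders, not the architectures, datasets, or control-point selection used in the paper.

```python
import numpy as np

def bezier_point(theta0, theta_c, theta1, t):
    """Quadratic Bezier point in parameter space at t in [0, 1]."""
    return (1 - t) ** 2 * theta0 + 2 * (1 - t) * t * theta_c + t ** 2 * theta1

def toy_loss(theta):
    """Placeholder loss with two basins (minima near +1 and -1 vectors);
    stands in for the test loss of an actual network."""
    return np.minimum(np.sum((theta - 1.0) ** 2),
                      np.sum((theta + 1.0) ** 2) + 0.1)

rng = np.random.default_rng(0)
dim = 10

# Hypothetical stand-ins for two independently trained models. Each is
# placed slightly off its basin minimum, mimicking imperfect convergence,
# so the curve can pass through lower-loss points near t=0 and t=1.
theta0 = 1.3 + 0.05 * rng.standard_normal(dim)
theta1 = -1.3 + 0.05 * rng.standard_normal(dim)

# A simple control-point choice: perturbed midpoint. In the paper's setting
# the control point would be chosen so the curve stays in low-loss regions;
# here it is fixed for illustration.
theta_c = 0.5 * (theta0 + theta1) + 0.1 * rng.standard_normal(dim)

# Dense sweep over t and a search for the lowest-loss point on the curve.
ts = np.linspace(0.0, 1.0, 201)
losses = np.array([toy_loss(bezier_point(theta0, theta_c, theta1, t))
                   for t in ts])
best = np.argmin(losses)

print(f"endpoint losses: {toy_loss(theta0):.4f} (t=0), {toy_loss(theta1):.4f} (t=1)")
print(f"best on curve:   {losses[best]:.4f} at t={ts[best]:.3f}")
```

Iterating the procedure, as the abstract describes, would replace an endpoint with the best point found on the curve and repeat the search along a fresh curve.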