IEEE Access, vol. 14, pp. 27675-27689, 2026 (SCI-Expanded, Scopus)
This paper presents a novel, vision-based autonomous landing system for fixed-wing Unmanned Aerial Vehicles (UAVs), designed primarily for GPS-denied environments and built on visual data and optimal control approaches. First, the algorithm uses Visual Simultaneous Localization and Mapping (vSLAM) to localize the UAV and build a map of its surroundings. A key innovation is a Singular Value Decomposition (SVD)-aided Kalman filter within vSLAM, which significantly improves the accuracy and efficiency of map point updates by reducing noise. The system then delineates the landing area by applying image segmentation and the Watershed Transform to real-time vSLAM data, identifying key landmarks. This visual information feeds a linearized Model Predictive Controller (MPC), which computes the optimal control inputs (longitudinal acceleration, yaw rate, and vertical velocity) to guide the UAV along the landing trajectory. Both MPC and a Linear Quadratic Regulator (LQR) were tested and demonstrated successful guidance capabilities; simulation results confirm that the integrated vSLAM-MPC architecture precisely guides the UAV to the landing zone.
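To make the SVD-aided Kalman filter step concrete, the following is a minimal sketch of a measurement update in which an SVD-based pseudo-inverse replaces the direct inversion of the innovation covariance, truncating noise-dominated singular values. The paper's exact formulation may differ; all variable names and the tolerance here are illustrative assumptions.

```python
import numpy as np

def svd_aided_kalman_update(x, P, z, H, R, tol=1e-9):
    """Kalman measurement update using an SVD-based pseudo-inverse
    of the innovation covariance (a common numerically robust variant;
    not necessarily the paper's exact SVD formulation)."""
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    # SVD-based inverse: singular values below the tolerance are
    # truncated, suppressing noise-dominated directions.
    U, s, Vt = np.linalg.svd(S)
    s_inv = np.where(s > tol * s.max(), 1.0 / s, 0.0)
    S_inv = Vt.T @ np.diag(s_inv) @ U.T
    K = P @ H.T @ S_inv                 # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```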
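For the landing-area segmentation step, a hedged illustration using the standard OpenCV marker-based watershed recipe follows. How the paper derives markers from vSLAM data is not specified here, so the thresholding and morphology below are assumed preprocessing, and the input file name is a placeholder.

```python
import cv2
import numpy as np

# Illustrative watershed-based landing-zone segmentation
# (assumed preprocessing; the paper's pipeline may differ).
img = cv2.imread("frame.png")           # placeholder camera frame
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Estimate sure background/foreground via morphology and a distance transform.
kernel = np.ones((3, 3), np.uint8)
opening = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel, iterations=2)
sure_bg = cv2.dilate(opening, kernel, iterations=3)
dist = cv2.distanceTransform(opening, cv2.DIST_L2, 5)
_, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, 0)
sure_fg = sure_fg.astype(np.uint8)
unknown = cv2.subtract(sure_bg, sure_fg)

# Seed markers and run watershed; region boundaries are labeled -1.
_, markers = cv2.connectedComponents(sure_fg)
markers = markers + 1
markers[unknown == 255] = 0
markers = cv2.watershed(img, markers)
```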
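Likewise, a minimal receding-horizon sketch of a linearized MPC over the three stated inputs, posed as a quadratic program with cvxpy. The model matrices, weights, limits, and reference values are placeholders, not the paper's identified dynamics.

```python
import numpy as np
import cvxpy as cp

# Hypothetical linearized discrete-time model x_{k+1} = A x_k + B u_k,
# state = [px, py, pz, v, psi], input = [accel, yaw_rate, v_z].
dt, v0, psi0 = 0.1, 20.0, 0.0
A = np.eye(5)
A[0, 3] = dt * np.cos(psi0); A[0, 4] = -dt * v0 * np.sin(psi0)
A[1, 3] = dt * np.sin(psi0); A[1, 4] =  dt * v0 * np.cos(psi0)
B = np.zeros((5, 3))
B[3, 0] = dt       # longitudinal acceleration -> airspeed
B[4, 1] = dt       # yaw rate -> heading
B[2, 2] = dt       # vertical velocity -> altitude

N = 20                                   # prediction horizon
Q = np.diag([1.0, 1.0, 2.0, 0.1, 0.5])   # state tracking weights
R = np.diag([0.1, 0.5, 0.2])             # input effort weights

x = cp.Variable((5, N + 1))
u = cp.Variable((3, N))
x0 = np.array([0.0, 0.0, 100.0, v0, psi0])      # current estimate (from vSLAM)
x_ref = np.array([400.0, 0.0, 0.0, 15.0, 0.0])  # touchdown target (illustrative)

cost, constr = 0, [x[:, 0] == x0]
for k in range(N):
    cost += cp.quad_form(x[:, k + 1] - x_ref, Q) + cp.quad_form(u[:, k], R)
    constr += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
               cp.abs(u[0, k]) <= 3.0,   # accel limit (m/s^2)
               cp.abs(u[1, k]) <= 0.3,   # yaw-rate limit (rad/s)
               cp.abs(u[2, k]) <= 5.0]   # sink-rate limit (m/s)

cp.Problem(cp.Minimize(cost), constr).solve()
print("first optimal input:", u.value[:, 0])  # applied in receding-horizon fashion
```

Only the first optimal input is applied each cycle, and the problem is re-solved at the next vSLAM state estimate, which is the standard receding-horizon pattern.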