Domain Transform Filter and Spatial-Aware Collaborative Representation for Hyperspectral Image Classification Using Few Labeled Samples


KARACA A. C.

IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, vol.18, no.7, pp.1264-1268, 2021 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 18 Issue: 7
  • Publication Date: 2021
  • DOI: 10.1109/lgrs.2020.2998605
  • Journal Name: IEEE GEOSCIENCE AND REMOTE SENSING LETTERS
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Aerospace Database, Aquatic Science & Fisheries Abstracts (ASFA), Communication Abstracts, Compendex, Geobase, INSPEC, Metadex, Civil Engineering Abstracts
  • Page Numbers: pp.1264-1268
  • Keywords: Transforms, Training, Kernel, Collaboration, Hyperspectral imaging, Image edge detection, Collaborative representation (CR), domain transform filter (DF), hyperspectral image (HSI), spectral-spatial classification, FEATURE-EXTRACTION
  • Yıldız Technical University Affiliated: No

Abstract

Spectral-spatial classification methods based on filtering and collaborative representation (CR) are increasingly popular for hyperspectral image classification. However, some of them perform poorly when the number of training samples is limited. To overcome this problem, a classification method (DF+SACR) that combines a domain transform filter (DF) with spatial-aware CR (SACR) is proposed in this letter. The proposed method first reduces the dimensionality of the data. Next, the DF, an edge-aware smoothing method, is applied to the reduced data set. In the final step, the output of the DF is classified using an SACR method. Additionally, a faster variant of DF+SACR that operates on superpixels instead of pixels is proposed as a second method. Experiments on the Indian Pines, Pavia University, University of Houston, and Salinas data sets demonstrate that the proposed methods not only improve the performance of the SACR method but also achieve higher classification accuracies and lower computation times than state-of-the-art methods with limited training samples.
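The final classification step builds on collaborative representation: a test spectrum is coded over the whole training dictionary with a ridge penalty, and the label is assigned by the smallest class-wise reconstruction residual. The sketch below shows plain CR only; the paper's SACR additionally incorporates spatial weighting, which is not reproduced here. All names (`cr_classify`, `lam`) and the synthetic data are illustrative assumptions, not the authors' code.

```python
import numpy as np

def cr_classify(y, X, labels, lam=1e-3):
    """Classify one test pixel by (plain) collaborative representation.

    y      : (d,) test spectrum
    X      : (d, n) training dictionary, one sample per column
    labels : (n,) class label for each column of X
    lam    : ridge regularization weight (illustrative default)
    """
    # Ridge-regularized coding: alpha = (X^T X + lam*I)^{-1} X^T y
    G = X.T @ X + lam * np.eye(X.shape[1])
    alpha = np.linalg.solve(G, X.T @ y)

    # Assign the class whose coefficients reconstruct y with least residual
    best_class, best_resid = None, np.inf
    for c in np.unique(labels):
        mask = labels == c
        resid = np.linalg.norm(y - X[:, mask] @ alpha[mask])
        if resid < best_resid:
            best_class, best_resid = c, resid
    return best_class
```

Because the code uses all training samples jointly (rather than a per-class sparse code), the only linear solve is a single n-by-n system, which is what makes CR-based classifiers fast at small training sizes.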