Anomaly localization in regular textures based on deep convolutional generative adversarial networks


Öz M. A. N., Mercimek M., Kaymakçı Ö. T.

Applied Intelligence, vol. 52, no. 2, pp. 1556-1565, 2022 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 52 Issue: 2
  • Publication Date: 2022
  • DOI: 10.1007/s10489-021-02475-3
  • Journal Name: Applied Intelligence
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, PASCAL, ABI/INFORM, Applied Science & Technology Source, Compendex, Computer & Applied Sciences, Educational Research Abstracts (ERA), INSPEC, Library, Information Science & Technology Abstracts (LISTA), zbMATH
  • Page Numbers: pp. 1556-1565
  • Keywords: Deep learning, Generative adversarial network, Machine vision, Anomaly detection, DEFECT DETECTION, CLASSIFICATION, FRAMEWORK
  • Affiliated with Yıldız Teknik Üniversitesi: Yes

Abstract

© 2021, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.

Pixel-level anomaly localization is a challenging problem due to the lack of abnormal training samples. Existing adversarial network methods attempt to segment anomalies by reconstructing the image and then comparing the reconstruction with the original. However, reconstructing an image with adversarial networks involves complex training procedures and results in long run-times. This paper proposes a simpler and more intuitive anomaly localization approach based on generative adversarial networks (GAN) for regular textured images. In the proposed method, a discriminator network generates an anomaly map and is trained against a generator network that produces imitations of anomalous samples. To lower computational costs, strided convolutions are used in the discriminator network to produce an anomaly map for pixel blocks instead of individual pixels. A discriminator trained in the proposed scheme gains the ability to segment the anomalies in images. The experimental results show that the performance of the proposed method is almost equivalent to that of state-of-the-art methods; moreover, thanks to its low-cost training phase, it is faster and simpler to implement.
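The block-level anomaly map described above follows from how strided convolutions shrink spatial resolution: after L layers of stride-2 convolutions, each output cell covers roughly a (2^L × 2^L) block of input pixels, so the discriminator scores blocks rather than individual pixels. The following is a minimal NumPy sketch of this receptive-field arithmetic only; it is not the authors' network, and the kernel sizes, random weights, and ReLU activation are illustrative assumptions.

```python
import numpy as np

def strided_conv(x, w, stride=2):
    """Valid 2-D strided convolution of a single-channel map x with kernel w."""
    k = w.shape[0]
    out_h = (x.shape[0] - k) // stride + 1
    out_w = (x.shape[1] - k) // stride + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = x[i * stride:i * stride + k, j * stride:j * stride + k]
            out[i, j] = np.sum(patch * w)
    return out

def block_anomaly_map(image, kernels, stride=2):
    """Stack strided convolutions (with ReLU) so each output cell
    scores a block of input pixels instead of a single pixel.
    With L stride-2 layers, one cell covers about a (2**L x 2**L) block."""
    x = image
    for w in kernels:
        x = np.maximum(strided_conv(x, w, stride), 0.0)  # ReLU nonlinearity
    return x

rng = np.random.default_rng(0)
image = rng.standard_normal((64, 64))          # illustrative 64x64 texture patch
kernels = [rng.standard_normal((2, 2)) * 0.5   # illustrative random 2x2 kernels
           for _ in range(3)]
amap = block_anomaly_map(image, kernels)
print(amap.shape)  # (8, 8): each cell corresponds to an 8x8 pixel block
```

Because the map is produced in a single forward pass, localization avoids the per-image reconstruction step that makes reconstruction-based GAN methods slow at inference time.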