Video scene detection using graph-based representations


Sakarya U., Telatar Z.

SIGNAL PROCESSING-IMAGE COMMUNICATION, vol.25, no.10, pp.774-783, 2010 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 25 Issue: 10
  • Publication Date: 2010
  • DOI: 10.1016/j.image.2010.10.001
  • Journal Name: SIGNAL PROCESSING-IMAGE COMMUNICATION
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus
  • Page Numbers: pp.774-783
  • Yıldız Teknik Üniversitesi Affiliated: No

Abstract

One of the fundamental steps in organizing videos is to parse them into smaller descriptive parts. One way of realizing this step is to obtain shot or scene information. A video scene is constructed from one or more consecutive, semantically correlated shots sharing the same content. Video scenes differ from shots in their boundary definitions: video scenes have semantic boundaries, whereas shots are defined by physical boundaries. In this paper, we concentrate on developing a video scene detection method that is both fast and well-performing. Our graph-partition-based video scene boundary detection approach, which uses multiple features extracted from the video, determines the video scene boundaries through an unsupervised clustering procedure. For each shot-to-shot comparison feature, a one-dimensional signal is constructed from graph partitions obtained from the similarity matrix within a temporal interval. After each one-dimensional signal is filtered, unsupervised clustering is conducted to find video scene boundaries. We adopt two different graph-based approaches in a single framework in order to find video scene boundaries. The proposed graph-based video scene boundary detection method is evaluated and compared with a graph-based video scene detection method presented in the literature. (C) 2010 Elsevier B.V. All rights reserved.
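
The abstract only outlines the pipeline (per-feature similarity matrices, graph partitioning over a temporal window, 1-D signal filtering, unsupervised clustering), so the following is a minimal sketch of that general idea, not the authors' actual method. It assumes precomputed shot-to-shot similarity matrices (one per feature), uses a simple graph-cut cost as the partition score, mean filtering as the 1-D signal filter, and k-means as the unsupervised clustering step; all function names and parameters are hypothetical.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d
from sklearn.cluster import KMeans

def cut_cost(sim, boundary):
    """Cost of cutting a local similarity (sub)matrix at `boundary`:
    total similarity between the shots before and after the split."""
    return sim[:boundary, boundary:].sum()

def boundary_signal(similarity, window=8):
    """Build a 1-D signal over candidate shot boundaries.

    For each shot index i, take the shots inside a temporal window around i
    and compute the cut cost of splitting that window at i. A low cut cost
    means the two halves are weakly connected, i.e. a likely scene boundary."""
    n = similarity.shape[0]
    signal = np.zeros(n)
    for i in range(1, n):
        lo, hi = max(0, i - window), min(n, i + window)
        signal[i] = cut_cost(similarity[lo:hi, lo:hi], i - lo)
    return signal

def detect_scene_boundaries(feature_similarities, window=8, smooth=5):
    """Hypothetical pipeline: one smoothed 1-D signal per feature, fusion,
    then 2-class clustering to separate boundary from non-boundary shots."""
    signals = [uniform_filter1d(boundary_signal(s, window), size=smooth)
               for s in feature_similarities]
    combined = np.mean(signals, axis=0)           # fuse per-feature signals
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(combined.reshape(-1, 1))
    boundary_label = labels[np.argmin(combined)]  # cluster containing the lowest cut cost
    return np.where(labels == boundary_label)[0]
```

In this sketch the per-feature signals are simply averaged before clustering; the paper's framework combines multiple features and two graph-based approaches, the details of which are not given in the abstract.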