FISC-Net: Classification of Auscultation Sounds on an Edge Device


Mutlu E., Hüseyin V., Serbes G.

1st International Conference of Intelligent Methods, Systems and Applications, IMSA 2023, Giza, Egypt, 15-16 July 2023, pp. 174-180

  • Publication Type: Conference Paper / Full Text
  • DOI: 10.1109/imsa58542.2023.10217510
  • Keywords: auscultation, bowel, convolutional neural networks, edge device, heart, lung, Mel-spectrogram, mobile-net, tiny machine learning
  • Yıldız Technical University Affiliated: Yes

Abstract

Auscultation is widely used to detect organ-specific conditions such as heart, lung, and gastrointestinal diseases. However, traditional analog auscultation has significant disadvantages: it depends on the experience of the physician, and the analysis of the perceived sounds is not repeatable. It is therefore important to record organ sounds digitally and analyze them automatically in order to overcome these limitations. To analyze body sounds properly with optimum system settings, the source organ must first be identified using machine learning approaches. The proposed system differs from other body-sound analysis pipelines in its use of the TinyML (Tiny Machine Learning) concept for classifying organ sounds. In this study, a Raspberry Pi 4 was chosen as the edge device for its high computational capability and its compatibility with the Python language, which simplifies running deep learning algorithms with the TensorFlow library. Convolutional Neural Networks (CNNs) were used to determine the source organ of the recorded sounds. The recorded sounds were imported into the embedded system, and classification was performed with two different approaches. In the first approach, the frequency and time characteristics of each signal were captured in two-dimensional form by extracting its mel-spectrogram representation, and a two-dimensional (2D) CNN was then applied to these time-frequency images. In the second approach, pre-trained CNN models were applied to the same images, and their results were used for comparison. Finally, the accuracy of all CNN models on the test set is reported using the relevant performance metrics. According to the obtained results, the proposed model architecture achieves almost perfect classification of organ sounds at the lowest computational cost, making it a strong option for edge-device applications.
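The feature-extraction step described in the abstract (converting a recorded organ sound into a 2D mel-spectrogram image for the CNN) can be sketched in plain NumPy as follows. This is an illustrative sketch only: the sample rate, FFT size, hop length, and number of mel bands below are assumptions, not values taken from the paper, and the authors' actual pipeline (run with TensorFlow on the Raspberry Pi 4) may differ in detail.

```python
import numpy as np

def hz_to_mel(f):
    # HTK-style mel scale
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_spectrogram(signal, sr=4000, n_fft=256, hop=128, n_mels=32):
    """Log mel-spectrogram: the 2D 'image' a CNN classifier would consume.
    All parameter values here are illustrative assumptions."""
    # Frame the signal and apply a Hann window
    n_frames = 1 + (len(signal) - n_fft) // hop
    window = np.hanning(n_fft)
    frames = np.stack([signal[i * hop : i * hop + n_fft] * window
                       for i in range(n_frames)])
    # Power spectrum of each frame
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    # Triangular mel filterbank, equally spaced on the mel scale
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        left, center, right = bins[m - 1], bins[m], bins[m + 1]
        for k in range(left, center):
            fbank[m - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fbank[m - 1, k] = (right - k) / max(right - center, 1)
    # Log-compressed mel energies: shape (time frames, mel bands)
    return np.log(power @ fbank.T + 1e-10)

# Example: one second of synthetic audio standing in for a recording
sig = np.sin(2 * np.pi * 440 * np.arange(4000) / 4000)
spec = mel_spectrogram(sig)  # 2D array, one row per analysis frame
```

In a deployment like the one the paper describes, an array of this shape would be fed to a small 2D CNN (or a pre-trained model) for organ classification; on-device libraries such as librosa or TensorFlow provide equivalent, optimized mel-spectrogram routines.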