TRAITEMENT DU SIGNAL, cilt.41, sa.5, ss.2471-2483, 2024 (SCI-Expanded)
Traditional auscultation is used to identify certain pathological conditions of the internal organs from cardiac, pulmonary, and intestinal sounds. However, this method relies heavily on the experience of the physician, which leads to subjective, non-repeatable diagnoses. To address this limitation, automated analysis can be performed on digitally recorded organ sounds. The proposed system employs a convolutional neural network (CNN) model to determine which organ is being auscultated and then applies digital filtering to the recorded raw signals based on the organ-specific frequency range. The resulting de-noised signals can also be transmitted to other smart devices via Bluetooth for further analysis. All data acquisition, signal processing, and learning steps were carried out on an embedded system, a Raspberry Pi 4 board. For organ determination, the CNN input is obtained from the raw digital signals in the form of Mel-spectrograms computed with the short-time Fourier transform (STFT). The resulting time-frequency representations were fed into several pre-trained CNN architectures, whose performance was compared with that of a new CNN model derived from FISC-Net. Tiny machine learning was employed to enable real-time, low-power auscultation analysis on a portable, cost-efficient device, ensuring immediate feedback and enhanced patient privacy. The results showed that FISC-Netv1 surpassed the pre-trained models by achieving a 90% accuracy rate, demonstrating the effectiveness of the proposed system. Furthermore, quantization-aware training reduced the learning model size by 4x without significantly compromising performance.
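As a rough illustration of the Mel-spectrogram feature-extraction step described in the abstract, the sketch below converts a recorded auscultation signal into a log-scaled time-frequency image via the STFT. The library (librosa) and all parameter values (sampling rate, FFT size, hop length, number of Mel bands) are assumptions for illustration only and are not taken from the paper.

```python
import numpy as np
import librosa

def mel_spectrogram(wav_path, sr=4000, n_fft=256, hop_length=64, n_mels=64):
    """Convert a recorded auscultation signal into a log-scaled
    Mel-spectrogram via the short-time Fourier transform (STFT)."""
    # Body sounds occupy low frequencies, so a modest sampling rate
    # (assumed here, not specified in the paper) is usually sufficient.
    signal, sr = librosa.load(wav_path, sr=sr, mono=True)

    # STFT magnitude -> Mel filter bank -> dB scale: the time-frequency
    # image that is fed to the CNN classifier.
    mel = librosa.feature.melspectrogram(
        y=signal, sr=sr, n_fft=n_fft, hop_length=hop_length, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)
```

Similarly, the quantization-aware training and on-device deployment step could look roughly like the following, assuming a TensorFlow/TFLite workflow with the Model Optimization Toolkit; the placeholder network, input shape, and training settings are hypothetical and do not reproduce the FISC-Netv1 architecture.

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Small placeholder CNN over 64x128 Mel-spectrogram patches with three
# organ classes (cardiac, pulmonary, intestinal); the real FISC-Netv1
# topology is not reproduced here.
base_model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 128, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(3, activation="softmax"),
])

# Wrap the model so fake-quantization nodes are inserted during training,
# then fine-tune on the spectrogram dataset (fitting omitted here).
q_model = tfmot.quantization.keras.quantize_model(base_model)
q_model.compile(optimizer="adam",
                loss="sparse_categorical_crossentropy",
                metrics=["accuracy"])
# q_model.fit(train_ds, validation_data=val_ds, epochs=5)

# Convert to an 8-bit TFLite model, roughly a 4x size reduction versus
# 32-bit float weights, for deployment on the Raspberry Pi 4.
converter = tf.lite.TFLiteConverter.from_keras_model(q_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
with open("fisc_netv1_int8.tflite", "wb") as f:
    f.write(tflite_model)
```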