Laboratory Investigation, vol. 104, no. 10, 2024 (SCI-Expanded)
In digital pathology, accurate mitosis detection in histopathological images is critical for cancer diagnosis and prognosis. However, it remains challenging due to the inherent variability in cell morphology and the domain shift problem. This study introduces ConvNeXt Mitosis Identification-You Only Look Once (CNMI-YOLO), a new two-stage deep learning method that uses the YOLOv7 architecture for cell detection and the ConvNeXt architecture for cell classification, with the goal of improving the identification of mitotic figures across different cancer types. We utilized the Mitosis Domain Generalization Challenge 2022 data set in the experiments to ensure the model's robustness across various scanners, species, and cancer types. The CNMI-YOLO model accurately detects mitotic cells, outperforming existing models in precision, recall, and F1 score. It achieved an F1 score of 0.795 on the Mitosis Domain Generalization Challenge 2022 and demonstrated robust generalization with F1 scores of 0.783 and 0.759 on the external melanoma and sarcoma test sets, respectively. Additionally, the study included ablation studies evaluating various object detection and classification models, such as Faster R-CNN and Swin Transformer. Furthermore, we assessed the model's robustness on unseen data, using soft tissue sarcoma and melanoma samples not included in the training data set, confirming its ability to generalize and its potential for real-world use in digital pathology.
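The pipeline described above is a two-stage detect-then-classify design: a detector first proposes candidate cells, and a ConvNeXt classifier then scores each candidate as mitotic or not. The following is a minimal sketch of that idea, not the authors' implementation: the detector stage is left as a placeholder (standing in for a trained YOLOv7 model), and the classifier is assumed to be a ConvNeXt backbone from the `timm` library.

```python
# Hypothetical sketch of a two-stage detect-then-classify mitosis pipeline.
# Stage 1 (detection) is a placeholder; stage 2 uses a ConvNeXt classifier
# from `timm` (an assumption for illustration, not the authors' code).
import torch
import timm
from PIL import Image
from torchvision import transforms


def detect_candidates(image: Image.Image):
    """Placeholder for stage 1 (e.g., a trained YOLOv7 detector) returning
    candidate cell boxes as (left, top, right, bottom) pixel coordinates."""
    raise NotImplementedError("Plug in a trained detector here.")


# Stage 2: binary mitosis vs. non-mitosis classifier applied to cropped candidates.
classifier = timm.create_model("convnext_tiny", pretrained=True, num_classes=2)
classifier.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])


@torch.no_grad()
def classify_candidates(image: Image.Image, boxes):
    """Crop each candidate box and score it with the ConvNeXt classifier."""
    results = []
    for box in boxes:
        crop = preprocess(image.crop(box)).unsqueeze(0)
        prob_mitosis = torch.softmax(classifier(crop), dim=1)[0, 1].item()
        results.append((box, prob_mitosis))
    return results
```

In this kind of setup, the detector is typically tuned for high recall so that few mitotic figures are missed, and the second-stage classifier filters false positives, which is consistent with the precision and recall gains reported above.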