CANCERS, vol.14, no.19, 2022 (SCI-Expanded)
Simple Summary: Microscopy is central to many areas of biomedical research, including cancer research, and is critical for understanding basic pathophysiology, mechanisms of action, and treatment response. However, analysis of the numerous images generated from microscopy readouts is usually performed manually, a process that is tedious and time-consuming. Moreover, manual analysis of microscopy images may limit both accuracy and reproducibility. Here, we used an artificial intelligence approach to analyze tunneling nanotubes (TNTs), a feature of cancer cells that may contribute to their aggressiveness but which are hard to identify and count. Our approach labeled and detected TNTs and cancer cells in microscopy images and generated TNT-to-cell ratios comparable to those of human experts. Continued refinement of this process will provide a new approach to the analysis of TNTs. Additionally, this approach has the potential to enhance drug screens intended to assess the therapeutic efficacy of experimental agents and to reproducibly assess TNTs as a potential biomarker of response to cancer therapy.

Background: Tunneling nanotubes (TNTs) are cellular structures that connect cell membranes and mediate intercellular communication. TNTs are manually identified and counted by a trained investigator; however, this process is time-intensive. We therefore sought to develop an automated approach for quantitative analysis of TNTs.

Methods: We used a convolutional neural network (U-Net) deep learning model to segment phase contrast microscopy images of both cancer and non-cancer cells. Our method comprised preprocessing and model development. We developed a new preprocessing method to label TNTs on a pixel-wise basis, and we employed two sequential models to detect them. First, a classification algorithm identified the regions of each image containing TNTs. Second, the image regions classified as TNT-containing were fed into a modified U-Net model to estimate TNTs on a pixel-wise basis.

Results: The algorithm detected 49.9% of human expert-identified TNTs, counted TNTs, and calculated the number of TNTs per cell, or TNT-to-cell ratio (TCR); it also detected TNTs that the experts had not originally identified. The model achieved 0.41 precision, 0.26 recall, and a 0.32 F1 score on a test dataset. The predicted and true TCRs were not significantly different across the training and test datasets (p = 0.78).

Conclusions: Our automated approach labeled and detected TNTs and cells imaged in culture, yielding TCRs comparable to those determined by human experts. Future studies will aim to improve the accuracy, precision, and recall of the algorithm.
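The two-stage pipeline described in the Methods (region classification followed by pixel-wise U-Net segmentation) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the tile size is an assumption, and `classify_tile` and `segment_tile` are trivial placeholders standing in for the trained classifier and the modified U-Net.

```python
import numpy as np

TILE = 64  # hypothetical tile size; the abstract does not specify one


def classify_tile(tile):
    # Placeholder for the first-stage classifier that flags
    # TNT-containing regions; a simple intensity threshold here.
    return tile.mean() > 0.5


def segment_tile(tile):
    # Placeholder for the modified U-Net producing a pixel-wise
    # TNT mask for a tile.
    return (tile > 0.5).astype(np.uint8)


def detect_tnts(image):
    """Two-stage detection: classify tiles, then segment only the
    tiles flagged as TNT-containing, assembling a full-image mask."""
    mask = np.zeros(image.shape, dtype=np.uint8)
    h, w = image.shape
    for y in range(0, h, TILE):
        for x in range(0, w, TILE):
            tile = image[y:y + TILE, x:x + TILE]
            if classify_tile(tile):
                mask[y:y + TILE, x:x + TILE] = segment_tile(tile)
    return mask
```

Restricting the expensive segmentation step to tiles the classifier flags is what makes the two-stage design cheaper than segmenting every pixel of every image.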
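The evaluation metrics reported in the Results are standard quantities; a short sketch (not the authors' code) shows how they relate. Note that the reported precision (0.41) and recall (0.26) reproduce the reported F1 score of 0.32 via the harmonic mean.

```python
def f1_score(precision, recall):
    """F1 score: harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)


def tnt_to_cell_ratio(n_tnts, n_cells):
    """TNT-to-cell ratio (TCR): TNTs counted per cell in an image."""
    return n_tnts / n_cells


# Reported test-set metrics: precision 0.41, recall 0.26.
print(round(f1_score(0.41, 0.26), 2))  # 0.32, as reported
```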