Enhancing GI Cancer Radiation Therapy: Advanced Organ Segmentation with ResECA-U-Net Model

S. M. Nuruzzaman Nobel, Omar Faruque Sifat, Md Rajibul Islam, Md Shohel Sayeed, Md Amiruzzaman

Abstract


This research addresses a central challenge in radiation therapy for gastrointestinal (GI) tract cancer: the precise segmentation of organs required to minimize radiation-induced damage. Organ delineation in GI imaging has historically been performed manually, a laborious process that prolongs treatment sessions and is uncomfortable for patients. We address this with the ResECA-U-Net deep learning model, a novel combination of the U-Net and ResNet34 architectures, further augmented with the Efficient Channel Attention (ECA-Net) mechanism. Using data from the UW-Madison Carbone Cancer Center, we carefully investigate several image processing techniques designed to capture critical local characteristics. Grounded in computer vision principles, the ResECA-U-Net model excels at extracting fine detail from GI images. Performance is evaluated with the Dice coefficient and intersection over union (IoU); the proposed method achieves a Dice coefficient of 96.27% and an IoU of 91.48%. These results highlight the contribution our strategy makes to the advancement of cancer therapy. Beyond its scientific merits, this work has the potential to enhance cancer patients' quality of life and long-term outcomes, and it represents a significant step toward automating and optimizing the segmentation process in GI cancer treatment.
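The abstract describes the architecture and metrics without implementation detail. As a point of reference, the following is a minimal PyTorch sketch of the Efficient Channel Attention block introduced by Wang et al. (CVPR 2020), the attention mechanism the model incorporates; the class name ECABlock and the hyperparameters gamma and b are illustrative assumptions, not the authors' released code.

import math
import torch
import torch.nn as nn

class ECABlock(nn.Module):
    # Efficient Channel Attention: global average pooling, a 1-D convolution
    # across the channel axis, and a sigmoid gate that rescales each channel.
    # Illustrative sketch only; not the authors' implementation.
    def __init__(self, channels, gamma=2, b=1):
        super().__init__()
        # Kernel size adapted to the channel count, forced to be odd.
        t = int(abs((math.log2(channels) + b) / gamma))
        k = t if t % 2 else t + 1
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        # x: (N, C, H, W) -> per-channel descriptor of shape (N, C, 1, 1)
        y = self.avg_pool(x)
        # Treat the channel axis as a 1-D sequence of length C: (N, 1, C)
        y = self.conv(y.squeeze(-1).transpose(-1, -2))
        # Restore (N, C, 1, 1) and gate the input channel-wise
        y = self.sigmoid(y.transpose(-1, -2).unsqueeze(-1))
        return x * y.expand_as(x)

In a U-Net with a ResNet34 encoder, such a block would typically be applied to encoder or decoder feature maps before fusion; the exact placement is not specified in the abstract. The two reported evaluation metrics can be computed for binary masks as follows (dice_and_iou is a hypothetical helper name):

def dice_and_iou(pred, target, eps=1e-7):
    # pred, target: {0, 1} tensors of identical shape.
    pred = pred.float().flatten()
    target = target.float().flatten()
    intersection = (pred * target).sum()
    union = pred.sum() + target.sum() - intersection
    dice = (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
    iou = (intersection + eps) / (union + eps)
    return dice.item(), iou.item()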


DOI: 10.28991/ESJ-2024-08-03-012

Full Text: PDF


Keywords


U-Net; Deep Learning; Transfer Learning; ECA-Net; GI Tract; Computer Vision; Segmentation; Radiation Therapy.



Copyright (c) 2024 SM Nuruzzaman Nobel, Omar Faruque Sifat, Md. Rajibul Islam, Md Shohel Sayeed, Md Amiruzzaman