Two-Stage Approach for Semantic Image Segmentation of Breast Cancer: Deep Learning and Mass Detection in Mammographic Images

Times Cited: 0
Authors
Touazi, Faycal [1 ]
Gaceb, Djamel [1 ]
Chirane, Marouane [1 ]
Herzallah, Selma [1 ]
Affiliations
[1] Univ Mhamed Bougara, Dept Comp Sci, LIMOSE Lab, Independence Ave, Boumerdes 35000, Algeria
Keywords
Breast Cancer; Deep Learning; ViT; NEST; YOLO;
DOI
Not available
CLC Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Breast cancer is a significant global health problem that predominantly affects women and requires effective screening methods. Mammography, the primary screening approach, presents challenges such as radiologist workload and associated costs. Recent advances in deep learning hold promise for improving breast cancer diagnosis. This paper focuses on early breast cancer detection using deep learning to assist radiologists and to reduce their workload and the associated costs. We used the CBIS-DDSM dataset and several deep learning models: CNN-based YOLO detectors (versions v5, v7, and v8) for mass detection, and transformer-based nested models inspired by ViT for mass segmentation. This combined approach aims to address the complexity of detecting and segmenting breast cancer masses in medical images. Our results are promising, with a 59% mAP50 for cancer mass detection and a 90.15% Dice coefficient for semantic segmentation. These findings highlight the potential of deep learning to enhance breast cancer diagnosis, paving the way for more efficient and accurate early detection methods.
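A minimal sketch of the two-stage pipeline described in the abstract, under stated assumptions: stage one uses the Ultralytics YOLOv8 API for mass detection and stage two applies a generic PyTorch segmentation network, evaluated with the Dice coefficient reported above. The weight file "yolov8_cbis.pt" and the seg_model module are hypothetical placeholders standing in for the authors' trained models, not their released artifacts.

    # Illustrative two-stage pipeline: YOLO mass detection, then patch-level
    # semantic segmentation and Dice evaluation. "yolov8_cbis.pt" is a
    # hypothetical fine-tuned checkpoint; seg_model stands in for any
    # NesT/ViT-style segmentation network with a (1, 1, H, W) logit output.
    import torch
    from ultralytics import YOLO

    def detect_masses(image_path: str, conf: float = 0.25) -> torch.Tensor:
        """Stage 1: detect candidate masses; returns an (N, 4) tensor of xyxy boxes."""
        detector = YOLO("yolov8_cbis.pt")  # hypothetical CBIS-DDSM fine-tuned weights
        result = detector.predict(source=image_path, conf=conf, verbose=False)[0]
        return result.boxes.xyxy.cpu()

    def segment_mass(patch: torch.Tensor, seg_model: torch.nn.Module) -> torch.Tensor:
        """Stage 2: segment one cropped mass patch (C, H, W); returns a binary mask."""
        with torch.no_grad():
            logits = seg_model(patch.unsqueeze(0))  # expected output: (1, 1, H, W)
        return (torch.sigmoid(logits) > 0.5).float().squeeze(0)

    def dice_coefficient(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> float:
        """Dice = 2|P n G| / (|P| + |G|), the segmentation metric cited in the abstract."""
        pred, target = pred.flatten(), target.flatten()
        intersection = (pred * target).sum()
        return float((2 * intersection + eps) / (pred.sum() + target.sum() + eps))

In practice, each detected box would be cropped from the full mammogram (typically with a small margin) and resized before being passed to the segmentation stage, and detection quality would be reported as mAP50 over the CBIS-DDSM test split.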
Pages: 15
Related Papers
50 records in total
  • [21] A two-stage deep learning framework for counterfeit luxury handbag detection in logo images
    Peng, Jianbiao
    Zou, Beiji
    Zhu, Chengzhang
    SIGNAL IMAGE AND VIDEO PROCESSING, 2023, 17 (04) : 1439 - 1448
  • [22] A Two-Stage Approach for Bag Detection in Pedestrian Images
    Du, Yuning
    Ai, Haizhou
    Lao, Shihong
    COMPUTER VISION - ACCV 2014, PT IV, 2015, 9006 : 507 - 521
  • [24] Rainy day image semantic segmentation based on two-stage progressive network
    Zhang, Heng
    Jia, Dongli
    Ma, Hui
    VISUAL COMPUTER, 2024, 40 (12) : 8945 - 8956
  • [25] Tissue Segmentation in Nasopharyngeal CT Images Using Two-Stage Learning
    Luo, Yong
    Li, Xiaojie
    Luo, Chao
    Wang, Feng
    Wu, Xi
    Mumtaz, Imran
    Yi, Cheng
    CMC-COMPUTERS MATERIALS & CONTINUA, 2020, 65 (02) : 1771 - 1780
  • [26] A Two-Stage Multiple Instance Learning Framework for the Detection of Breast Cancer in Mammograms
    Chandra, Sarath K.
    Chakravarty, Arunava
    Ghosh, Nirmalya
    Sarkar, Tandra
    Sethuraman, Ramanathan
    Sheet, Debdoot
    42ND ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY: ENABLING INNOVATIVE TECHNOLOGIES FOR GLOBAL HEALTHCARE (EMBC'20), 2020 : 1128 - 1131
  • [27] A Two-Stage Lightweight Deep Learning Framework for Mass Detection and Segmentation in Mammograms Using YOLOv5 and Depthwise SegNet
    Manolakis, Dimitris
    Bizopoulos, Paschalis
    Lalas, Antonios
    Votis, Konstantinos
    JOURNAL OF IMAGING INFORMATICS IN MEDICINE, 2025,
  • [28] Automated breast cancer segmentation and classification in mammogram images using deep learning approach
    Dhanalaxmi, B.
    Venkatesh, N.
    Raju, Yeligeti
    Naik, G. Jagan
    Rao, Channapragada Rama Seshagiri
    Tulasi, V. Prema
    INTERNATIONAL JOURNAL OF BIOMEDICAL ENGINEERING AND TECHNOLOGY, 2025, 47 (02)
  • [29] Two-Stage Approach to Image Classification by Deep Neural Networks
    Ososkov, Gennady
    Goncharov, Pavel
    MATHEMATICAL MODELING AND COMPUTATIONAL PHYSICS 2017 (MMCP 2017), 2018, 173
  • [30] Two-Stage Deep Learning Model for Adrenal Nodule Detection on CT Images: A Retrospective Study
    Ahn, Chang Ho
    Kim, Taewoo
    Jo, Kyungmin
    Park, Seung Shin
    Kim, Min Joo
    Yoon, Ji Won
    Kim, Taek Min
    Kim, Sang Youn
    Kim, Jung Hee
    Choo, Jaegul
    RADIOLOGY, 2025, 314 (03)