| Downloads | Citations | Reads |
| --- | --- | --- |
| 250 | 0 | 44 |
Abstract: To address the difficulty of fracture-point localization and the low detection accuracy in fracture detection scenarios, a lightweight fracture detection model incorporating a hybrid attention mechanism, named FD-YOLO (Fracture Detection YOLO), is proposed. First, a grouped depthwise separable convolution (DepthwiseGroupConv) and a grouped expandable residual module (DWGAResidual) are designed and used to build the lightweight, efficient FGELAN module, which replaces the C2f module in the YOLOv8n backbone; this strengthens the backbone's feature extraction and channel adjustment capabilities while reducing model complexity. Second, SPPELAN replaces the original SPPF module of YOLOv8n as the new spatial pooling module, enlarging the receptive field while making the model lighter and more efficient. Third, the SELA channel-spatial hybrid attention mechanism is designed to focus on the channel and spatial expressiveness of features, enhancing the model's ability to capture key features. Finally, the feature extraction module FDB is designed and used to construct C2fFDB, which reshapes the neck network and improves its feature processing capability. Experimental results on the public GRAZPEDWRI-DX dataset show that, compared with the baseline algorithm, mAP50 and mAP50-95 improve by 3.4 and 1.9 percentage points, respectively, while FLOPs and the parameter count are reduced by 0.2 G and 3.6%, respectively. Compared with other mainstream object detection algorithms, FD-YOLO better meets the needs of fracture detection scenarios.
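The lightweighting benefit of the depthwise separable convolutions underlying modules such as DepthwiseGroupConv can be illustrated with a simple parameter count. This is a minimal sketch of the general technique, not the paper's exact module: the function names are hypothetical, and square k×k kernels with stride 1 are assumed.

```python
def conv_params(c_in, c_out, k):
    # standard convolution: every output channel is connected
    # to every input channel through a k x k kernel (bias omitted)
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    # depthwise step: one k x k filter per input channel,
    # followed by a 1x1 pointwise convolution for channel mixing
    return c_in * k * k + c_in * c_out

# example: a 128 -> 128 channel layer with 3x3 kernels
std = conv_params(128, 128, 3)
dws = depthwise_separable_params(128, 128, 3)
print(std, dws, round(std / dws, 1))  # 147456 17536 8.4
```

For this layer the separable form needs roughly 8x fewer parameters, which is why stacking such blocks in the backbone can cut model size while preserving the receptive field.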
Basic information:
DOI:10.20165/j.cnki.ISSN1673-629X.2025.0194
CLC number: R683; TP391.41
Citation:
[1] FANG Yue, JIANG Yu, GONG Yuhan. Research on a lightweight fracture detection algorithm based on a hybrid attention mechanism[J]. Computer Technology and Development, 2026, 36(01): 46-54+139. DOI:10.20165/j.cnki.ISSN1673-629X.2025.0194.
Funding:
General Program of the National Social Science Fund of China (22BXW048)
2025-07-03