Downloads: 1,139 | Citations: 7 | Reads: 135
Abstract: Drone detection technology is now widely deployed, but a drone performing detection tasks may encounter various aerial obstacles. Such targets are difficult to detect: they appear small in the image, offer few pixel-level features, and change relative speed rapidly. To address the missed and false detections these targets cause, a low-altitude flying object detection algorithm based on an improved YOLOv7 is proposed. Building on the standard YOLOv7, the SimAM attention mechanism is first introduced into the Head network; unlike existing channel and spatial attention modules, it considers spatial and channel information jointly and adds no extra parameters to the original network. Second, the ConvNeXt design is incorporated into the backbone to form a CvNX module, which reduces computation while preserving target features. Third, SIoU-Loss replaces the original CIoU-Loss to speed up convergence. Finally, SIoU-NMS is applied in the image post-processing stage to reduce missed detections caused by occlusion. Experimental results on a self-built low-altitude flying object dataset show that the improved YOLOv7 reaches an Average Precision (AP) of 97.1%, improving mean Average Precision (mAP) by 1.7% over the baseline YOLOv7, with low false-detection and missed-detection rates, meeting the requirements for detecting low-altitude flying objects against complex backgrounds.
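The parameter-free SimAM attention mentioned in the abstract weighs each neuron by an energy function computed from the feature map itself, so it gates spatial and channel positions jointly without learnable weights. A minimal NumPy sketch of the published SimAM formulation follows (the function name `simam`, the `(C, H, W)` layout, and the default `e_lambda` are illustrative assumptions, not the paper's code):

```python
import numpy as np

def simam(x, e_lambda=1e-4):
    """Parameter-free SimAM attention sketch for a feature map x of shape (C, H, W)."""
    c, h, w = x.shape
    n = h * w - 1                                   # number of "other" neurons per channel
    mu = x.mean(axis=(1, 2), keepdims=True)         # per-channel mean
    d = (x - mu) ** 2                               # squared deviation of each neuron
    v = d.sum(axis=(1, 2), keepdims=True) / n       # per-channel variance estimate
    e_inv = d / (4 * (v + e_lambda)) + 0.5          # inverse energy: higher = more salient
    return x * (1.0 / (1.0 + np.exp(-e_inv)))       # sigmoid gate, applied elementwise
```

Because the gate is derived from the input statistics alone, inserting it into the Head network leaves the parameter count unchanged, which matches the abstract's claim.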
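SIoU-NMS keeps the greedy non-maximum suppression loop intact and only swaps the overlap metric used to suppress neighbors. The sketch below shows that structure with plain IoU as the overlap function; substituting a SIoU implementation for `overlap_fn` yields SIoU-NMS. The function names and box layout `(x1, y1, x2, y2)` are illustrative assumptions:

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box (4,) and many boxes (N, 4), all in (x1, y1, x2, y2)."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = (box[2] - box[0]) * (box[3] - box[1])
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area + areas - inter)

def nms(boxes, scores, thresh=0.5, overlap_fn=iou):
    """Greedy NMS; pass a SIoU-based overlap_fn to obtain SIoU-NMS."""
    order = scores.argsort()[::-1]       # indices sorted by descending confidence
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)                   # keep the highest-scoring remaining box
        if order.size == 1:
            break
        ov = overlap_fn(boxes[i], boxes[order[1:]])
        order = order[1:][ov <= thresh]  # drop boxes overlapping it too strongly
    return keep
```

Because SIoU also penalizes angle and shape mismatch, two occluded objects whose boxes merely overlap in area are less likely to be merged into one detection, which is how the paper reduces occlusion-induced misses.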
Basic information:
DOI:
CLC number: TP391.41
Citation:
[1] ZHEN Ran, LIU Yuhan, MENG Fanhua, et al. Low-altitude Flying Object Detection Method Based on Improved YOLOv7[J]. Radio Engineering, 2024, 54(03): 633-643.
Funding:
National Natural Science Foundation of China (62003129)