[1] 唐 立,卿三东,徐志刚,等.自动驾驶公众接受度研究综述[J].交通运输工程学报,2020,20(2):131-146.
TANG Li, QING San-dong, XU Zhi-gang, et al. Research review on public acceptance of autonomous driving[J]. Journal of Traffic and Transportation Engineering, 2020, 20(2): 131-146.(in Chinese)
[2] 缪炳荣,张卫华,刘建新,等.工业4.0下智能铁路前沿技术问题综述[J].交通运输工程学报,2021,21(1):115-131.
MIAO Bing-rong, ZHANG Wei-hua, LIU Jian-xin, et al. Review on frontier technical issues of intelligent railways under Industry 4.0[J]. Journal of Traffic and Transportation Engineering, 2021, 21(1): 115-131.(in Chinese)
[3] 杨 澜,赵祥模,吴国垣,等.智能网联汽车协同生态驾驶策略综述[J].交通运输工程学报,2020,20(5):58-72.
YANG Lan, ZHAO Xiang-mo, WU Guo-yuan, et al. Review on connected and automated vehicles based cooperative eco-driving strategies[J]. Journal of Traffic and Transportation Engineering, 2020, 20(5): 58-72.(in Chinese)
[4] 马永杰,马芸婷,程时升,等.基于改进YOLOv3模型与Deep-SORT算法的道路车辆检测方法[J].交通运输工程学报,2021,21(2):222-231.
MA Yong-jie, MA Yun-ting, CHENG Shi-sheng, et al. Road vehicle detection method based on improved YOLOv3 model and Deep-SORT algorithm[J]. Journal of Traffic and Transportation Engineering, 2021, 21(2): 222-231.(in Chinese)
[5] ZHANG He, PATEL V M. Density-aware single image de-raining using a multi-stream dense network[C]∥IEEE. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE, 2018: 695-704.
[6] LI Xia, WU Jian-long, LIN Zhou-chen, et al. Recurrent squeeze-and-excitation context aggregation net for single image deraining[C]∥Springer. 15th European Conference on Computer Vision. Berlin: Springer, 2018: 262-277.
[7] REN Dong-wei, ZUO Wang-meng, HU Qing-hua, et al. Progressive image deraining networks: a better and simpler baseline[C]∥IEEE. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE, 2019: 3932-3941.
[8] JIN Xin, CHEN Zhi-bo, LI Wei-ping. AI-GAN: asynchronous interactive generative adversarial network for single image rain removal[J]. Pattern Recognition, 2020, 100: 107143.
[9] 柳长源,王 琪,毕晓君.基于多通道多尺度卷积神经网络的单幅图像去雨方法[J].电子与信息学报,2020,42(9):2285-2292.
LIU Chang-yuan, WANG Qi, BI Xiao-jun. Research on rain removal method for single image based on multi-channel and multi-scale CNN[J]. Journal of Electronics and Information Technology, 2020, 42(9): 2285-2292.(in Chinese)
[10] LIN Xiao, MA Li-zhuang, SHENG Bin, et al. Utilizing two-phase processing with FBLS for single image deraining[J]. IEEE Transactions on Multimedia, 2020, 23: 664-676.
[11] PENG Jia-yi, XU Yong, CHEN Tian-yi, et al. Single-image raindrop removal using concurrent channel-spatial attention and long-short skip connections[J]. Pattern Recognition Letters, 2020, 131: 121-127.
[12] SUN Guo-min, LENG Jin-song, CATTANI C. A particular directional multilevel transform based method for single-image rain removal[J]. Knowledge-Based Systems, 2020, 200: 106000.
[13] PENG Long, JIANG Ai-wen, YI Qiao-si, et al. Cumulative rain density sensing network for single image derain[J]. IEEE Signal Processing Letters, 2020, 27: 406-410.
[14] BI Xiao-jun, XING Jun-yao. Multi-scale weighted fusion attentive generative adversarial network for single image de-raining[J]. IEEE Access, 2020, 8: 69838-69848.
[15] WANG Hong, WU Yi-chen, XIE Qi, et al. Structural residual learning for single image rain removal[J]. Knowledge-Based Systems, 2021, 213: 106595.
[16] 高 涛,刘梦尼,陈 婷,等.结合暗亮通道先验的远近景融合去雾算法[J].西安交通大学学报,2021,55(10):78-86.
GAO Tao, LIU Meng-ni, CHEN Ting, et al. A far and near scene fusion defogging algorithm based on the prior of dark-light channel[J]. Journal of Xi'an Jiaotong University, 2021, 55(10): 78-86.(in Chinese)
[17] CHEN Ting, LIU Meng-ni, GAO Tao, et al. A fusion-based defogging algorithm[J]. Remote Sensing, 2022, 14(2): 425.
[18] REDMON J, DIVVALA S, GIRSHICK R, et al. You only look once: unified, real-time object detection[C]∥IEEE. 29th IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE, 2016: 779-788.
[19] REDMON J, FARHADI A. YOLO9000: better, faster, stronger[C]∥IEEE. 30th IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE, 2017: 6517-6525.
[20] REDMON J, FARHADI A. YOLOv3: an incremental improvement[J]. arXiv, 2018: 1804.02767.
[21] BOCHKOVSKIY A, WANG Chien-yao, LIAO Hong-yuan. YOLOv4: optimal speed and accuracy of object detection[J]. arXiv, 2020: 2004.10934.
[22] NING Zhang, MI Zhi-wei. Research on surface defect detection algorithm of strip steel based on improved YOLOv3[J]. Journal of Physics: Conference Series, 2021, 1907(1): 012015.
[23] YU Pei-dong, WANG Xin, LIU Jian-hui, et al. Bridge target detection in remote sensing image based on improved YOLOv4 algorithm[C]∥ACM. 2020 4th International Conference on Computer Science and Artificial Intelligence. New York: ACM, 2020: 139-145.
[24] CHEN Wen-kang, LU Sheng-lian, LIU Bing-hao, et al. Detecting citrus in orchard environment by using improved YOLOv4[J]. Scientific Programming, 2020, 2020: 8859237.
[25] ZHU Qin-feng, ZHENG Hui-feng, WANG Yue-bing, et al. Study on the evaluation method of sound phase cloud maps based on an improved YOLOv4 algorithm[J]. Sensors, 2020, 20(15): 4314.
[26] HU Jie, SHEN Li, ALBANIE S. Squeeze-and-excitation networks[C]∥ IEEE. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE, 2018: 7132-7141.
[27] DENG Jia, DONG Wei, SOCHER R, et al. ImageNet: a large-scale hierarchical image database[C]∥IEEE. 2009 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE, 2009: 248-255.
[28] LIN T Y, MAIRE M, BELONGIE S, et al. Microsoft COCO: common objects in context[C]∥Springer. 13th European Conference on Computer Vision. Berlin: Springer, 2014: 740-755.
[29] ARTHUR D, VASSILVITSKII S. k-means++: the advantages of careful seeding[C]∥ACM. 18th Annual ACM-SIAM Symposium on Discrete Algorithms. New York: ACM, 2007: 1027-1035.
[30] CHOWDHURY K, CHAUDHURI D, PAL A K, et al. Seed selection algorithm through k-means on optimal number of clusters[J]. Multimedia Tools and Applications, 2019, 78(13): 18617-18651.
[31] YAMAMICHI K, HAN Xian-hua. MCGKT-Net: multi-level context gating knowledge transfer network for single image deraining[C]∥Springer. 15th Asian Conference on Computer Vision. Berlin: Springer, 2020: 1-17.
[32] YANG Wen-han, TAN R T, FENG Jia-shi, et al. Deep joint rain detection and removal from a single image[C]∥IEEE. 30th IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE, 2017: 1357-1366.
[33] LI Yu, TAN R T, GUO Xiao-jie, et al. Rain streak removal using layer priors[C]∥IEEE. 29th IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE, 2016: 2736-2744.
[34] ZHANG He, SINDAGI V, PATEL V M. Image de-raining using a conditional generative adversarial network[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2020, 30(11): 3943-3956.
[35] FU Xue-yang, HUANG Jia-bin, ZENG De-lu, et al. Removing rain from single images via a deep detail network[C]∥IEEE. 30th IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE, 2017: 3855-3863.
[36] ZHANG He, PATEL V M. Density-aware single image de-raining using a multi-stream dense network[C]∥IEEE. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE, 2018: 695-704.
[37] GEIGER A, LENZ P, URTASUN R. Are we ready for autonomous driving? the KITTI vision benchmark suite[C]∥IEEE. 25th IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE, 2012: 3354-3361.
[38] CORDTS M, OMRAN M, RAMOS S, et al. The Cityscapes dataset for semantic urban scene understanding[C]∥IEEE. 29th IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE, 2016: 3213-3223.
[39] NEUHOLD G, OLLMANN T, BULÒ S R, et al. The Mapillary Vistas dataset for semantic understanding of street scenes[C]∥IEEE. 30th IEEE International Conference on Computer Vision. New York: IEEE, 2017: 4990-4999.
[40] YU Fisher, XIAN Wen-qi, CHEN Ying-ying, et al. BDD100K: a diverse driving video database with scalable annotation tooling[EB/OL]. (2020-04-08)[2022-07-02]. https://arxiv.org/abs/1805.04687v2.
[41] CAESAR H, BANKITI V, LANG A H, et al. nuScenes: a multimodal dataset for autonomous driving[C]∥IEEE. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New York: IEEE, 2020: 11618-11628.
[42] HUANG Xin-yu, WANG Peng, CHENG Xin-jing, et al. The ApolloScape open dataset for autonomous driving and its application[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 42(10): 2702-2719.
[43] ZHU Zhe, LIANG Dun, ZHANG Song-hai, et al. Traffic-sign detection and classification in the wild[C]∥IEEE. 29th IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE, 2016: 2110-2118.
[44] STALLKAMP J, SCHLIPSING M, SALMEN J, et al. The German traffic sign recognition benchmark: a multi-class classification competition[C]∥ IEEE. 2011 International Joint Conference on Neural Networks. New York: IEEE, 2011: 1453-1460.