Traffic sign recognition method based on graphical model and convolutional neural network

Journal of Traffic and Transportation Engineering (《交通运输工程学报》) [ISSN: 1671-1637 / CN: 61-1369/U]

Issue:
2016, No. 5
Page:
122-131
Research Field:
Traffic Information Engineering and Control
Publishing date:

Info

Title:
Traffic sign recognition method based on graphical model and convolutional neural network
Author(s):
LIU Zhan-wen1 ZHAO Xiang-mo1 LI Qiang1 SHEN Chao1 WANG Jiao-jiao2
1. School of Information Engineering, Chang’an University, Xi’an 710064, Shaanxi, China; 2. School of Electronic and Control Engineering, Chang’an University, Xi’an 710064, Shaanxi, China
Keywords:
traffic control; traffic sign; saliency detection; convolutional neural network; pre-training strategy
PACS:
U491.52
DOI:
-
Abstract:
In order to improve the robustness of traffic sign recognition, a recognition method based on a graphical model and a convolutional neural network (CNN) was proposed, and an application-oriented traffic sign recognition system based on regions with convolutional neural network (R-CNN) was built. A graphical model over UCM superpixel regions was constructed to exploit multi-scale bottom-up information efficiently, and a hierarchical saliency detection method based on this graphical model was proposed to extract the regions of interest containing traffic signs. The candidate regions of interest were then passed to the CNN for feature extraction and classification. The detection results indicate that, for speed-limit signs, the graphical model based on UCM superpixels captures more large-scale structural information in the upper-level saliency map than the graphical model based on simple linear iterative clustering (SLIC) superpixels. Because the hierarchical saliency detection model with prior location constraints and local properties combines the detail information of local regions with the structural information of the whole image, the detection results are more precise, and the detected targets are more complete and homogeneous. The precision of the detection model is 0.65, the recall is 0.80, and the F index is 0.73, all higher than the corresponding indexes of other superpixel-based saliency detection methods. The CNN pre-training strategy tailored to the specific detection task expands the database of the German traffic sign recognition benchmark (GTSRB) and makes full use of the learning capacity of the CNN to capture fine local detail features of the targets, so the recognition precision of the CNN improves: its recognition rate reaches 98.85%, exceeding the 95.73% achieved by the SVM. 19 figs, 31 refs.
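
As a rough illustration of the classification stage described above, the following sketch shows how candidate regions produced by a saliency-detection stage might be classified with a small CNN. This is a minimal PyTorch sketch under stated assumptions, not the authors' implementation: the layer layout, the 48x48 crop size, and the 43-class output (the GTSRB class count) are illustrative choices, and the hierarchical saliency stage is abstracted away as pre-cropped candidate regions.

    # Minimal sketch (assumed architecture, not the paper's network): classify
    # candidate traffic-sign crops handed over by a saliency-detection stage.
    import torch
    import torch.nn as nn

    class SignCNN(nn.Module):
        """Small convolutional classifier for cropped traffic-sign candidates."""
        def __init__(self, num_classes: int = 43):  # 43 classes as in GTSRB
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),   # 48 -> 24
                nn.Conv2d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),  # 24 -> 12
                nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 12 -> 6
            )
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Linear(128 * 6 * 6, 256), nn.ReLU(),
                nn.Linear(256, num_classes),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.classifier(self.features(x))

    def classify_candidates(model: nn.Module, crops: torch.Tensor) -> torch.Tensor:
        """crops: (N, 3, 48, 48) regions of interest from the saliency stage."""
        model.eval()
        with torch.no_grad():
            return model(crops).argmax(dim=1)  # predicted class index per crop

    if __name__ == "__main__":
        model = SignCNN()
        fake_crops = torch.rand(4, 3, 48, 48)  # stand-in for detected candidate regions
        print(classify_candidates(model, fake_crops))

In the pipeline described in the abstract, the crops would come from the hierarchical saliency detection on UCM superpixel regions, and the network would first be pre-trained on the expanded GTSRB data before being applied to the detection task.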

References:

[1] 隽志才,曹 鹏,吴文静.基于认知心理学的驾驶员交通标志视认性理论分析[J].中国安全科学学报,2005,15(8):8-11.JUAN Zhi-cai, CAO Peng, WU Wen-jing. Study on driver traffic signs comprehension based on cognitive psychology[J]. China Safety Science Journal, 2005, 15(8): 8-11.(in Chinese)
[2] LIU Han, LIU Ding, LI Qi. Real-time recognition of road traffic sign in moving scene image using genetic algorithm[C]∥IEEE. Proceedings of the 4th World Congress on Intelligent Control and Automation. New York: IEEE, 2002: 1027-1030.
[3] VÁZQUEZ-REINA A, LAFUENTE-ARROYO S, SIEGMANN P, et al. Traffic sign shape classification based on correlation techniques[C]∥WSEAS. Proceedings of the 5th WSEAS International Conference on Signal Processing, Computational Geometry and Artificial Vision. Stevens Point: WSEAS, 2005: 149-154.
[4] LAFUENTE-ARROYO S, SALCEDO-SANZ S, MALDONADO-BASCÓN S, et al. A decision support system for the automatic management of keep-clear signs based on support vector machines and geographic information systems[J]. Expert Systems with Applications, 2010, 37(1): 767-773.
[5] OVERETT G, PETERSSON L. Large scale sign detection using HOG feature variants[C]∥IEEE. 2011 IEEE Intelligent Vehicles Symposium(IV). New York: IEEE, 2011: 326-331.
[6] WANG Gang-yi, REN Guang-hui, WU Zhi-lu, et al. A hierarchical method for traffic sign classification with support vector machines[C]∥IEEE. The 2013 International Joint Conference on Neural Networks. New York: IEEE, 2013: 1-6.
[7] SALTI S, PETRELLI A, TOMBARI F, et al. A traffic sign detection pipeline based on interest region extraction[C]∥IEEE. The 2013 International Joint Conference on Neural Networks. New York: IEEE, 2013: 1-7.
[8] XIE Yuan, LIU Li-feng, LI Cui-hua, et al. Unifying visual saliency with HOG feature learning for traffic sign detection[C]∥IEEE. 2009 IEEE Intelligent Vehicles Symposium. New York: IEEE, 2009: 24-29.
[9] YAN Qiong, XU Li, SHI Jian-ping, et al. Hierarchical saliency detection[C]∥IEEE. 2013 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE, 2013: 1155-1162.
[10] WEI Yi-chen, WEN Fang, ZHU Wang-jiang, et al. Geodesic saliency using background priors[C]∥FITZGIBBON A, LAZEBNIK S, PERONA P, et al. 12th European Conference on Computer Vision. Berlin: Springer, 2012: 29-42.
[11] PERAZZI F, KRÄHENBÜHL P, PRITCH Y, et al. Saliency filters: contrast based filtering for salient region detection[C]∥IEEE. 2012 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE, 2012: 733-740.
[12] YANG Chuan, ZHANG Li-he, LU Hu-chuan, et al. Saliency detection via graph-based manifold ranking[C]∥IEEE. 2013 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE, 2013: 3166-3173.
[13] YUAN Xue, GUO Jia-qi, HAO Xiao-li, et al. Traffic sign detection via graph-based ranking and segmentation algorithms[J]. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2015, 45(12): 1509-1521.
[14] SERMANET P, CHINTALA S, LECUN Y. Convolutional neural networks applied to house numbers digit classification[C]∥IEEE. 21st International Conference on Pattern Recognition. New York: IEEE, 2012: 3288-3291.
[15] KRIZHEVSKY A, SUTSKEVER I, HINTON G E. ImageNet classification with deep convolutional neural networks[C]∥PEREIRA F, BURGES C J C, BOTTOU L, et al. Advances in Neural Information Processing Systems 25. South Lake Tahoe: NIPS Foundation, 2012: 1097-1105.
[16] SERMANET P, LECUN Y.Traffic sign recognition with multi-scale convolutional networks[C]∥IEEE. The 2011 International Joint Conference on Neural Networks. New York: IEEE, 2011: 2809-2813.
[17] WU Yi-hui, LIU Yu-long, LI Jian-min, et al. Traffic sign detection based on convolutional neural networks[C]∥IEEE. The 2013 International Joint Conference on Neural Networks. New York: IEEE, 2013: 1-7.
[18] JIA Yang-qing, SHELHAMER E, DONAHUE J, et al. Caffe: convolutional architecture for fast feature embedding[C]∥ACM. Proceedings of the 22nd ACM International Conference on Multimedia. New York: ACM, 2014: 675-678.
[19] SZEGEDY C, LIU Wei, JIA Yang-qing, et al. Going deeper with convolutions[C]∥IEEE. 2015 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE, 2015: 1-9.
[20] ZEILER M D, FERGUS R. Visualizing and understanding convolutional networks[C]∥FLEET D, PAJDLA T, SCHIELE B, et al. 13th European Conference on Computer Vision. Berlin: Springer, 2014: 818-833.
[21] BESAG J. Spatial interaction and the statistical analysis of lattice systems[J]. Journal of the Royal Statistical Society. Series B: Methodological, 1974, 36(2): 192-236.
[22] LAFFERTY J, MCCALLUM A, PEREIRA F. Conditional random fields: probabilistic models for segmenting and labeling sequence data[C]∥ACM. Proceedings of the 18th International Conference on Machine Learning. New York: ACM, 2001: 282-289.
[23] YEDIDIA J S, FREEMAN W T, WEISS Y. Generalized belief propagation[C]∥LEEN T K, DIETTERICH T G, TRESP V. Advances in Neural Information Processing Systems 13. Denver: NIPS Foundation, 2000: 689-695.
[24] HOUBEN S, STALLKAMP J, SALMEN J, et al. Detection of traffic signs in real-world images: the German traffic sign detection benchmark[C]∥IEEE. The 2013 International Joint Conference on Neural Networks. New York: IEEE, 2013: 1-8.
[25] STALLKAMP J, SCHLIPSING M, SALMEN J, et al. The German traffic sign recognition benchmark: a multi-class classification competition[C]∥IEEE. The 2011 International Joint Conference on Neural Networks. New York: IEEE, 2011: 1453-1460.
[26] WEN Cheng-lu, LI J, LUO Huan, et al. Spatial-related traffic sign inspection for inventory purposes using mobile laser scanning data[J]. IEEE Transactions on Intelligent Transportation Systems, 2016, 17(1): 27-37.
[27] ACHANTA R, SHAJI A, SMITH K, et al. SLIC superpixels[R]. Lausanne: École Polytechnique Fédérale de Lausanne, 2010.
[28] MARTIN D R, FOWLKES C C, MALIK J. Learning to detect natural image boundaries using local brightness, color, and texture cues[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2004, 26(5): 530-549.
[29] CHENG Ming-ming, WARRELL J, LIN Wen-yan, et al. Efficient salient region detection with soft image abstraction[C]∥IEEE. 2013 IEEE International Conference on Computer Vision. New York: IEEE, 2013: 1529-1536.
[30] TONG Na, LU Hu-chuan, RUAN Xiang, et al. Salient object detection via bootstrap learning[C]∥IEEE. 2015 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE, 2015: 1884-1892.
[31] QIN Yao, LU Hu-chuan, XU Yi-qun, et al. Saliency detection via cellular automata[C]∥IEEE. 2015 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE, 2015: 110-119.

Memo

Memo:
-
Last Update: 2016-10-20