
Hand gesture recognition method in driver's phone-call behavior based on decision fusion of image features

Journal of Traffic and Transportation Engineering (《交通运输工程学报》) [ISSN: 1671-1637/CN: 61-1369/U]

Issue:
2019, No. 04
Page:
171-181
Research Field:
Traffic Information Engineering and Control
Publishing date:

Info

Title:
Hand gesture recognition method in driver's phone-call behavior based on decision fusion of image features
Author(s):
CHENG Wen-dong1, MA Yong2, WEI Qing-yuan3
(1. School of Mechatronic Engineering, Xi'an Technological University, Xi'an 710021, Shaanxi, China; 2. School of Automobile, Chang'an University, Xi'an 710064, Shaanxi, China; 3. School of Mechanical Engineering, Harbin Institute of Petroleum, Harbin 150028, Heilongjiang, China)
Keywords:
information processing; hand gesture recognition of phone-call; machine vision; skin color model; HOG feature; PZMs feature; decision fusion
PACS:
U491.512
DOI:
-
Abstract:
In order to detect drivers' phone-call behavior robustly in natural environments, a hand gesture recognition method was proposed. The Adaboost algorithm was used to detect the driver's face region. In the YCgCr color space, the brightness and chroma components of facial skin were sampled on a sparse grid, and a Gaussian distribution model of skin color was built. Considering the inhomogeneity of cab illumination, a skin color component drift compensation algorithm was proposed, and an online skin color model was established to adapt to illumination changes, so that the skin color regions of the right and left hands could be accurately segmented. A 2 376-dimension HOG feature vector of the hand skin region was extracted by the HOG algorithm, and the PCA method was then used to reduce the HOG feature vector to 400 dimensions. Meanwhile, the PZMs features of the hand skin region were extracted, and the 8 PZMs feature vectors with the largest weights were screened out by the Relief algorithm. A support vector machine classifier with decision fusion for phone-call hand gestures was established based on the PCA-HOG and Relief-PZMs features. Experimental results show that the hand gesture recognition rate based on the PCA-HOG features is 93.1%; it is robust to illumination changes but is easily disturbed by hand and head rotation. The hand gesture recognition rate based on the Relief-PZMs features is 91.9%; it tolerates head and hand rotation well but has poor illumination robustness. The hand gesture recognition rate of the proposed multi-feature fusion method combining PCA-HOG and Relief-PZMs reaches 94.5%, with good adaptability to illumination fluctuation, hand and head rotation, and other interference conditions. 2 tabs, 15 figs, 31 refs.
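
The abstract describes a multi-stage pipeline. The sketch below is illustrative only, not the authors' code: it wires the named stages together with common Python library calls. Every parameter (sampling step, Mahalanobis threshold, HOG window layout, fusion weights) is an assumption, OpenCV's YCrCb space stands in for the paper's YCgCr space (the Cg component would have to be computed from RGB by hand), and the pseudo-Zernike moment and Relief selection steps are stubbed because they need dedicated implementations.

```python
# Illustrative sketch of the pipeline stages named in the abstract -- not the
# authors' implementation. All parameters below are assumptions.
import cv2
import numpy as np
from skimage.feature import hog
from sklearn.decomposition import PCA
from sklearn.svm import SVC

FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face(gray):
    """Adaboost (Viola-Jones) face detection, as in the first stage."""
    faces = FACE_CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return faces[0] if len(faces) else None          # (x, y, w, h) or None

def fit_skin_gaussian(bgr, face_box, step=16):
    """Sample facial skin pixels on a sparse grid and fit a Gaussian model."""
    x, y, w, h = face_box
    samples = bgr[y:y + h:step, x:x + w:step].reshape(-1, 1, 3)
    ycc = cv2.cvtColor(samples, cv2.COLOR_BGR2YCrCb).reshape(-1, 3).astype(np.float64)
    return ycc.mean(axis=0), np.linalg.inv(np.cov(ycc.T))

def skin_mask(bgr, mean, inv_cov, thresh=9.0):
    """Per-pixel Mahalanobis distance to the skin Gaussian; small = skin."""
    ycc = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb).reshape(-1, 3).astype(np.float64)
    diff = ycc - mean
    d2 = np.einsum("ij,jk,ik->i", diff, inv_cov, diff)
    return (d2 < thresh).reshape(bgr.shape[:2]).astype(np.uint8) * 255

def hog_vector(hand_patch_gray):
    """HOG descriptor of the segmented hand region (parameters illustrative;
    the abstract's 2 376-dimension vector implies a specific window layout)."""
    return hog(hand_patch_gray, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")

def pseudo_zernike_moments(hand_patch_gray, order=8):
    """Placeholder: PZMs are not in OpenCV/scikit-image; a dedicated
    implementation plus Relief-based selection of 8 moments is assumed."""
    raise NotImplementedError

# Training stage (X_hog, X_pzm8: per-sample feature matrices; y: labels)
pca = PCA(n_components=400)                # 2 376-dim HOG -> 400-dim PCA-HOG
svm_hog = SVC(kernel="rbf", probability=True)
svm_pzm = SVC(kernel="rbf", probability=True)
# svm_hog.fit(pca.fit_transform(X_hog), y)
# svm_pzm.fit(X_pzm8, y)                   # 8 Relief-selected PZM features

def fuse_decisions(p_hog, p_pzm, w_hog=0.55, w_pzm=0.45):
    """One plausible reading of 'decision fusion': a weighted vote on the two
    classifiers' posterior estimates for the phone-call class."""
    return int(w_hog * p_hog + w_pzm * p_pzm >= 0.5)
```

In use, the hand regions segmented by skin_mask near the face would be cropped, described by hog_vector and the PZM stub, scored by the two SVMs, and the fused decision would flag a phone-call gesture; the weights shown are placeholders, since the paper's fusion rule is not spelled out in the abstract.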

References:

[1] WHITE K M, HYDE M K, WALSH M P, et al. Mobile phone use while driving: an investigation of the beliefs influencing drivers' hands-free and hand-held mobile phone use[J]. Transportation Research Part F: Traffic Psychology and Behavior, 2010, 13(1): 9-20.
[2] 隋 毅.基于驾驶模拟实验的手机通话对驾驶安全的影响研究[D].北京:北京交通大学,2013.
SUI Yi. Influence of cell phone use on driving safety based on driving simulator experiments[D]. Beijing: Beijing Jiaotong University, 2013.(in Chinese)
[3] ABDUL SHABEER H, WAHIDABANU R S D. Cell phone accident avoidance system while driving[J]. International Journal of Soft Computing and Engineering, 2011, 1(4): 144-147.
[4] RODRIGUEZ-ASCARIZ J M, BOQUETE L, CANTOS J, et al. Automatic system for detecting driver use of mobile phones[J]. Transportation Research Part C: Emerging Technologies, 2011, 19(4): 673-681.
[5] 张 波,王文军,魏民国,等.基于机器视觉的驾驶人使用手持电话行为检测[J].吉林大学学报(工学版),2015,45(5):1688-1695.
ZHANG Bo, WANG Wen-jun, WEI Min-guo, et al. Detection handheld phone use by driver based on machine vision[J]. Journal of Jilin University(Engineering and Technology Edition), 2015, 45(5): 1688-1695.(in Chinese)
[6] WANG Dan, PEI Ming-tao, ZHU Lan. Detecting driver use of mobile phone based on in-car camera[C]∥IEEE. 10th International Conference on Computational Intelligence and Security. New York: IEEE, 2014: 148-151.
[7] ZHAO Chi-liang, GAO Yong-sheng, HE Jie, et al. Recognition of driving postures by multiwavelet transform and multilayer perceptron classifier[J]. Engineering Applications of Artificial Intelligence, 2012, 25(8): 1677-1686.
[8] STERGIOPOULOU E, SGOUROPOULOS K, NIKOLAOU N, et al. Real time hand detection in a complex background[J]. Engineering Applications of Artificial Intelligence, 2014, 35: 54-70.
[9] BAN Y, KIM S K, KIM S, et al. Face detection based on skin color likelihood[J]. Pattern Recognition, 2014, 47(4): 1573-1585.
[10] KHAN R, HANBURY A, STÖTTINGER J, et al. Color based skin classification[J]. Pattern Recognition Letters, 2012, 33(2): 157-163.
[11] 孙 瑾,丁永晖,周 来.融合红外深度信息的视觉交互手部跟踪算法[J].光学学报,2017,37(1):0115002-1-11.
SUN Jin, DING Yong-hui, ZHOU Lai. Visually interactive hand tracking algorithm combined with infrared depth information[J]. Acta Optica Sinica, 2017, 37(1): 0115002-1-11.(in Chinese)
[12] SESHADRI K, JUEFEI-XU F, PAL D K, et al. Driver cell phone usage detection on Strategic Highway Research Program (SHRP2) face view videos[C]∥IEEE. IEEE Conference on Computer Vision and Pattern Recognition Workshops. New York: IEEE, 2015: 35-43.
[13] LIU Yun, YIN Yan-min, ZHANG Shu-jun. Hand gesture recognition based on Hu moments in interaction of virtual reality[C]∥IEEE. 2012 4th International Conference on Intelligent Human-Machine Systems and Cybernetics. New York: IEEE, 2012: 145-148.
[14] ARTAN Y, BULAN O, LOCE R P, et al. Driver cell phone usage detection from HOV/HOT NIR images[C]∥IEEE. 2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops. New York: IEEE, 2014: 225-230.
[15] 张汗灵,李红英,周 敏.融合多特征和压缩感知的手势识别[J].湖南大学学报(自然科学版),2013,40(3):87-92.
ZHANG Han-ling, LI Hong-ying, ZHOU Min. Hand posture recognition based on multi-feature and compressive sensing[J]. Journal of Hunan University(Natural Sciences), 2013, 40(3): 87-92.(in Chinese)
[16] CHAE Y N, HAN T, SEO Y H, et al. An efficient face detection based on color-filtering and its application to smart devices[J]. Multimedia Tools and Applications, 2016, 75(9): 4867-4886.
[17] KALIRAJ K, MANIMARAN S. Robust skin color-based moving object detection for video surveillance[J]. Journal of Electronic Imaging, 2016, 25(4): 043007-1-8.
[18] DIOS J J D, GARCIA N. Face detection based on a new color space YCgCr[C]∥IEEE. 2003 International Conference on Image Processing. New York: IEEE, 2003: 909-912.
[19] 程文冬,付 锐,袁 伟,等.驾驶人注意力分散的图像检测与分级预警[J].计算机辅助设计与图形学学报,2016,28(8):1287-1296.
CHENG Wen-dong, FU Rui, YUAN Wei, et al. Driver attention distraction detection and hierarchical prewarning based on machine vision[J]. Journal of Computer-Aided Design and Computer Graphics, 2016, 28(8): 1287-1296.(in Chinese)
[20] 梁敏健,崔啸宇,宋青松,等.基于HOG-Gabor特征融合与Softmax分类器的交通标志识别方法[J].交通运输工程学报,2017,17(3):151-158.
LIANG Min-jian, CUI Xiao-yu, SONG Qing-song, et al. Traffic sign recognition method based on HOG-Gabor feature fusion and Softmax classifier[J]. Journal of Traffic and Transportation Engineering, 2017, 17(3): 151-158.(in Chinese)
[21] ZHENG Jin-qing, FENG Zhi-yong, XU Chao, et al. Fusing shape and spatio-temporal features for depth-based dynamic hand gesture recognition[J]. Multimedia Tools and Applications, 2017, 76(20): 20525-20544.
[22] SAVAKIS A, SHARMA R, KUMAR M. Efficient eye detection using HOG-PCA descriptor[C]∥SPIE. Imaging and Multimedia Analytics in a Web and Mobile World 2014. Bellingham: SPIE, 2014: 1-8.
[23] WOLD S, ESBENSEN K, GELADI P. Principal component analysis[J]. Chemometrics and Intelligent Laboratory Systems, 1987, 2(1-3): 37-52.
[24] DENG An-wen, GWO Chih-ying. Fast and stable algorithms for high-order Pseudo Zernike moments and image reconstruction[J]. Applied Mathematics and Computation, 2018, 334: 239-253.
[25] BERA A, KLESK P, SYCHEL D. Constant-time calculation of Zernike moments for detection with rotational invariance[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 41(3): 537-551.
[26] JIA Jian-hua, YANG Ning, ZHANG Chao, et al. Object-oriented feature selection of high spatial resolution images using an improved Relief algorithm[J]. Mathematical and Computer Modelling, 2013, 58(3/4): 619-626.
[27] BURGES C J C. A tutorial on support vector machines for pattern recognition[J]. Data Mining and Knowledge Discovery, 1998, 2: 121-167.
[28] 秦华标,李雪梅,仝锡民,等.复杂环境下基于多特征决策融合的眼睛状态识别[J].光电子·激光,2014,25(4):777-783.
QIN Hua-biao, LI Xue-mei, TONG Xi-min, et al. Eye state recognition in complex environment based on multi-feature decision fusion[J]. Journal of Optoelectronics·Laser, 2014, 25(4): 777-783.(in Chinese)
[29] SUN Ya-xin, WEN Gui-hua, WANG Jia-bing. Weighted spectral features based on local Hu moments for speech emotion recognition[J]. Biomedical Signal Processing and Control, 2015, 18: 80-90.
[30] 聂隐愚,唐 兆,常 建,等.基于单目图像的列车事故场景三维重建[J].交通运输工程学报,2017,17(1):149-158.
NIE Yin-yu, TANG Zhao, CHANG Jian, et al. 3D reconstruction of train accident scene based on monocular image[J]. Journal of Traffic and Transportation Engineering, 2017, 17(1): 149-158.(in Chinese)
[31] LUO J, GWUN O. A comparison of SIFT, PCA-SIFT and SURF[J]. International Journal of Image Processing, 2009, 3(4): 1-10.

Last Update: 2019-09-03