2022-09-13
Human Motion Recognition in Small Sample Scenarios Based on GAN and CNN Models
By Ying-Jie Zhong and Qiusheng Li
Progress In Electromagnetics Research M, Vol. 113, 101-113, 2022
Abstract
In radar-based human motion classification and recognition, traditional manual feature extraction is complicated, and echo datasets are generally small. To address this problem, a human motion recognition method for small sample scenarios based on Generative Adversarial Network (GAN) and Convolutional Neural Network (CNN) models is proposed. First, a 77 GHz millimeter-wave radar data acquisition system is built to acquire echo data. Second, the collected human motion echo data are preprocessed, micro-Doppler features are extracted, and the range-Doppler map (RDM) of each frame is projected onto the velocity dimension and accumulated frame by frame to build a dataset of two-dimensional micro-Doppler time-frequency maps of human motion. Finally, a deep convolutional generative adversarial network (DCGAN) is constructed to augment the sample set, and a CNN is constructed to perform automatic feature extraction and classify the different human motions. Experiments show that the combination of GAN and CNN achieves effective recognition of daily human motions, with a recognition accuracy of 96.5%. Compared with manual feature extraction, the recognition accuracy of the CNN is improved by 7.3%; compared with the original dataset, the recognition accuracy based on the augmented dataset is improved by 2.17%, which shows that the GAN performs well for human motion recognition in small sample scenarios.
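As a rough illustration of the preprocessing step described above, the sketch below (a minimal example, not the authors' code) builds a micro-Doppler time-frequency map from a raw FMCW radar data cube by computing a range-Doppler map for each frame, projecting it onto the velocity dimension, and stacking the projections frame by frame. The cube layout, windowing, and dB scaling are assumptions made for illustration.

```python
# Minimal preprocessing sketch (assumed cube layout: frames x chirps x samples).
import numpy as np

def micro_doppler_map(cube):
    """Return a (velocity bins x frames) micro-Doppler time-frequency map."""
    n_frames, n_chirps, n_samples = cube.shape
    cube = cube * np.hanning(n_samples)[None, None, :]             # range window
    range_fft = np.fft.fft(cube, axis=2)                            # fast-time (range) FFT
    rdm = np.fft.fftshift(np.fft.fft(range_fft, axis=1), axes=1)    # slow-time (Doppler) FFT
    # Project each frame's RDM onto the velocity (Doppler) axis by summing
    # power over range bins, then stack the frames along the time axis.
    projection = np.abs(rdm).sum(axis=2)                            # (frames, chirps)
    return 20 * np.log10(projection.T + 1e-6)                       # dB scale

# Example with simulated data: 128 frames, 64 chirps per frame, 256 samples per chirp.
cube = np.random.randn(128, 64, 256) + 1j * np.random.randn(128, 64, 256)
print(micro_doppler_map(cube).shape)   # (64, 128): velocity bins x frames
```

The recognition stage could then follow a standard DCGAN-plus-CNN pattern: a DCGAN generator synthesizes additional time-frequency maps to augment the small training set, and a compact CNN classifies the real and generated maps. The sketch below is a generic PyTorch layout under assumed settings (64 x 64 single-channel inputs, a 100-dimensional latent vector, and 7 motion classes); it does not reproduce the paper's actual architectures or hyperparameters.

```python
# Generic DCGAN generator and CNN classifier sketch (assumed sizes, not the paper's exact models).
import torch
import torch.nn as nn

generator = nn.Sequential(                     # z of shape (100, 1, 1) -> 1 x 64 x 64 map
    nn.ConvTranspose2d(100, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),
    nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
    nn.ConvTranspose2d(128, 64, 4, 2, 1),  nn.BatchNorm2d(64),  nn.ReLU(True),
    nn.ConvTranspose2d(64, 32, 4, 2, 1),   nn.BatchNorm2d(32),  nn.ReLU(True),
    nn.ConvTranspose2d(32, 1, 4, 2, 1),    nn.Tanh(),
)

classifier = nn.Sequential(                    # 1 x 64 x 64 map -> 7 motion classes
    nn.Conv2d(1, 16, 3, padding=1),  nn.ReLU(), nn.MaxPool2d(2),   # -> 16 x 32 x 32
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # -> 32 x 16 x 16
    nn.Flatten(), nn.Linear(32 * 16 * 16, 7),
)

fake_maps = generator(torch.randn(8, 100, 1, 1))       # synthetic augmentation samples
print(fake_maps.shape, classifier(fake_maps).shape)    # (8, 1, 64, 64) (8, 7)
```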
Citation
Ying-Jie Zhong and Qiusheng Li, "Human Motion Recognition in Small Sample Scenarios Based on GAN and CNN Models," Progress In Electromagnetics Research M, Vol. 113, 101-113, 2022.
doi:10.2528/PIERM22070204