Unconstrained human activity recognition with a radar network is considered. A hybrid classifier combining convolutional neural networks (CNNs) and recurrent neural networks (RNNs) for spatial–temporal pattern extraction is proposed. The 2-D CNNs (2D-CNNs) are first applied to the radar data to perform spatial feature extraction on the input spectrograms. Subsequently, gated recurrent units (GRUs) with bidirectional implementations are used to capture the long- and short-term temporal dependencies in the feature maps generated by the 2D-CNNs. Three NN-based data fusion methods were explored and compared to utilize the rich information provided by the different radar nodes. The performance of the proposed classifier was validated rigorously using the K-fold cross-validation (CV) and leave-one-person-out (L1PO) methods. Unlike comparable research, a dataset of continuous human activities, with seamless inter-activity transitions that can occur at any time and unconstrained moving trajectories of the participants, has been collected and used for evaluation. A classification accuracy of about 90.8% is achieved for nine-class human activity recognition (HAR) by the proposed classifier with the halfway fusion method.
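To make the described pipeline concrete, the following is a minimal PyTorch sketch of a 2D-CNN followed by a bidirectional GRU for a single radar node. The class name `CnnBiGruClassifier`, all layer sizes, and the input spectrogram shape are illustrative assumptions, not the paper's configuration; the paper's halfway fusion would additionally combine per-node CNN feature maps across radar nodes before the recurrent stage, which is not shown here.

```python
import torch
import torch.nn as nn

class CnnBiGruClassifier(nn.Module):
    """Sketch of a hybrid spectrogram classifier: a 2D-CNN front end for
    spatial feature extraction, then a bidirectional GRU over the time axis.
    Layer sizes and input shape are illustrative assumptions."""

    def __init__(self, n_classes: int = 9, hidden: int = 64):
        super().__init__()
        # 2D-CNN: spatial feature extraction on the input spectrogram
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Bidirectional GRU: long- and short-term temporal dependencies
        # across the time steps of the CNN feature maps
        # (input_size = channels * pooled Doppler bins for a 128x128 input)
        self.gru = nn.GRU(input_size=32 * 32, hidden_size=hidden,
                          batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, doppler_bins, time_steps), e.g. (B, 1, 128, 128)
        f = self.cnn(x)              # (B, 32, 32, 32)
        f = f.permute(0, 3, 1, 2)    # time axis first: (B, T', C, D')
        f = f.flatten(start_dim=2)   # per-step feature vectors: (B, T', C*D')
        out, _ = self.gru(f)         # (B, T', 2*hidden)
        return self.head(out[:, -1]) # class logits from the last time step

model = CnnBiGruClassifier()
logits = model(torch.randn(4, 1, 128, 128))  # -> shape (4, 9)
```

For continuous, unsegmented activity streams of the kind the paper evaluates, one would typically emit a prediction per time step (applying the classification head to every GRU output) rather than only at the final step, so that activity transitions occurring at any time can be tracked.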