
A Comparison Between Various Human Detectors and CNN-Based Feature Extractors for Human Activity Recognition Via Aerial Captured Video Sequences



Human detection and activity recognition (HDAR) in videos plays an important role in various real-life applications. Recently, object detection methods such as “you only look once” (YOLO), faster region-based convolutional neural network (Faster R-CNN), and EfficientDet have been used to detect humans in videos for subsequent decision-making applications. This paper addresses the problem of human detection in aerially captured video sequences recorded by a moving camera on an aerial platform, under dynamic conditions such as varying altitude, illumination changes, camera jitter, and variations in viewpoint, object size, and color. Unlike traditional datasets, whose frames are captured by a static ground camera and in which humans occupy medium-to-large regions of each frame, the UCF-ARG aerial dataset is more challenging because of the large distance between the camera and the humans in its videos. The performance of human detection methods described in the literature is often degraded when input video frames are distorted by noise, blur, illumination changes, and the like.

To address these limitations, the object detection methods used in this study were trained on the COCO dataset and evaluated on the publicly available UCF-ARG dataset, and the detectors were compared in terms of detection accuracy. The performance evaluation considers five human actions (digging, waving, throwing, walking, and running). Experimental results demonstrated that EfficientDetD7 outperformed the other detectors, with 92.9% average accuracy in detecting all activities under various conditions, including blurring, added Gaussian noise, lightening, and darkening.

Additionally, deep pre-trained convolutional neural networks (CNNs) such as ResNet and EfficientNet were used to transfer learning from the ImageNet dataset to the UCF-ARG dataset and to extract highly informative features from the detected and cropped human patches. The extracted spatial features were fed to a Long Short-Term Memory (LSTM) network to capture temporal relations between features for human activity recognition (HAR). Experimental results showed that EfficientNetB7-LSTM outperformed existing HAR methods in terms of average accuracy (80%), average precision (83%), average recall (80%), average F1 score (80%), average false negative rate (FNR, 20%), average false positive rate (FPR, 4.8%), and average area under the curve (AUC, 94%). The outcome is a robust HAR system that combines EfficientDetD7 for human detection with EfficientNetB7 and LSTM for activity classification.
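As a rough sketch of the recognition stage described above, the following Python/Keras snippet wires an ImageNet-pretrained EfficientNetB7 (frozen, used as a per-frame spatial feature extractor) into an LSTM classifier over the five UCF-ARG activities. The sequence length, patch size, LSTM width, and training configuration are illustrative assumptions rather than the paper's reported settings, and the EfficientDetD7 detection stage is assumed to have already produced the cropped human patches.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import EfficientNetB7

# Illustrative settings (assumptions, not the paper's configuration).
SEQ_LEN = 16        # frames per clip
PATCH_SIZE = 224    # cropped-patch side; EfficientNetB7's native input is 600
NUM_CLASSES = 5     # digging, waving, throwing, walking, running

# Frozen ImageNet-pretrained EfficientNetB7 as a per-frame feature extractor.
backbone = EfficientNetB7(include_top=False, weights="imagenet",
                          pooling="avg",
                          input_shape=(PATCH_SIZE, PATCH_SIZE, 3))
backbone.trainable = False

# TimeDistributed applies the backbone to each frame, turning a clip into a
# sequence of 2560-d feature vectors that the LSTM aggregates over time.
model = models.Sequential([
    layers.Input(shape=(SEQ_LEN, PATCH_SIZE, PATCH_SIZE, 3)),
    layers.TimeDistributed(backbone),
    layers.LSTM(256),                        # hidden width is an assumption
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# A dummy clip of cropped human patches; Keras EfficientNet rescales raw
# [0, 255] pixel values internally, so no manual preprocessing is needed.
clip = tf.random.uniform((1, SEQ_LEN, PATCH_SIZE, PATCH_SIZE, 3), maxval=255.0)
probs = model(clip)   # shape (1, 5): one probability per activity
```

A lighter EfficientNet variant (e.g., B0) makes prototyping faster, since B7 is memory-hungry even at this reduced input size; the overall wiring is unchanged.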

Keywords: human detection; aerial captured video; activity recognition

Journal Title: IEEE Access
Year Published: 2022


