Deformable Convolutional Networks for Multimodal Human Activity Recognition Using Wearable Sensors

Recent years have witnessed significant success of convolutional neural networks (CNNs) in human activity recognition (HAR) using wearable sensors. Nevertheless, prior works have an obvious drawback: an activity sample may contain heterogeneous sensor modalities from different body parts, and the significance of each modality changes over time. Because a standard convolution filter samples activity data on a fixed regular grid, it struggles to capture salient features of activities across different sensor modalities or time intervals, and determining the best filter form for activity recognition remains a challenging task. In this article, to resolve this issue, we present a new deformable convolutional network for recognizing human activities from intricate sensory data. Specifically, learned offsets and feature amplitudes are added to the standard convolution, which can be modulated to allow more free-form deformation of the sampling grid over the sensory data. Compared with previous results, we achieve state-of-the-art recognition accuracies of 82.91%, 80.02%, 97.35%, and 99.21% on several benchmark HAR datasets, namely OPPORTUNITY, UNIMIB-SHAR, USC-HAD, and WISDM, respectively, indicating the advantage of the proposed method. A visual analysis is provided, which shows that the deformation is conditioned on the input activity sample: the receptive field and the sampling locations are adjusted adaptively, leading to better interpretability of the deep model's behavior. By installing PyTorch on a Raspberry Pi 3 Model B+ system, we also evaluate the actual run time of the deformable model; the results show that the deformable filter still maintains almost the same inference time, which is very beneficial for activity recognition tasks. Our work can promote further research that leverages intermodulating information to connect deformable convolution and attention modules.
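As a rough illustration of the idea described above (not the authors' released code), the following minimal sketch builds a modulated deformable convolution block for sensor data with torchvision.ops.deform_conv2d: a small auxiliary convolution predicts per-sampling-point offsets and modulation amplitudes from the input, and the main filter is then applied at the shifted, reweighted locations. The class name ModulatedDeformConvBlock and the (batch, channels, time steps, sensor axes) input layout are illustrative assumptions, not taken from the paper.

    # A minimal sketch, assuming torchvision >= 0.9 (deform_conv2d with a mask);
    # an illustration of modulated deformable convolution, not the paper's code.
    # Assumed input layout: (batch, channels, time steps, sensor axes).
    import torch
    import torch.nn as nn
    from torchvision.ops import deform_conv2d


    class ModulatedDeformConvBlock(nn.Module):  # hypothetical name
        def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
            super().__init__()
            self.kernel_size = kernel_size
            self.padding = padding
            # Standard convolution weights, applied at the deformed sampling points.
            self.weight = nn.Parameter(
                torch.empty(out_ch, in_ch, kernel_size, kernel_size))
            nn.init.kaiming_uniform_(self.weight, a=5 ** 0.5)
            # Auxiliary convolution predicts, per kernel point, two offsets (dy, dx)
            # and one modulation amplitude, conditioned on the input sample.
            n_points = kernel_size * kernel_size
            self.offset_mask = nn.Conv2d(
                in_ch, 3 * n_points, kernel_size, padding=padding)
            nn.init.zeros_(self.offset_mask.weight)  # start from the regular grid
            nn.init.zeros_(self.offset_mask.bias)

        def forward(self, x):
            n_points = self.kernel_size * self.kernel_size
            out = self.offset_mask(x)
            offset = out[:, : 2 * n_points]              # free-form grid deformation
            mask = torch.sigmoid(out[:, 2 * n_points:])  # amplitudes in (0, 1)
            return deform_conv2d(x, offset, self.weight,
                                 padding=self.padding, mask=mask)


    # Usage: 8 samples, 1 input channel, 128 time steps, 9 inertial sensor axes.
    x = torch.randn(8, 1, 128, 9)
    y = ModulatedDeformConvBlock(in_ch=1, out_ch=64)(x)
    print(y.shape)  # torch.Size([8, 64, 128, 9])

Because the offset/mask predictor is zero-initialized, the block starts out as an ordinary convolution and only learns to deform and reweight the sampling grid where the training data warrant it.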

Keywords: human activity recognition; wearable sensors

Journal Title: IEEE Transactions on Instrumentation and Measurement
Year Published: 2022
