Device-free wireless localization and activity recognition (DFLAR) is an emerging technique that estimates the location and activity of a target by analyzing the shadowing effect the target has on surrounding wireless links. Because it neither requires the target to carry any device nor raises privacy concerns, it is an attractive and promising technique for many emerging smart applications. The key challenge in DFLAR is how to characterize the influence of the target on the wireless signals. Existing work generally relies on statistical features extracted from the signals, such as the mean and variance in the time domain and the energy and entropy in the frequency domain. However, a feature that is well suited to distinguishing some activities or gestures may perform poorly when used to recognize others, so features must be handcrafted for each specific application. Inspired by the ability of deep learning to extract universal and discriminative features, this paper proposes a deep learning approach to DFLAR. Specifically, we design a sparse autoencoder network that automatically learns discriminative features from the wireless signals and feed the learned features into a softmax-regression-based machine learning framework to perform location, activity, and gesture recognition simultaneously. Extensive experiments in a cluttered indoor laboratory and an apartment, each equipped with eight wireless nodes, demonstrate that a DFLAR system using the learned features achieves an accuracy of 0.85 or higher, outperforming systems that rely on traditional handcrafted features.
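The following minimal sketch (not the authors' implementation) illustrates the pipeline described in the abstract: a sparse autoencoder is pretrained on per-sample wireless link measurements with a reconstruction loss plus a KL-divergence sparsity penalty, and a softmax-regression classifier is then trained on the learned features. The layer sizes, link count, sparsity target rho, penalty weight beta, and training schedule are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseAutoencoder(nn.Module):
    def __init__(self, n_links=28, n_hidden=100):
        # n_links: number of link measurements per sample (assumed value)
        super().__init__()
        self.encoder = nn.Linear(n_links, n_hidden)
        self.decoder = nn.Linear(n_hidden, n_links)

    def forward(self, x):
        h = torch.sigmoid(self.encoder(x))   # learned feature vector
        x_hat = self.decoder(h)              # reconstruction of the input
        return h, x_hat

def kl_sparsity(h, rho=0.05):
    # KL divergence between the target sparsity rho and the mean hidden activation
    rho_hat = h.mean(dim=0).clamp(1e-6, 1 - 1e-6)
    return (rho * torch.log(rho / rho_hat)
            + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))).sum()

def pretrain_autoencoder(ae, x, epochs=200, beta=1.0, lr=1e-3):
    # Unsupervised feature learning: reconstruction loss + sparsity penalty
    opt = torch.optim.Adam(ae.parameters(), lr=lr)
    for _ in range(epochs):
        h, x_hat = ae(x)
        loss = F.mse_loss(x_hat, x) + beta * kl_sparsity(h)
        opt.zero_grad()
        loss.backward()
        opt.step()

def train_softmax(ae, x, y, n_classes, epochs=200, lr=1e-3):
    # Softmax regression over the learned features; y holds the
    # location/activity/gesture class labels for each sample
    clf = nn.Linear(ae.encoder.out_features, n_classes)
    opt = torch.optim.Adam(clf.parameters(), lr=lr)
    for _ in range(epochs):
        with torch.no_grad():
            h, _ = ae(x)                     # features from the frozen encoder
        loss = F.cross_entropy(clf(h), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return clf

In this sketch, location, activity, and gesture recognition would each use a classifier of this form trained on the same learned features; how the three tasks share the network is a design detail of the paper, not shown here.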