Sensor-based human activity recognition (HAR) with machine learning requires a sufficiently large amount of annotated data to build an accurate classification model. This requirement has stimulated research on transfer learning, which minimizes the need for labeled data by transferring knowledge from an existing activity recognition domain. Existing approaches project the data of both domains into a common subspace, which by construction loses information; moreover, they rely on linear projections and therefore inherit the limitations of the linearity assumption. Some recent works introduce nonlinearity, using an autoencoder with a statistical-distance penalty to find a latent representation that minimizes the domain discrepancy. However, such approaches learn the latent representation for both domains at once, which yields a sub-optimal representation because the domains compensate for each other's reconstruction error during training. We propose an autoencoder-based domain adaptation approach for sensor-based HAR. It learns a latent representation that minimizes the discrepancy between domains by reducing a statistical distance. Instead of learning the representation of both domains simultaneously, our method is a two-phase approach: it first learns the representation for the domain of interest independently to ensure its optimality, and then transfers the useful information from the existing domain. We evaluate our approach on publicly available sensor-based HAR datasets in a cross-domain setup. The experimental results show that our approach significantly outperforms existing ones.
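The abstract does not specify which statistical distance is minimized between the two domains' latent codes; a common choice in autoencoder-based domain adaptation is the maximum mean discrepancy (MMD). The sketch below (an illustrative assumption, not the paper's confirmed loss) computes a biased RBF-kernel MMD estimate between hypothetical source and target latent representations, the quantity a second-phase training step would drive toward zero:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise squared Euclidean distances, mapped through a Gaussian kernel.
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * np.maximum(sq, 0.0))

def mmd2(source, target, gamma=1.0):
    # Biased estimate of the squared MMD between two sets of latent codes;
    # it is zero only when the two empirical distributions coincide in the RKHS.
    k_ss = rbf_kernel(source, source, gamma)
    k_tt = rbf_kernel(target, target, gamma)
    k_st = rbf_kernel(source, target, gamma)
    return k_ss.mean() + k_tt.mean() - 2.0 * k_st.mean()

rng = np.random.default_rng(0)
# Stand-ins for latent codes: same distribution vs. a mean-shifted one.
same = mmd2(rng.normal(0, 1, (200, 8)), rng.normal(0, 1, (200, 8)))
shifted = mmd2(rng.normal(0, 1, (200, 8)), rng.normal(2, 1, (200, 8)))
```

Used as a training penalty, this term would be added to the existing domain's reconstruction loss in the second phase, while the first phase optimizes the domain of interest's autoencoder on reconstruction alone; the bandwidth `gamma` and the latent dimensionality here are illustrative placeholders.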