Polysomnography (PSG) is considered the gold standard for sleep staging but is labor-intensive and expensive. Wrist wearables are an attractive alternative to PSG because of their small form factor and continuous monitoring capability. In this work, we present a deep learning scheme for automated sleep staging in the MESA cohort, validated against PSG. The scheme uses actigraphic activity counts and two coarse heart rate measures (only the mean and standard deviation for each 30-s sleep epoch) to perform multi-class sleep staging. Our method outperforms existing techniques in three-stage classification (i.e., wake, NREM, and REM) and is feasible for four-stage classification (i.e., wake, light, deep, and REM). Our technique uses a convolutional neural network coupled with a sequence-to-sequence network architecture to exploit the temporal correlations in sleep for classification. The network was trained in a supervised fashion with the PSG stage label of each sleep epoch as the target. We used data from MESA participants randomly assigned to non-overlapping training (N=608) and validation (N=200) cohorts. The under-representation of deep sleep in the data leads to class imbalance, which diminishes deep sleep prediction accuracy. To specifically address this class imbalance, we use a novel loss function that is minimized during network training. Our network leads to accuracies of 78.66% and 72.46% for three-class and four-class sleep staging, respectively. Our three-stage classifier is especially accurate at measuring NREM sleep time (predicted: 4.98 ± 1.26 hrs. vs. actual: 5.08 ± 0.98 hrs. from PSG). Similarly, our four-stage classifier leads to highly accurate estimates of light sleep time (predicted: 4.33 ± 1.20 hrs. vs. actual: 4.46 ± 1.04 hrs. from PSG) and deep sleep time (predicted: 0.62 ± 0.65 hrs. vs. actual: 0.63 ± 0.59 hrs. from PSG). Lastly, we demonstrate the feasibility of our method for sleep staging from Apple Watch-derived measurements. This work demonstrates the viability of high-accuracy, automated multi-class sleep staging from actigraphy and coarse heart rate measures that are device-agnostic and therefore well suited for extraction from smartwatches and other consumer wrist wearables. This work was supported in part by NIH grant 1R21AG068890-01 and the American Association of University Women.
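To make the described architecture concrete, the sketch below shows one plausible way to combine a convolutional feature extractor with a sequence model for per-epoch staging. The abstract does not give layer sizes or the exact sequence-to-sequence design, so every dimension, the choice of a bidirectional GRU, and the class name SleepStager here are illustrative assumptions, not the authors' implementation. Inputs follow the abstract: three features per 30-s epoch (activity count, HR mean, HR standard deviation).

```python
# Hypothetical sketch (not the paper's exact model): a 1D CNN over the epoch
# axis feeds a bidirectional GRU, which emits one stage prediction per epoch.
import torch
import torch.nn as nn

class SleepStager(nn.Module):
    def __init__(self, n_features=3, n_classes=4, hidden=64):
        super().__init__()
        # 1D convolutions extract local temporal patterns across epochs.
        self.cnn = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # Bidirectional GRU captures long-range correlations across the night.
        self.rnn = nn.GRU(64, hidden, batch_first=True, bidirectional=True)
        # Per-epoch classifier over concatenated forward/backward states.
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):
        # x: (batch, epochs, features); Conv1d expects (batch, features, epochs).
        h = self.cnn(x.transpose(1, 2)).transpose(1, 2)
        h, _ = self.rnn(h)
        return self.head(h)  # (batch, epochs, n_classes) logits per epoch

# Example: one night of 960 thirty-second epochs (8 hours).
model = SleepStager()
logits = model(torch.randn(1, 960, 3))  # -> shape (1, 960, 4)
```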
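The abstract states that a novel loss function counteracts the under-representation of deep sleep but does not specify its form. As a stand-in, the sketch below shows a common imbalance remedy, cross-entropy weighted by inverse class frequency, which up-weights rare stages during training; the helper name make_weighted_loss is an assumption for illustration and should not be read as the paper's loss.

```python
# Stand-in for the paper's unspecified imbalance-aware loss: inverse-frequency
# weighted cross-entropy, a standard remedy for skewed stage distributions.
import torch
import torch.nn as nn

def make_weighted_loss(stage_labels, n_classes=4):
    """stage_labels: 1-D tensor of PSG stage indices over the training set."""
    counts = torch.bincount(stage_labels, minlength=n_classes).float()
    # Weight each class by the inverse of its frequency (rare -> large weight).
    weights = counts.sum() / (n_classes * counts.clamp(min=1))
    return nn.CrossEntropyLoss(weight=weights)

# Usage: flatten per-epoch logits and labels before computing the loss.
labels = torch.randint(0, 4, (960,))          # one night of stage labels
criterion = make_weighted_loss(labels)
logits = torch.randn(960, 4)                  # per-epoch class logits
loss = criterion(logits, labels)
```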