Objective: The tradeoff between calibration effort and model performance still hinders the user experience of steady-state visual evoked potential brain-computer interfaces (SSVEP-BCIs). To address this issue and improve model generalizability, this work investigated adaptation from a cross-dataset model to avoid the training process while maintaining high prediction ability. Methods: When a new subject enrolls, a group of user-independent (UI) models is recommended as the representative from a multi-source data pool. The representative model is then augmented with online adaptation and transfer learning techniques based on user-dependent (UD) data. The proposed method was validated in both offline (N=55) and online (N=12) experiments. Results: Compared with UD adaptation, the recommended representative model saved approximately 160 calibration trials for a new user. In the online experiment, the time window decreased from 2 s to 0.56±0.2 s while maintaining high prediction accuracy of 0.89-0.96. Finally, the proposed method achieved an average information transfer rate (ITR) of 243.49 bits/min, the highest ITR ever reported in a completely calibration-free setting. The offline results were consistent with the online experiment. Conclusion: Representative models can be recommended even in a cross-subject/device/session situation. With the help of the representative UI data, the proposed method can achieve sustained high performance without a training process. Significance: This work provides an adaptive approach to transferable models for SSVEP-BCIs, enabling a more generalized, plug-and-play, and high-performance BCI free of calibration.
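For context on the reported 243.49 bits/min figure, ITR for an n-class BCI is conventionally computed with the Wolpaw formula from the number of targets, the classification accuracy, and the per-trial selection time. The sketch below is illustrative only: the target count (40) and the 0.5 s gaze-shift time are assumptions typical of SSVEP speller benchmarks, not values stated in the abstract.

```python
import math

def itr_bits_per_min(n_targets: int, accuracy: float, trial_time_s: float) -> float:
    """Wolpaw information transfer rate (bits/min) for an n-class BCI.

    bits/trial = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))
    ITR        = bits/trial * 60 / T
    """
    n, p, t = n_targets, accuracy, trial_time_s
    bits = math.log2(n)
    if 0 < p < 1:  # the P*log2(P) terms vanish at P = 1
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * 60.0 / t

# Assumed setup: 40-target speller, 0.92 accuracy (mid-range of the
# reported 0.89-0.96), 0.56 s window plus a hypothetical 0.5 s gaze shift.
print(itr_bits_per_min(40, 0.92, 0.56 + 0.5))
```

With these assumed parameters the formula lands in the same range as the reported ITR, which illustrates why shrinking the time window from 2 s to ~0.56 s at sustained accuracy drives the large ITR gain.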