Learning from a subject’s calibration data can significantly improve the performance of a steady-state visually evoked potential (SSVEP)-based brain–computer interface (BCI); for example, state-of-the-art target recognition methods utilize learned subject-specific and stimulus-specific model parameters. Unfortunately, when dealing with new stimuli or new subjects, new calibration data must be acquired, requiring laborious calibration sessions, which is a major challenge in developing high-performance BCIs for real-life applications. This study investigates the feasibility of transferring the model parameters (i.e., the spatial filters and the SSVEP templates) across two different groups of visual stimuli in SSVEP-based BCIs. According to our exploration, we can extract a common spatial filter from the spatial filters across different stimulus frequencies and a common impulse response from the SSVEP templates across neighboring stimulus frequencies. The common spatial filter is taken as the transferred spatial filter, and the common impulse response is used to reconstruct the transferred SSVEP template, based on the theory that an SSVEP is a superposition of impulse responses. We then develop a transfer learning canonical correlation analysis (tlCCA) incorporating the transferred model parameters. For evaluation, we compare the recognition performance of the calibration-free methods, the calibration-based methods, and the proposed tlCCA on an SSVEP data set with 60 subjects. Experimental results show that the spatial filters share commonality across different frequencies and that the impulse responses share commonality across neighboring frequencies. More importantly, tlCCA performs significantly better than the calibration-free algorithms and comparably to the calibration-based algorithm.
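The superposition theory above implies that a template for a new stimulus frequency can be reconstructed by repeating a (transferred) impulse response once per visual flash, i.e., convolving it with a periodic impulse train. The sketch below illustrates that idea only; the function name and the toy impulse response are our own placeholders, not the paper's implementation.

```python
import numpy as np

def reconstruct_template(impulse_response, stim_freq, fs, n_samples):
    """Reconstruct an SSVEP template for a new stimulus frequency by
    superposing copies of a transferred impulse response, one per
    stimulation cycle (convolution with a periodic impulse train)."""
    # Periodic impulse train: one unit impulse per stimulation cycle.
    impulse_train = np.zeros(n_samples)
    period = fs / stim_freq  # samples per cycle (may be fractional)
    onsets = np.arange(0, n_samples, period).astype(int)
    impulse_train[onsets] = 1.0
    # Superposition of shifted impulse responses = linear convolution,
    # truncated to the template length.
    return np.convolve(impulse_train, impulse_response)[:n_samples]

# Example: build a 1 s template for a 10 Hz stimulus at 250 Hz sampling,
# using a toy exponentially decaying impulse response as a stand-in for
# the common impulse response learned from neighboring frequencies.
fs, n = 250, 250
h = np.exp(-np.arange(50) / 10.0)  # placeholder impulse response
template_10hz = reconstruct_template(h, 10.0, fs, n)
```

Because consecutive impulse responses overlap whenever the response outlasts one stimulation cycle, the resulting template is generally not a simple repetition of the impulse response itself.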
Note to Practitioners—This work is motivated by the long calibration time required to use a steady-state visually evoked potential (SSVEP)-based brain–computer interface (BCI), because most state-of-the-art frequency recognition methods consider only the situation in which the calibration data and the test data come from the same subject and the same visual stimulus. This article assumes that the model parameters share stimulus-nonspecific knowledge within a limited stimulus frequency range, so a subject’s old calibration data can be reused to learn new model parameters for new visual stimuli. First, the model parameters can be decomposed into stimulus-nonspecific (or subject-specific) knowledge and stimulus-specific knowledge. Second, new model parameters can be generated by transferring the knowledge across stimulus frequencies. A new recognition algorithm is then developed using the transferred model parameters. Experimental results validate these assumptions; moreover, the proposed scheme could be extended to other scenarios, such as facing new subjects or adopting new signal acquisition equipment, which would be helpful to the future development of zero-calibration SSVEP-based BCIs for real-life healthcare applications.
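Once a transferred spatial filter and transferred templates are available, recognition reduces to correlating the spatially filtered test trial with each candidate template and picking the best match. The sketch below shows that correlation-based step only, with synthetic data; the variable names are ours, and the paper's tlCCA combines several correlation features rather than this single one.

```python
import numpy as np

def recognize(test_eeg, spatial_filter, templates):
    """Classify a multichannel SSVEP trial: spatially filter it, then
    correlate the filtered signal with each transferred template and
    return the index of the best-matching stimulus."""
    # (channels,) @ (channels, samples) -> (samples,)
    filtered = spatial_filter @ test_eeg
    scores = [np.corrcoef(filtered, tmpl)[0, 1] for tmpl in templates]
    return int(np.argmax(scores)), scores

# Synthetic demo: a 10 Hz source projected onto 8 channels plus noise,
# classified against sinusoidal stand-ins for transferred templates.
fs, n = 250, 500
t = np.arange(n) / fs
freqs = [8.0, 10.0, 12.0]
templates = [np.sin(2 * np.pi * f * t) for f in freqs]
rng = np.random.default_rng(0)
mixing = rng.standard_normal(8)           # source-to-channel projection
eeg = np.outer(mixing, templates[1]) + 0.1 * rng.standard_normal((8, n))
w = mixing / np.linalg.norm(mixing) ** 2  # idealized spatial filter
pred, scores = recognize(eeg, w, templates)
```

In this idealized example the spatial filter inverts the known mixing, so `pred` identifies the 10 Hz stimulus; in practice the filter is learned from calibration data and transferred across frequencies as described above.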