This paper considers the auditory attention detection (AAD) paradigm, in which the goal is to determine which of two simultaneous speakers a person is attending to. The paradigm relies on recordings of the listener’s brain activity, e.g., from electroencephalography (EEG). To perform AAD, decoded EEG signals are typically correlated with the temporal envelopes of the speech signals of the individual speakers. In this paper, we study how including various degrees of auditory modelling in this speech envelope extraction process affects AAD performance; the best performance is found for an auditory-inspired linear filter bank followed by power-law compression. These two modelling stages are computationally cheap, which is important for implementation in wearable devices such as future neuro-steered auditory prostheses. We also introduce a more natural way to combine recordings (over trials and subjects) when training the decoder, which reduces the algorithm’s dependence on regularization parameters. Finally, we investigate the simultaneous design of the EEG decoder and the audio subband envelope recombination weight vector using either norm-constrained least squares or canonical correlation analysis, but conclude that this increases computational complexity without improving AAD performance.
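To make the pipeline concrete, the sketch below illustrates the two stages the abstract highlights: an auditory-inspired linear filter bank with power-law compression for envelope extraction, followed by the standard correlation-based AAD decision. It is a minimal illustration, not the paper's implementation: ERB-spaced Butterworth bands stand in for the auditory filter bank, the compression exponent 0.6 is a common choice rather than a value taken from the paper, subband recombination weights are uniform, and the linear backward decoder is assumed to be pre-trained. All function names are hypothetical.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert, resample_poly


def erb_centres(f_low, f_high, n_bands):
    """Centre frequencies equally spaced on the ERB-rate scale
    (Glasberg & Moore approximation)."""
    erb = lambda f: 21.4 * np.log10(1.0 + 0.00437 * f)
    inv = lambda e: (10.0 ** (e / 21.4) - 1.0) / 0.00437
    return inv(np.linspace(erb(f_low), erb(f_high), n_bands))


def subband_envelope(audio, fs, n_bands=15, exponent=0.6, fs_out=64):
    """Auditory-inspired envelope extraction: linear filter bank,
    per-band envelope detection, power-law compression, uniform
    recombination, and downsampling to the EEG sample rate."""
    env = np.zeros_like(audio, dtype=float)
    for fc in erb_centres(150.0, 4000.0, n_bands):
        lo, hi = fc / 2**0.25, fc * 2**0.25        # ~half-octave band edges
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, audio)
        env += np.abs(hilbert(band)) ** exponent    # power-law compression
    return resample_poly(env, fs_out, int(fs))      # match the EEG rate


def aad_decision(eeg_lagged, decoder, env_1, env_2):
    """Backward decoding: reconstruct the attended envelope from
    time-lagged EEG (shape: samples x channels*lags) with a pre-trained
    linear decoder, then pick the speaker whose envelope correlates best."""
    recon = eeg_lagged @ decoder
    r = lambda a, b: np.corrcoef(a, b)[0, 1]
    return 1 if r(recon, env_1) > r(recon, env_2) else 2
```

In this sketch the subband envelopes are summed with uniform weights; the paper's final experiment corresponds to additionally learning that recombination weight vector jointly with the decoder, which it concludes does not improve AAD performance.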