8th Speech in Noise Workshop, 7-8 January 2016, Groningen

EEG decoding of continuous speech in realistic acoustic scenes

Jens Hjortkjær(a)
Technical University of Denmark

(a) Presenting

A number of studies have recently demonstrated that EEG measurements of cortical oscillations entrained to slow speech envelope fluctuations (<10 Hz) can be used to decode which of two competing talkers a listener is attending to. Although auditory cortex is thought to represent speech in a robust, noise-invariant fashion, it remains unknown whether EEG can be used to decode speech in more challenging, real-life acoustic environments. To examine this, we have been recording EEG responses to natural speech in a range of realistic acoustic scenarios. Using a multi-loudspeaker spherical array with ambisonics technology, we simulated real rooms with varying degrees of reverberation and competing noise sources, and distributed the natural speech targets at different positions in the virtual rooms. Analysis of single-trial continuous EEG recorded in these virtual scenes suggests that low-frequency oscillatory activity can be used to decode both (a) which speech source the listener is attending to and (b) the spatial direction of that source, even in scenarios involving considerable reverberation and multiple interfering talkers.
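[Editor's note: the envelope-based attention decoding the abstract refers to is commonly implemented as a linear backward (stimulus-reconstruction) model. Below is a minimal sketch in Python/NumPy under that assumption, using synthetic data; the sampling rate, lag window, regularization, and all variable names are illustrative and not taken from the study.]

    # Minimal sketch of backward-model attention decoding, assuming a
    # linear stimulus-reconstruction approach with ridge regression.
    # Synthetic stand-ins for <10 Hz filtered EEG and speech envelopes.
    import numpy as np

    rng = np.random.default_rng(0)
    fs = 64                       # assumed post-downsampling rate (Hz)
    n_samples, n_channels = fs * 60, 32
    n_lags = fs // 4              # 250 ms of decoder lags (assumed)

    # Synthetic envelopes; EEG weakly tracks the attended envelope,
    # standing in for cortical entrainment.
    env_attended = rng.standard_normal(n_samples)
    env_unattended = rng.standard_normal(n_samples)
    eeg = 0.1 * env_attended[:, None] \
        + rng.standard_normal((n_samples, n_channels))

    def lagged_design(eeg, n_lags):
        """Stack time-lagged copies of each channel into a design matrix."""
        X = np.zeros((len(eeg), eeg.shape[1] * n_lags))
        for lag in range(n_lags):
            X[lag:, lag::n_lags] = eeg[:len(eeg) - lag]
        return X

    X = lagged_design(eeg, n_lags)

    # Fit a ridge-regression decoder mapping lagged EEG -> attended envelope.
    # (A real analysis would cross-validate; this sketch fits and tests on
    # the same data purely for illustration.)
    lam = 1e2
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ env_attended)

    # Decode attention: reconstruct the envelope and pick the talker whose
    # envelope correlates best with the reconstruction (single trial).
    recon = X @ w
    r_att = np.corrcoef(recon, env_attended)[0, 1]
    r_unatt = np.corrcoef(recon, env_unattended)[0, 1]
    print(f"r(attended)={r_att:.3f}, r(unattended)={r_unatt:.3f}")
    print("decoded:", "talker A" if r_att > r_unatt else "talker B")

[The same correlation-based comparison extends to more than two candidate sources, as in the multi-talker scenes described above.]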
