Abstract
We treat magnetoencephalographic (MEG) data in a signal detection framework to discriminate between different phonemes heard by a test subject. Our data set consists of responses evoked by the voiced syllables /bae/ and /dae/ and the corresponding voiceless syllables /pae/ and /tae/. The data lend themselves well to principal component analysis (PCA), with a reasonable subspace on the order of three components out of 37 channels. To discriminate between responses to the voiced and voiceless versions of a consonant, we form a feature vector by either matched filtering or wavelet packet decomposition and use a mixture-of-experts model to classify the stimuli. Both choices of feature vector lead to a significant detection accuracy. Furthermore, we show how to estimate the onset time of a stimulus from a continuous data stream. (C) 2000 Elsevier Science B.V.
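The dimensionality-reduction and matched-filtering steps described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the epoch dimensions, the random data, and the choice of the grand-average waveform as the matched-filter template are all assumptions made here for concreteness; only the 37-channel count and the three-component PCA subspace come from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for evoked MEG epochs: 80 trials x 37 channels x 200 samples.
n_trials, n_channels, n_times = 80, 37, 200
epochs = rng.standard_normal((n_trials, n_channels, n_times))

# PCA over channels: treat every time point of every trial as one observation.
X = epochs.transpose(0, 2, 1).reshape(-1, n_channels)   # (trials*times, channels)
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
components = Vt[:3]                                     # 3 spatial components (3, 37)

# Project each epoch onto the 3-component subspace.
scores = (epochs.transpose(0, 2, 1) @ components.T)     # (trials, times, 3)
scores = scores.transpose(0, 2, 1)                      # (trials, 3, times)

# Matched filtering: correlate each component time course with a template.
# The grand average is used as the template here; the paper's template is unspecified.
template = scores.mean(axis=0)                          # (3, times)
features = (scores * template).sum(axis=-1)             # (trials, 3) feature vectors
```

The resulting three-dimensional feature vectors would then be fed to a classifier such as the mixture-of-experts model mentioned in the abstract.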
| Original language | English (US) |
|---|---|
| Pages (from-to) | 153-165 |
| Number of pages | 13 |
| Journal | Neurocomputing |
| Volume | 31 |
| Issue number | 1-4 |
| DOIs | |
| State | Published - Mar 2000 |
Keywords
- MEG data
- Phoneme discrimination
- Signal detection
ASJC Scopus subject areas
- Computer Science Applications
- Cognitive Neuroscience
- Artificial Intelligence