Phoneme discrimination from MEG data

Tuomas Lukka, Bernd Schoner, Alec Marantz

Research output: Contribution to journal › Article › peer-review

Abstract

We treat magnetoencephalographic (MEG) data in a signal detection framework to discriminate between different phonemes heard by a test subject. Our data set consists of responses evoked by the voiced syllables /bae/ and /dae/ and the corresponding voiceless syllables /pae/ and /tae/. The data lend themselves well to principal component analysis (PCA), with a reasonable subspace on the order of three components out of 37 channels. To discriminate between responses to the voiced and voiceless versions of a consonant, we form a feature vector by either matched filtering or wavelet packet decomposition and use a mixture-of-experts model to classify the stimuli. Both choices of feature vector lead to significant detection accuracy. Furthermore, we show how to estimate the onset time of a stimulus from a continuous data stream.
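
To illustrate the dimensionality reduction the abstract describes, here is a minimal PCA sketch, assuming epoched trials shaped (trials, channels, samples). Only the 37-channel count comes from the abstract; the trial count, epoch length, and random data are invented for illustration.

```python
import numpy as np

# Hypothetical shapes: the paper reports 37 MEG channels; the trial
# count and epoch length here are invented stand-ins.
rng = np.random.default_rng(0)
epochs = rng.standard_normal((120, 37, 256))    # (trials, channels, samples)

# Stack every trial and time point into an (observations x channels) matrix.
X = epochs.transpose(0, 2, 1).reshape(-1, 37)
X = X - X.mean(axis=0)

# PCA via SVD; keep a ~3-dimensional spatial subspace, the size the
# abstract reports as reasonable for these data.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
W = Vt[:3]                                      # 3 spatial components
scores = epochs.transpose(0, 2, 1) @ W.T        # (120, 256, 3) projected trials
```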
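The abstract names two feature extractors (matched filtering and wavelet packet decomposition) feeding a mixture-of-experts classifier. The sketch below shows only the matched-filter route, with class templates taken as mean evoked responses; a plain logistic regression stands in for the paper's mixture-of-experts model, purely to show the pipeline end to end. All data and shapes are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def matched_filter_features(trials, templates):
    """Inner products between each trial and each class template.

    trials:    (n_trials, n_samples) projected component time courses
    templates: (n_classes, n_samples) mean evoked response per class
    """
    return trials @ templates.T

# Invented stand-in data: one projected component per trial, binary
# labels (e.g. voiced vs. voiceless); not the paper's recordings.
rng = np.random.default_rng(1)
trials = rng.standard_normal((120, 256))
labels = rng.integers(0, 2, size=120)

# Templates estimated as each class's mean evoked response.
templates = np.stack([trials[labels == k].mean(axis=0) for k in (0, 1)])
feats = matched_filter_features(trials, templates)

# Logistic regression as a stand-in classifier, not the paper's
# mixture-of-experts model.
clf = LogisticRegression().fit(feats, labels)
print(clf.score(feats, labels))
```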
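The abstract does not detail the onset-time estimator. One plausible sketch, assuming a sliding matched filter applied to a single projected component of the continuous stream, takes the correlation peak as the estimated onset; the trace and template below are invented stand-ins.

```python
import numpy as np

# Invented stand-ins: a continuous one-component trace and an
# evoked-response template; neither comes from the paper's data.
rng = np.random.default_rng(2)
stream = rng.standard_normal(5000)      # continuous projected MEG trace
template = rng.standard_normal(256)     # evoked-response template

# Slide the template along the stream; the position of the correlation
# peak serves as the estimated stimulus onset, in samples.
corr = np.correlate(stream, template, mode="valid")
onset = int(np.argmax(corr))
```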

Original language: English (US)
Pages (from-to): 153-165
Number of pages: 13
Journal: Neurocomputing
Volume: 31
Issue number: 1-4
State: Published - March 2000

Keywords

  • MEG data
  • Phoneme discrimination
  • Signal detection

ASJC Scopus subject areas

  • Computer Science Applications
  • Cognitive Neuroscience
  • Artificial Intelligence
