MEG Evidence That Modality-Independent Conceptual Representations Contain Semantic and Visual Features

Julien Dirani, Liina Pylkkänen

    Research output: Contribution to journal › Article › peer-review

    Abstract

    The semantic knowledge stored in our brains can be accessed from different stimulus modalities. For example, a picture of a cat and the word “cat” both engage similar conceptual representations. While existing research has found evidence for modality-independent representations, their content remains unknown. Modality-independent representations could be purely semantic, or they might also contain perceptual features. We developed a novel approach combining word/picture cross-condition decoding with neural network classifiers that learned latent modality-independent representations from MEG data (25 human participants, 15 females, 10 males). We then compared these representations to models representing semantic, sensory, and orthographic features. Results show that modality-independent representations correlate with both semantic and visual representations. There was no evidence that these results were due to picture-specific visual features or orthographic features automatically activated by the stimuli presented in the experiment. These findings support the notion that modality-independent concepts contain both perceptual and semantic representations.
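
    As a rough illustration of the approach summarized in the abstract, the Python sketch below (using scikit-learn and SciPy) trains a small neural-network classifier on simulated MEG patterns from one modality, tests it on the other (cross-condition decoding), and then compares the classifier's hidden-layer representations to stand-in semantic and visual feature models in an RSA-style analysis. The array sizes, the 208-sensor count, the simulated feature matrices, and the hidden() helper are all illustrative assumptions, not the authors' actual analysis pipeline.

    # Minimal, hypothetical sketch of cross-condition decoding plus representational
    # comparison. Shapes, feature models, and helper names are illustrative assumptions.
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)
    n_concepts, n_trials, n_sensors = 20, 30, 208   # assumed sizes (208 = assumed MEG sensor count)

    # Simulated MEG patterns: one matrix per modality, rows = trials, columns = sensors.
    X_pictures = rng.normal(size=(n_concepts * n_trials, n_sensors))
    X_words    = rng.normal(size=(n_concepts * n_trials, n_sensors))
    y          = np.repeat(np.arange(n_concepts), n_trials)       # concept labels

    # 1) Cross-condition decoding: train on one modality, test on the other.
    clf = MLPClassifier(hidden_layer_sizes=(50,), max_iter=500, random_state=0)
    clf.fit(X_pictures, y)
    cross_acc = clf.score(X_words, y)    # above-chance accuracy would indicate shared structure
    print(f"picture-to-word decoding accuracy: {cross_acc:.3f}")

    # 2) Extract hidden-layer activations as candidate modality-independent
    #    representations (manual forward pass through the trained MLP's first layer).
    def hidden(X, clf):
        return np.maximum(0, X @ clf.coefs_[0] + clf.intercepts_[0])   # ReLU layer

    # Average hidden activations per concept, pooling word and picture trials.
    latent = np.vstack([
        np.vstack([hidden(X_pictures[y == c], clf),
                   hidden(X_words[y == c], clf)]).mean(axis=0)
        for c in range(n_concepts)
    ])

    # 3) Compare latent geometry to feature models (RSA-style). The semantic and visual
    #    matrices below are placeholders for, e.g., word-embedding and image-based features.
    semantic_features = rng.normal(size=(n_concepts, 300))
    visual_features   = rng.normal(size=(n_concepts, 512))

    latent_rdm = pdist(latent, metric="correlation")
    for name, feats in [("semantic", semantic_features), ("visual", visual_features)]:
        rho, _ = spearmanr(latent_rdm, pdist(feats, metric="correlation"))
        print(f"latent vs {name} model: Spearman rho = {rho:.3f}")

    With simulated noise the decoding accuracy and correlations will sit at chance; the point of the sketch is only the structure of the analysis: train across modalities, read out a latent space, and relate its dissimilarity structure to candidate semantic and visual models.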

    Original language: English (US)
    Article number: e0326242024
    Journal: Journal of Neuroscience
    Volume: 44
    Issue number: 27
    DOIs
    State: Published - Jul 3, 2024

    Keywords

    • MEG
    • concepts
    • lexical
    • modality
    • semantic
    • visual

    ASJC Scopus subject areas

    • General Neuroscience
