EarGram: An application for interactive exploration of concatenative sound synthesis in Pure Data

Gilberto Bernardes, Carlos Guedes, Bruce Pennycook

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

This paper describes the creative and technical processes behind earGram, an application created in Pure Data for real-time concatenative sound synthesis. The system encompasses four generative music strategies that automatically rearrange and explore a database of descriptor-analyzed sound snippets (the corpus) by rules other than their original temporal order, producing musically coherent outputs. Of note are the system's machine-learning capabilities as well as its visualization strategies, which constitute a valuable aid for decision-making during performance by revealing musical patterns and temporal organizations of the corpus.
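The selection principle the abstract describes — resequencing corpus units by descriptor similarity rather than original order — can be illustrated with a minimal sketch. This is a hypothetical illustration, not earGram's actual implementation (which is written in Pure Data); the descriptor names (`centroid`, `loudness`) and values are invented for the example.

```python
import math

# A corpus of sound snippets ("units"), each annotated with audio
# descriptors. In a real system these would come from audio analysis.
corpus = [
    {"id": 0, "centroid": 0.20, "loudness": 0.70},
    {"id": 1, "centroid": 0.55, "loudness": 0.40},
    {"id": 2, "centroid": 0.80, "loudness": 0.90},
]

def distance(unit, target):
    """Euclidean distance between a unit and a target in descriptor space."""
    return math.sqrt(sum((unit[k] - v) ** 2 for k, v in target.items()))

def select_unit(corpus, target):
    """Pick the corpus unit whose descriptors best match the target."""
    return min(corpus, key=lambda u: distance(u, target))

# A target descriptor trajectory drives the resequencing: units are chosen
# by similarity to each target frame, not by their original temporal order.
trajectory = [{"centroid": 0.5, "loudness": 0.5},
              {"centroid": 0.9, "loudness": 0.8}]
sequence = [select_unit(corpus, t)["id"] for t in trajectory]
# sequence → [1, 2]
```

In a full concatenative synthesizer the selected units' audio would then be concatenated (usually with crossfades); generative strategies like earGram's vary how the target trajectory is produced.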

Original language: English (US)
Title of host publication: From Sounds to Music and Emotions - 9th International Symposium, CMMR 2012, Revised Selected Papers
Pages: 110-129
Number of pages: 20
DOIs
State: Published - 2013
Event: 9th International Symposium on Computer Music Modeling and Retrieval, CMMR 2012 - London, United Kingdom
Duration: Jun 19 2012 - Jun 22 2012

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 7900 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Other

Other: 9th International Symposium on Computer Music Modeling and Retrieval, CMMR 2012
Country/Territory: United Kingdom
City: London
Period: 6/19/12 - 6/22/12

Keywords

  • Concatenative sound synthesis
  • Generative music
  • Recombination

ASJC Scopus subject areas

  • Theoretical Computer Science
  • General Computer Science
