Abstract
Although sound visualization and image sonification have been used extensively for scientific and artistic purposes, their combined effect has rarely been considered. In this paper, we propose an iterative visualization/sonification approach as a sound generation mechanism. In particular, we visualize sounds using a textural self-similarity representation, which is then analyzed to generate control data for a granular synthesizer. Following an eco-systemic approach, the output of the synthesizer is fed back into the system, creating a loop designed for the generation of novel, time-evolving sounds. The entire process runs in real time, implemented in Max/MSP using the FTM library. A qualitative analysis of the approach is presented, complemented by a discussion of visualization and sonification issues in the context of sound design.
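The core loop described in the abstract (self-similarity visualization of audio features, analyzed into granular-synthesis control data) can be sketched outside Max/MSP as well. The following is a minimal, hypothetical Python/NumPy illustration: the feature sequence, the novelty-to-amplitude mapping, and the function names are assumptions for illustration only, not the paper's actual FTM implementation.

```python
import numpy as np

def self_similarity(features):
    """Cosine self-similarity matrix of a (frames x dims) feature sequence.

    This is the standard textural self-similarity representation:
    entry (i, j) is the cosine similarity of frames i and j.
    """
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    unit = features / np.maximum(norms, 1e-12)
    return unit @ unit.T

def grain_controls(ssm, n_grains=8):
    """Hypothetical mapping from the matrix to granular-synthesis controls.

    Each selected frame's mean novelty (1 - mean similarity) becomes a
    grain amplitude, and its index becomes a grain onset in [0, 1).
    """
    novelty = 1.0 - ssm.mean(axis=1)
    idx = np.linspace(0, len(novelty) - 1, n_grains).astype(int)
    onsets = idx / len(novelty)
    amps = novelty[idx] / max(novelty.max(), 1e-12)
    return list(zip(onsets, amps))

# Toy feature sequence standing in for analyzed audio frames:
# 16 frames of 4-dimensional spectral-like features.
rng = np.random.default_rng(0)
feats = rng.random((16, 4))
ssm = self_similarity(feats)
controls = grain_controls(ssm)
```

In the paper's eco-systemic design, the synthesizer output would be re-analyzed into a new feature sequence and fed back through these two steps, closing the loop.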
| Original language | English (US) |
| --- | --- |
| Title of host publication | International Computer Music Conference, ICMC 2007 |
| Publisher | International Computer Music Association |
| Pages | 183-187 |
| Number of pages | 5 |
| State | Published - 2007 |
| Event | International Computer Music Conference, ICMC 2007, Copenhagen, Denmark. Duration: Aug 27 2007 → Aug 31 2007 |
Other

| Other | International Computer Music Conference, ICMC 2007 |
| --- | --- |
| Country/Territory | Denmark |
| City | Copenhagen |
| Period | 8/27/07 → 8/31/07 |
ASJC Scopus subject areas
- Media Technology
- Computer Science Applications
- Music