Augmented Soundscapes

Description:

The capabilities of portable devices such as current-generation mobile phones and media players are still severely limited in terms of computing power and storage space, so that only a subset of today’s analysis and synthesis algorithms can run in realtime. The goal of this workshop is to develop techniques and applications around the concept of augmented soundscapes, guided by the following questions: What if we had the computing power of current desktop machines available now in a mobile package? Without the limitations of today’s mobile technology, how would we design augmented reality applications?

In this workshop we will create augmented soundscapes, starting from recordings of real soundscapes made by the participants during the first day of the workshop. Using realtime machine listening techniques such as onset detection, event extraction and segmentation, and tempo tracking, we will extract prominent features from these recordings. The extracted features will then be used to process and recompose the audio material with signal processing techniques such as granular resynthesis, concatenative synthesis, audio mosaicing, and spectral modification. The processes developed by the participants can include, but are not limited to, electroacoustic compositions, musically augmented soundscapes, auditory display, and data sonification applications. Each small group will develop one particular augmented soundscape, with the focus on presenting the results in a final concert or interactive performance.
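The analysis-then-recomposition pipeline described above can be sketched in a few lines of code. The following is a minimal illustrative sketch, not part of the workshop materials: it detects onsets with a simple short-time-energy rise (a stand-in for the more sophisticated machine listening techniques mentioned above), then cuts windowed grains at the detected onsets and concatenates them in a new order, a toy form of granular recomposition. All function names, frame sizes, and thresholds here are illustrative assumptions.

```python
import numpy as np

def frame_energies(signal, frame_size=512, hop=256):
    """Short-time energy of each analysis frame."""
    n = 1 + (len(signal) - frame_size) // hop
    return np.array([np.sum(signal[i * hop:i * hop + frame_size] ** 2)
                     for i in range(n)])

def detect_onsets(signal, frame_size=512, hop=256, ratio=4.0, min_energy=50.0):
    """Toy onset detector: mark frames whose energy both exceeds an
    absolute floor and jumps above `ratio` times the previous frame."""
    e = frame_energies(signal, frame_size, hop)
    onsets = []
    for i in range(1, len(e)):
        if e[i] > min_energy and e[i] > ratio * (e[i - 1] + 1e-12):
            onsets.append(i * hop)  # sample position of the onset frame
    return onsets

def granular_recompose(signal, onsets, grain=2048, order=None):
    """Cut one grain at each onset, apply a Hann envelope, and
    concatenate the grains in a new order (reversed by default)."""
    grains = [signal[o:o + grain] for o in onsets]
    if order is None:
        order = list(reversed(range(len(grains))))
    env = np.hanning(grain)
    out = [grains[i] * env[:len(grains[i])] for i in order]
    return np.concatenate(out) if out else np.zeros(0)

# Toy "soundscape": one second of silence with two short sine bursts.
sr = 8000
y = np.zeros(sr)
y[1000:1400] = np.sin(2 * np.pi * 440 * np.arange(400) / sr)
y[5000:5400] = np.sin(2 * np.pi * 880 * np.arange(400) / sr)

onsets = detect_onsets(y)          # one onset per burst
augmented = granular_recompose(y, onsets)
```

In a realtime setting the same structure applies per audio block: the feature extractor runs on the incoming stream and its events trigger the resynthesis stage, rather than processing a finished recording offline.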

Details:

  • Tutor: Stefan Kersten.
  • Schedule: July 18, 18:00 – 19:30; July 19, 20, 15:00 – 19:30.
  • Room: TBA.

Participants:

  • Adrien Sirdey
  • Davide Andrea Mauro
  • Laurens J. van der Wee
  • Nina Bjelajacova
  • Noris N. Norowi
  • Romain Pangaud
  • Simone Spagnol
  • Thiago Duarte
  • Umut Simsekli