The Binaural Display of Reverberant Space using MIMO State-Space Systems
This talk concerns the design of a framework for binaural auditory displays that can render sound sources that have spatial extent, are moving, and exist in reverberant environments. Conventional binaural displays auralize a source signal at a single stationary far-field location in free space. However, such displays are well known to lack presence and to yield frequent localization errors. The present work proposes to address this problem by auralizing the source signal at many locations simultaneously, thus providing a framework that accommodates a broader range of sound sources. In particular, head-related transfer functions (HRTFs) are implemented with a multiple-input multiple-output (MIMO) state-space system, which efficiently realizes a large number of similar transfer functions in parallel. The present work further proposes the use of MIMO systems to model reverberant environments through physical models and perceptual coding. Preliminary work is presented in which a 'cloud' of nearby point sources is first auralized using a two-stage filter factorization. The more general case of many locations surrounding the listener is then addressed by formulating the HRTFs in state-space form as a single MIMO system. A novel extension of this method is then presented, along with an exploration of acoustic reflections in binaural displays. The proposed methods are validated using numerical simulations and a simple psychoacoustic experiment.
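To illustrate the core idea, the following is a minimal sketch (not the speaker's implementation) of a discrete-time MIMO state-space filter: many input channels, one per source location, are processed through a shared state vector to produce two binaural output channels. All dimensions and matrices here are arbitrary placeholders; in practice A, B, C, and D would be fit to a set of measured HRTFs.

```python
import numpy as np

def mimo_ss_filter(A, B, C, D, u):
    """Run a discrete-time MIMO state-space filter over an input signal.

    State update:  x[n+1] = A x[n] + B u[n]
    Output:        y[n]   = C x[n] + D u[n]

    u has shape (num_samples, num_inputs); returns (num_samples, num_outputs).
    """
    x = np.zeros(A.shape[0])
    y = np.empty((u.shape[0], C.shape[0]))
    for n, u_n in enumerate(u):
        y[n] = C @ x + D @ u_n
        x = A @ x + B @ u_n
    return y

# Toy example: 8 source-location inputs, 2 (left/right ear) outputs,
# and a small shared state -- all sizes chosen only for illustration.
rng = np.random.default_rng(0)
num_in, num_out, num_states = 8, 2, 4
A = 0.5 * np.eye(num_states)              # stable shared dynamics
B = rng.standard_normal((num_states, num_in))
C = rng.standard_normal((num_out, num_states))
D = rng.standard_normal((num_out, num_in))

u = rng.standard_normal((256, num_in))    # one signal per source location
y = mimo_ss_filter(A, B, C, D, u)         # binaural output
print(y.shape)  # (256, 2)
```

Because the state is shared across all input-to-output paths, the per-sample cost grows with the state dimension rather than with the number of distinct HRTF filters, which is what makes the parallel auralization of many locations tractable.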