TY - GEN
T1 - Perceptually-Guided Acoustic "Foveation"
AU - Peng, Xi
AU - Chen, Kenneth
AU - Roman, Iran
AU - Bello, Juan Pablo
AU - Sun, Qi
AU - Chakravarthula, Praneeth
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
AB - Realistic spatial audio rendering improves immersion in virtual environments. However, the computational complexity of acoustic propagation increases linearly with the number of sources. Consequently, accurate real-time acoustic rendering becomes challenging in highly dynamic scenarios such as virtual and augmented reality (VR/AR). Exploiting the fact that human spatial sensitivity to acoustic sources is not uniform across azimuth eccentricities in the horizontal plane, we introduce a perceptually-aware acoustic "foveation" guidance model into the audio rendering pipeline, which merges audio sources that are not spatially resolvable by human listeners. To this end, we first conduct a series of psychophysical studies to measure the minimum resolvable audible angular distance under various spatial and background conditions. We leverage this data to derive an azimuth-characterized real-time acoustic foveation algorithm. Numerical analysis and subjective user studies in VR environments demonstrate our method's effectiveness in significantly reducing acoustic rendering workload without compromising users' spatial perception of audio sources. We believe that the presented research will motivate future investigation into the new frontier of modeling and leveraging human multimodal perceptual limitations, beyond the extensively studied visual acuity, for designing efficient VR/AR systems.
KW - Mixed/Augmented reality
KW - Perception
KW - Virtual reality
UR - http://www.scopus.com/inward/record.url?scp=105002732078&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=105002732078&partnerID=8YFLogxK
U2 - 10.1109/VR59515.2025.00069
DO - 10.1109/VR59515.2025.00069
M3 - Conference contribution
AN - SCOPUS:105002732078
T3 - Proceedings - 2025 IEEE Conference on Virtual Reality and 3D User Interfaces, VR 2025
SP - 450
EP - 460
BT - Proceedings - 2025 IEEE Conference on Virtual Reality and 3D User Interfaces, VR 2025
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 32nd IEEE Conference on Virtual Reality and 3D User Interfaces, VR 2025
Y2 - 8 March 2025 through 12 March 2025
ER -