Low-delay interactive rendering of virtual acoustic environments with extensions for distributed low-delay transmission of audio and bio-physical sensor data

Giso Grimm, Angelika Kothe, Volker Hohmann

Journal of the Acoustical Society of America (2023)

Abstract
In this study, we present a system that enables low-delay rendering of interactive virtual acoustics. The tool operates in the time domain, based on a physical sound propagation model with basic room acoustic modelling and a block-wise update and interpolation of the environment geometry. During the pandemic, the tool was extended by low-delay network transmission of audio and sensor data, e.g., from motion sensors or bio-physical sensors such as EEG. With this extension, distributed rendering of turn-taking conversations as well as ensemble music performances with individual head-tracked binaural rendering and interactive movement of directional sources is possible. Interactive communication requires a low time delay in sound transmission, which is particularly critical for musical communication, where the upper limit of tolerable delay is between 30 and 50 ms, depending on the genre. Our system can achieve latencies between 7 ms (dedicated local network) and 100 ms (intercontinental connection), with typical values of 25–40 ms. This is far below the delay achieved by typical video-conferencing tools and is sufficient for fluent speech communication and music applications. In addition to a technical description of the system, we show example measurement data of head motion behaviour in a distributed triadic conversation.
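The abstract's core idea, a time-domain physical propagation model with block-wise update and interpolation of the geometry, can be illustrated with a minimal sketch. The Python fragment below is not the authors' implementation; the sample rate, block size, and function names are assumptions. It renders one audio block of a moving point source through a fractional-delay line, interpolating the source distance (and hence propagation delay and 1/r attenuation) linearly across the block.

```python
# Minimal sketch (illustrative only, not the published system): per-block
# rendering of a moving point source with a fractional-delay line, where the
# propagation delay follows from the source-listener distance (delay = d / c).
import numpy as np

fs = 44100          # sample rate in Hz (assumed)
block_size = 64     # audio block length in samples (assumed)
c = 340.0           # speed of sound in m/s

def render_block(sig_block, delay_line, d_prev, d_next, write_pos):
    """Render one block: write the input into a circular delay line and read it
    back with a propagation delay interpolated from d_prev to d_next (metres)."""
    n = len(sig_block)
    L = len(delay_line)
    out = np.zeros(n)
    # distance (and thus delay) is interpolated sample-by-sample across the block
    dist = np.linspace(d_prev, d_next, n, endpoint=False)
    for i, x in enumerate(sig_block):
        delay_line[(write_pos + i) % L] = x
        delay_samples = dist[i] / c * fs
        # first-order (linear) fractional-delay read
        idx = (write_pos + i) - delay_samples
        i0, frac = int(np.floor(idx)), idx - np.floor(idx)
        out[i] = (1 - frac) * delay_line[i0 % L] + frac * delay_line[(i0 + 1) % L]
    # 1/r amplitude decay of the interpolated distance
    return out / np.maximum(dist, 0.1), (write_pos + n) % L

# Example: render one 64-sample block for a source moving from 2.0 m to 2.1 m
delay_line = np.zeros(fs)                # 1 s circular buffer
block = np.random.randn(block_size)
out, write_pos = render_block(block, delay_line, 2.0, 2.1, 0)
```

Small block sizes keep the per-buffer contribution to latency low (64 samples at 44.1 kHz is roughly 1.5 ms), which is a prerequisite for end-to-end delays in the range reported in the abstract.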
Keywords
virtual acoustic environments, interactive rendering, audio, low-delay, bio-physical sensor data