OpenDIBR: Open Real-Time Depth-Image-Based Renderer of light field videos for VR

Multimedia Tools and Applications (2024)

Abstract
In this work, we present a novel light field rendering framework that allows a viewer to walk around a virtual scene reconstructed from a multi-view image/video dataset with visual and depth information. With immersive media applications in mind, the framework is designed to support dynamic scenes through input videos, give the viewer full freedom of movement in a large area, and achieve real-time rendering, even in Virtual Reality (VR). This paper shows that Depth-Image-Based Rendering (DIBR) is one of the few state-of-the-art techniques that meets all of these requirements. We therefore implemented OpenDIBR, an openly available DIBR implementation, as a proof of concept for the framework. It uses Nvidia’s Video Codec SDK to rapidly decode the color and depth videos on the GPU. The decoded depth maps and color frames are then warped to the output view in OpenGL. The contributions from the inputs are blended through a per-pixel weighted average that depends on the input and output camera positions. Experiments comparing the visual quality conclude that OpenDIBR is, objectively and subjectively, similar to TMIV and better than NeRF. Performance-wise, OpenDIBR runs at 90 Hz for up to 4 full HD input videos on desktop, or 2–4 in VR, and this can be increased further by lowering the video bitrates, reducing the depth map resolution, or dynamically lowering the number of rendered input videos.
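The blending step described above can be illustrated with a minimal sketch. This is not the paper's actual implementation (which blends per pixel on the GPU in OpenGL and also accounts for depth validity); it only shows the idea of averaging warped input views with weights derived from the distance between each input camera and the output camera. The function name and the inverse-distance weighting are illustrative assumptions.

```python
import numpy as np

def blend_views(warped_views, input_cam_positions, output_cam_position, eps=1e-6):
    """Blend already-warped input views into one output image.

    Illustrative sketch: weights are the inverse distance from each input
    camera to the output camera (an assumption; OpenDIBR uses per-pixel
    weights computed on the GPU).
    """
    positions = np.asarray(input_cam_positions, dtype=np.float64)
    dists = np.linalg.norm(positions - np.asarray(output_cam_position), axis=1)
    weights = 1.0 / (dists + eps)          # closer input cameras contribute more
    weights /= weights.sum()               # normalize so the weights sum to 1
    stacked = np.stack(warped_views).astype(np.float64)  # shape (N, H, W, C)
    return np.tensordot(weights, stacked, axes=1)        # weighted average image
```

With two input cameras equidistant from the output viewpoint, each warped view contributes equally to the blended result.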
Keywords
Light field rendering,View synthesis,Depth-image-based rendering,Real time,Virtual Reality