SpikeNVS: Enhancing Novel View Synthesis from Blurry Images via Spike Camera
arXiv (2024)
Abstract
One of the most critical factors in achieving sharp Novel View Synthesis
(NVS) using neural field methods like Neural Radiance Fields (NeRF) and 3D
Gaussian Splatting (3DGS) is the quality of the training images. However,
conventional RGB cameras are susceptible to motion blur. In contrast,
neuromorphic cameras like event and spike cameras inherently capture more
comprehensive temporal information, which can provide a sharp representation of
the scene as additional training data. Recent methods have explored the
integration of event cameras to improve the quality of NVS. However, these
event-RGB approaches have limitations such as high training costs and an
inability to work effectively in background regions. Instead, our study
introduces a new
method that uses the spike camera to overcome these limitations. By considering
texture reconstruction from spike streams as ground truth, we design the
Texture from Spike (TfS) loss. Since the spike camera relies on temporal
integration instead of the temporal differentiation used by event cameras, our
proposed TfS loss maintains manageable training costs and handles foreground
objects and backgrounds simultaneously. We also provide a real-world dataset
captured with our spike-RGB camera system to facilitate future research
endeavors. We conduct extensive experiments using synthetic and real-world
datasets to demonstrate that our design can enhance novel view synthesis across
NeRF and 3DGS. The code and dataset will be made available for public access.
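The abstract describes treating a texture reconstructed from the spike stream as ground truth for a Texture-from-Spike (TfS) loss. As a minimal sketch of that idea (the reconstruction here is a simple spike-count average over a window, a common spike-camera baseline; the paper's actual reconstruction and loss form are not given in the abstract, so the function names, the windowed-mean reconstruction, and the L1 choice are all assumptions):

```python
import numpy as np

def texture_from_spikes(spike_stream, window=32):
    """Reconstruct a grayscale texture from a binary spike stream.

    spike_stream: (T, H, W) binary array. A spike camera integrates
    incoming light per pixel and fires a spike each time the integrator
    crosses a threshold, so the spike count over a time window is
    roughly proportional to scene intensity. This windowed average is
    a baseline reconstruction, not necessarily the paper's method.
    """
    window = min(window, spike_stream.shape[0])
    return spike_stream[:window].mean(axis=0)  # intensities in [0, 1]

def tfs_loss(rendered, spike_stream, window=32):
    """Hypothetical TfS-style loss: L1 distance between the NVS
    rendering and the spike-reconstructed texture used as ground truth
    (the L1 form is an assumption for illustration)."""
    target = texture_from_spikes(spike_stream, window)
    return float(np.abs(rendered - target).mean())
```

Because the target is obtained by direct temporal integration of the spike stream rather than by integrating event differentials, supervision of this kind can be computed per window without the accumulation machinery event-based losses require, which is consistent with the training-cost argument in the abstract.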