Semantic Reconstruction: Reconstruction of Semantically Segmented 3D Meshes via Volumetric Semantic Fusion

Computer Graphics Forum (2018)

Abstract
Semantic segmentation partitions a given image or 3D model of a scene into semantically meaningful parts and assigns predetermined labels to those parts. With well-established datasets, deep networks have been successfully applied to semantic segmentation of RGB and RGB-D images. In contrast, due to the lack of large-scale annotated 3D datasets, semantic segmentation of 3D scenes has received far less attention from deep learning. In this paper, we present a novel framework for generating semantically segmented triangular meshes of reconstructed 3D indoor scenes using volumetric semantic fusion in the reconstruction process. Our method integrates the results of CNN-based 2D semantic segmentation applied to the RGB-D stream used for dense surface reconstruction. To reduce artifacts caused by the noise and uncertainty of single-view semantic segmentation, we introduce adaptive integration for volumetric semantic fusion and CRF-based semantic label regularization. With these methods, our framework can easily generate a high-quality triangular mesh of the reconstructed 3D scene with dense (i.e., per-vertex) semantic labels. Extensive experiments demonstrate that our 3D scene segmentation results achieve state-of-the-art performance compared to previous voxel-based and point cloud-based methods.
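To illustrate the core idea of volumetric semantic fusion described above, the sketch below accumulates per-frame class probabilities into voxels with a weighted running average, analogous to weight-based TSDF fusion. This is a minimal illustration, not the paper's implementation: the function and variable names are hypothetical, and the paper's adaptive integration and CRF regularization are omitted.

```python
import numpy as np

def fuse_semantic_frame(vox_probs, vox_weights, frame_probs, frame_weight=1.0):
    """Fuse one frame's projected class probabilities into the voxel grid.

    vox_probs:    (V, C) accumulated per-voxel class probabilities
    vox_weights:  (V,)   accumulated fusion weights per voxel
    frame_probs:  (V, C) CNN softmax probabilities projected into the voxels
    Returns the updated (vox_probs, vox_weights).
    """
    new_weights = vox_weights + frame_weight
    # Weighted running average, the same update rule used for TSDF values.
    vox_probs = (vox_probs * vox_weights[:, None]
                 + frame_probs * frame_weight) / new_weights[:, None]
    return vox_probs, new_weights

# Toy example: 2 voxels, 3 semantic classes, starting from a uniform prior.
probs = np.full((2, 3), 1.0 / 3.0)
weights = np.zeros(2)
obs = np.array([[0.8, 0.1, 0.1],
                [0.1, 0.7, 0.2]])  # one frame's per-voxel CNN probabilities
probs, weights = fuse_semantic_frame(probs, weights, obs)
labels = probs.argmax(axis=1)      # per-voxel semantic label after fusion
```

After meshing (e.g., via marching cubes), each vertex can inherit the argmax label of its enclosing voxel, yielding the dense per-vertex labels the abstract mentions.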