SigFormer: Sparse Signal-Guided Transformer for Multi-Modal Human Action Segmentation
CoRR (2023)
Abstract
Multi-modal human action segmentation is a critical and challenging task with
a wide range of applications. Most existing approaches concentrate on the
fusion of dense signals (i.e., RGB, optical flow, and depth maps).
However, the potential contributions of sparse IoT sensor signals, which can be
crucial for achieving accurate recognition, have not been fully explored. To
bridge this gap, we introduce a Sparse signal-guided Transformer (SigFormer) to
combine both dense and sparse signals. We employ mask attention to fuse
localized features by constraining cross-attention within the regions where
sparse signals are valid. However, since sparse signals are discrete, they lack
sufficient information about the temporal action boundaries. Therefore, in
SigFormer, we propose to emphasize the boundary information at two stages to
alleviate this problem. In the first feature extraction stage, we introduce an
intermediate bottleneck module to jointly learn both category and boundary
features of each dense modality through the inner loss functions. After the
fusion of dense modalities and sparse signals, we then devise a two-branch
architecture that explicitly models the interrelationship between action
category and temporal boundary. Experimental results demonstrate that SigFormer
outperforms the state-of-the-art approaches on a multi-modal action
segmentation dataset from real industrial environments, reaching an outstanding
F1 score of 0.958. The code and pre-trained models are available at
https://github.com/LIUQI-creat/SigFormer.
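
The mask-attention fusion described above can be illustrated with a short sketch: cross-attention in which dense-modality features act as queries and sparse-signal features act as keys/values, with attention confined to the time steps where the sparse signal is valid. This is a minimal illustration only, not the authors' implementation; the function name `masked_cross_attention`, the tensor shapes, and the omission of learned Q/K/V projections are all assumptions made for brevity.

```python
import torch


def masked_cross_attention(dense_q, sparse_kv, valid_mask, num_heads=4):
    """Cross-attention from dense features to sparse-signal features,
    restricted to time steps where the sparse signal is valid.

    dense_q:    (B, T, D) dense-modality features, used as queries
    sparse_kv:  (B, T, D) sparse-signal features, used as keys/values
    valid_mask: (B, T) bool, True where the sparse signal is valid

    NOTE: a simplified sketch; learned Q/K/V projections and multi-layer
    structure are omitted and would be present in a real model.
    """
    B, T, D = dense_q.shape
    d_head = D // num_heads
    # Split into heads: (B, H, T, d_head).
    q = dense_q.view(B, T, num_heads, d_head).transpose(1, 2)
    k = sparse_kv.view(B, T, num_heads, d_head).transpose(1, 2)
    v = k  # keys and values share the sparse-signal features here
    scores = q @ k.transpose(-2, -1) / d_head ** 0.5   # (B, H, T, T)
    # Mask out key positions where no sparse signal is present, so each
    # dense query can only attend within valid sparse-signal regions.
    key_mask = valid_mask[:, None, None, :]            # (B, 1, 1, T)
    scores = scores.masked_fill(~key_mask, float('-inf'))
    attn = torch.softmax(scores, dim=-1)
    # Queries facing all-invalid keys yield NaN rows; zero them out.
    attn = torch.nan_to_num(attn)
    return (attn @ v).transpose(1, 2).reshape(B, T, D)


# Toy usage: 2 sequences of 8 steps, 32-dim features, with the sparse
# signal valid only on the first half of each sequence.
x_dense = torch.randn(2, 8, 32)
x_sparse = torch.randn(2, 8, 32)
mask = torch.zeros(2, 8, dtype=torch.bool)
mask[:, :4] = True
out = masked_cross_attention(x_dense, x_sparse, mask)
print(out.shape)  # torch.Size([2, 8, 32])
```

Setting masked scores to negative infinity before the softmax guarantees that dense features aggregate sparse-signal information only inside valid regions, which matches the constraint the abstract describes.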