P2LHAP:Wearable sensor-based human activity recognition, segmentation and forecast through Patch-to-Label Seq2Seq Transformer
arXiv (2024)
Abstract
Traditional deep learning methods struggle to simultaneously segment,
recognize, and forecast human activities from sensor data. This limits their
usefulness in many fields such as healthcare and assisted living, where
real-time understanding of ongoing and upcoming activities is crucial. This
paper introduces P2LHAP, a novel Patch-to-Label Seq2Seq framework that tackles
all three tasks in an efficient single-task model. P2LHAP divides sensor data
streams into a sequence of "patches", which serve as input tokens, and outputs
a sequence of patch-level activity labels, including predicted future
activities. A unique smoothing technique based on surrounding patch labels is
proposed to identify activity boundaries accurately. Additionally, P2LHAP
learns patch-level representations with channel-independent Transformer
encoders and decoders; all sensor channels share embedding and Transformer
weights across all sequences. Evaluated on three public datasets, P2LHAP
significantly outperforms the state of the art on all three tasks,
demonstrating its effectiveness and potential for real-world applications.
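To make the patch-to-label idea concrete, the following is a minimal sketch of two steps the abstract describes: splitting a multi-channel sensor stream into per-channel patches (one token per patch, with patching applied identically to every channel, mirroring the shared-weight, channel-independent design), and majority-vote smoothing over neighboring patch labels as a stand-in for the paper's boundary-refinement step. Function names, shapes, and the specific smoothing rule are illustrative assumptions, not the authors' API.

```python
import numpy as np

def patchify(signal, patch_len, stride):
    """Split each sensor channel independently into (possibly overlapping)
    patches. `signal` is a (n_channels, n_timesteps) array; each patch becomes
    one input token. Illustrative sketch, not the paper's exact tokenizer."""
    n_channels, n_steps = signal.shape
    n_patches = (n_steps - patch_len) // stride + 1
    # Stack patch windows along a new axis: (n_channels, n_patches, patch_len)
    return np.stack(
        [signal[:, i * stride : i * stride + patch_len] for i in range(n_patches)],
        axis=1,
    )

def smooth_labels(labels, window=3):
    """Majority vote over a sliding window of surrounding patch labels,
    a hedged stand-in for the paper's label-smoothing technique."""
    half = window // 2
    out = []
    for i in range(len(labels)):
        neighborhood = labels[max(0, i - half) : i + half + 1]
        out.append(max(set(neighborhood), key=neighborhood.count))
    return out

# Example: a 3-channel stream of 128 samples, patch length 16, stride 8.
x = np.random.randn(3, 128)
tokens = patchify(x, patch_len=16, stride=8)
print(tokens.shape)  # (3, 15, 16)

# A spurious single-patch label (1) inside a run of 0s is smoothed away.
print(smooth_labels([0, 0, 1, 0, 0, 2, 2, 2]))  # [0, 0, 0, 0, 0, 2, 2, 2]
```

The per-channel patching is what lets one embedding and one set of Transformer weights be shared across all sensor channels, since every channel is tokenized the same way.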