Long-term Frame-Event Visual Tracking: Benchmark Dataset and Baseline
CoRR (2024)
Abstract
Current event-based and frame-event-based trackers are evaluated on short-term
tracking datasets; however, real-world tracking scenarios involve long-term
tracking, and the performance of existing tracking algorithms in these
scenarios remains unclear. In this paper, we first propose a new long-term,
large-scale frame-event single-object tracking dataset, termed FELT. It
contains 742 videos with 1,594,474 paired RGB frames and event streams, making
it the largest frame-event tracking dataset to date. We re-train and evaluate
15 baseline trackers on our dataset so that future works can compare against
them. More importantly, we find that the RGB frames and event streams are
naturally incomplete due to the influence of challenging factors and the
spatial sparsity of event flow. In response, we propose a novel associative
memory Transformer network as a unified backbone, introducing modern Hopfield
layers into multi-head self-attention blocks to fuse RGB and event data.
Extensive experiments on both FELT and the RGB-T tracking dataset LasHeR fully
validate the effectiveness of our model. The dataset and source code can be
found at .
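The "modern Hopfield layer" mentioned above refers to the continuous Hopfield network whose single update step coincides with softmax attention over a set of stored patterns, which is what lets it slot into a multi-head self-attention block. The sketch below is a minimal NumPy illustration of that retrieval step, not the authors' implementation; the function names and the toy patterns are ours, and the inverse-temperature parameter `beta` controls how sharply a noisy query snaps to its nearest stored pattern.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def hopfield_retrieve(queries, memories, beta=1.0):
    """One update step of a modern (continuous) Hopfield layer:
    each query is replaced by an attention-weighted combination of
    the stored memory patterns, i.e. softmax(beta * Q M^T) M."""
    attn = softmax(beta * queries @ memories.T, axis=-1)
    return attn @ memories

# Toy example: restore a corrupted query from two stored patterns.
memories = np.array([[1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0]])
query = np.array([[0.9, 0.1, 0.0]])  # noisy version of pattern 0
restored = hopfield_retrieve(query, memories, beta=4.0)
```

With a larger `beta` the update converges more aggressively toward the closest stored pattern, which is the mechanism the abstract invokes for completing incomplete RGB or event tokens from the other modality's features.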