Robust visual tracking via collaborative and reinforced convolutional feature learning

Convolutional neural networks are potent models that yield hierarchies of features and have drawn increasing interest in the visual tracking field. In this paper, we design an end-to-end trainable tracking framework based on a Siamese network, which learns low-level fine-grained and high-level semantic representations simultaneously so that the two can benefit each other. Because the feature hierarchies have distinct and complementary characteristics, different tracking mechanisms are adopted for different feature layers: the low-level features are exploited and updated with a correlation filter layer for adaptive tracking, while the high-level features are compared directly through cross-correlation for robust tracking. The two feature levels are trained jointly and end-to-end with a multi-task loss function. The proposed tracker thus takes full advantage of the adaptability of the low-level features and the generalization ability of the high-level features. Extensive experimental results on the widely used OTB and TC128 benchmarks demonstrate the superiority of our tracker. Meanwhile, the proposed tracker runs at real-time speed.
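
The abstract describes two complementary matching mechanisms on top of a Siamese backbone: a correlation filter layer on low-level features and plain cross-correlation on high-level features, supervised jointly by a multi-task loss. The sketch below (PyTorch) is not the authors' code; it only illustrates, under simplifying assumptions (square, single-scale feature maps of equal size for template and search, a fixed Gaussian regression target, and an illustrative loss weight `alpha`), how such a correlation filter layer, a cross-correlation layer, and a joint loss could be written.

```python
# Minimal sketch of the two-level matching idea; shapes, labels and the loss
# weighting are assumptions for illustration, not the published implementation.
import torch
import torch.nn.functional as F


def gaussian_label(size, sigma=2.0):
    """Centered 2-D Gaussian regression target for the correlation filter branch."""
    coords = torch.arange(size, dtype=torch.float32) - size // 2
    yy, xx = torch.meshgrid(coords, coords, indexing="ij")
    return torch.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))


def correlation_filter_response(feat_z, feat_x, lam=1e-4):
    """Ridge-regression correlation filter solved in the Fourier domain.

    feat_z: template features (B, C, H, W); feat_x: search features (B, C, H, W),
    assumed to have the same (square) spatial size. Returns a (B, H, W) response map.
    """
    y = gaussian_label(feat_z.shape[-1]).to(feat_z)             # regression target
    Y = torch.fft.fft2(y)                                        # (H, W)
    Z = torch.fft.fft2(feat_z)                                   # (B, C, H, W)
    X = torch.fft.fft2(feat_x)
    # Per-channel filters with a shared denominator (linear DCF formulation).
    denom = (Z * Z.conj()).sum(dim=1, keepdim=True) + lam        # (B, 1, H, W)
    W = (Y * Z.conj()) / denom                                   # (B, C, H, W)
    response = torch.fft.ifft2((W.conj() * X).sum(dim=1)).real   # (B, H, W)
    return response


def cross_correlation_response(feat_z, feat_x):
    """SiamFC-style cross-correlation: template features act as per-sample kernels."""
    b, c, h, w = feat_z.shape
    x = feat_x.reshape(1, b * c, feat_x.shape[-2], feat_x.shape[-1])
    out = F.conv2d(x, feat_z, groups=b)                          # (1, B, H', W')
    return out.reshape(b, out.shape[-2], out.shape[-1])


def multi_task_loss(cf_resp, cf_label, xcorr_resp, xcorr_label, alpha=0.5):
    """Joint objective: L2 regression loss on the correlation-filter response plus
    a logistic loss on the cross-correlation map; `alpha` is an assumed weight."""
    loss_cf = F.mse_loss(cf_resp, cf_label.expand_as(cf_resp))
    loss_xc = F.binary_cross_entropy_with_logits(xcorr_resp, xcorr_label)
    return alpha * loss_cf + (1 - alpha) * loss_xc
```

In this reading, both branches stay differentiable, so gradients from the joint loss flow back into the shared backbone; at test time the correlation filter branch can be re-solved on new frames for adaptation while the cross-correlation branch keeps the fixed template for robustness.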

Authors:
Li Dongdong, Kuai Yangliu, Wen Gongjian, Liu Li

Publication type:
A4 Article in conference proceedings

Place of publication:
32nd IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2019

Keywords:
6G Publication

Published:
2019
Full citation:
D. Li, Y. Kuai, G. Wen and L. Liu, “Robust Visual Tracking via Collaborative and Reinforced Convolutional Feature Learning,” 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Long Beach, CA, USA, 2019, pp. 592-600, doi: 10.1109/CVPRW.2019.00085

DOI:
https://doi.org/10.1109/CVPRW.2019.00085

Read the publication here:
http://urn.fi/urn:nbn:fi-fe2020110989739