We propose an Adaptive Weighted Spatiotemporal Distillation (AWSD) technique for video representation that encodes the appearance and dynamics of a video into a single RGB image map. The map is obtained by adaptively dividing the video into small segments and comparing consecutive segments. This allows pre-trained still-image models to be used for video classification while capturing the spatiotemporal variations in the video. The adaptive segment selection enables effective encoding of the essential discriminative information of untrimmed videos. Based on a Gaussian Scale Mixture model, we compute the weights by extracting the mutual information between two consecutive segments. Unlike pooling-based methods, AWSD gives more importance to the frames that characterize actions or events, thanks to its adaptive segment length selection. We conducted extensive experiments to evaluate the effectiveness of the proposed method and compared our results against those of recent state-of-the-art methods on four benchmark datasets: UCF101, HMDB51, ActivityNet v1.3, and Maryland. The results on these benchmarks show that our method significantly outperforms earlier works and sets a new state of the art in video classification. Code is available at the project webpage:
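The core idea of the abstract, collapsing a video into a single weighted RGB map whose segment weights reflect the change between consecutive segments, can be illustrated with a toy sketch. This is not the authors' implementation: the `awsd_sketch` function, the segment count, and the mean-squared-difference score (a crude stand-in for the paper's Gaussian Scale Mixture mutual-information weights) are all illustrative assumptions.

```python
import numpy as np

def awsd_sketch(frames, num_segments=4):
    """Toy illustration (not the authors' code): collapse a video of
    shape (T, H, W, 3) into one (H, W, 3) RGB map via weighted temporal
    pooling. Each segment is weighted by a crude dissimilarity score
    against the previous segment, a stand-in for the paper's Gaussian
    Scale Mixture mutual-information weighting."""
    segments = np.array_split(frames, num_segments, axis=0)
    means = [seg.mean(axis=0) for seg in segments]  # per-segment mean image
    weights = [1.0]  # first segment gets a base weight
    for prev, curr in zip(means[:-1], means[1:]):
        # give more importance to segments that differ from their predecessor
        diff = np.mean((curr - prev) ** 2)
        weights.append(1.0 + diff)
    w = np.array(weights) / np.sum(weights)  # normalize to sum to 1
    # weighted sum of segment means -> single H x W x 3 map
    return np.tensordot(w, np.stack(means), axes=1)

video = np.random.rand(32, 8, 8, 3)  # 32 synthetic frames
rgb_map = awsd_sketch(video)
print(rgb_map.shape)  # (8, 8, 3)
```

The resulting map can then be fed to any still-image classifier, which is the practical appeal the abstract describes.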

Tavakolian Mohammad, Tavakoli Hamed R., Hadid Abdenour

Publication type:
A4 Article in conference proceedings

Place of publication:
2019 IEEE International Conference on Computer Vision (ICCV): 27th October – 2nd November 2019, Seoul, Korea

6G Publication


Full citation:
M. Tavakolian, H. R. Tavakoli and A. Hadid, “AWSD: Adaptive Weighted Spatiotemporal Distillation for Video Representation,” 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea (South), 2019, pp. 8019-8028.
