Guiding Monocular Depth Estimation Using Depth-Attention Volume

Recovering scene depth from a single image is an ill-posed problem that requires additional priors, often referred to as monocular depth cues, to disambiguate different 3D interpretations. In recent work, these priors have been learned end-to-end from large datasets using deep neural networks. In this paper, we propose guiding depth estimation to favor planar structures, which are especially ubiquitous in indoor environments. This is achieved by incorporating a non-local coplanarity constraint into the network with a novel attention mechanism called depth-attention volume (DAV). Experiments on two popular indoor datasets, NYU-Depth-v2 and ScanNet, show that our method achieves state-of-the-art depth estimation results while using only a fraction of the number of parameters needed by competing methods. Code is available at:
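In the paper, the DAV is learned by the network itself. Purely as an illustration, assuming the DAV stores non-negative pairwise attention weights between pixels, the non-local refinement of a coarse depth map could be sketched roughly as follows (the function name `dav_refine_depth` is hypothetical, not from the paper):

```python
import numpy as np

def dav_refine_depth(coarse_depth, dav):
    """Refine a coarse depth map by non-local attention-weighted averaging.

    coarse_depth: (H, W) array of initial depth predictions.
    dav: (H*W, H*W) depth-attention volume; dav[i, j] is the non-negative
         attention weight that pixel j contributes to pixel i.
         (Illustrative layout only; the paper's learned DAV may differ.)
    Returns an (H, W) refined depth map.
    """
    h, w = coarse_depth.shape
    d = coarse_depth.reshape(-1)                    # flatten to (H*W,)
    weights = dav / dav.sum(axis=1, keepdims=True)  # row-normalize attention
    refined = weights @ d                           # non-local weighted average
    return refined.reshape(h, w)
```

For example, a uniform DAV averages all depths (a degenerate global plane), while an identity DAV leaves the coarse prediction unchanged; a learned DAV would interpolate between such extremes, pooling depth evidence across coplanar pixels.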

Huynh Lam, Nguyen-Ha Phong, Matas Jiri, Rahtu Esa, Heikkilä Janne

Publication type:
A4 Article in conference proceedings

Place of publication:
Computer Vision – ECCV 2020 – 16th European Conference, 2020, Proceedings

Keywords:
attention mechanism, depth estimation, monocular depth


Full citation:
Huynh L., Nguyen-Ha P., Matas J., Rahtu E., Heikkilä J. (2020) Guiding Monocular Depth Estimation Using Depth-Attention Volume. In: Vedaldi A., Bischof H., Brox T., Frahm JM. (eds) Computer Vision – ECCV 2020. ECCV 2020. Lecture Notes in Computer Science, vol 12371. Springer, Cham.
