Depth map fusion is an essential part of both stereo and RGB-D based 3-D reconstruction pipelines. Whether produced by passive stereo reconstruction or by an active depth sensor, such as the Microsoft Kinect, depth maps are noisy and may be poorly registered initially. In this paper, we introduce a method that can handle outliers and, in particular, significant registration errors. The proposed method first fuses a sequence of depth maps into a single non-redundant point cloud, merging redundant points together while giving more weight to more certain measurements. The original depth maps are then re-registered to the fused point cloud to refine the original camera extrinsic parameters, and the fusion is performed again with the refined extrinsics. This procedure is repeated until the result is satisfactory or no significant changes occur between iterations. The method is robust to outliers and erroneous depth measurements, as well as to significant depth map registration errors caused by inaccurate initial camera poses.
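The abstract describes an iterative loop that alternates weighted depth map fusion with per-view refinement of the camera extrinsics. The sketch below is a minimal, illustrative Python/NumPy version of that loop, not the authors' implementation: the voxel-grid weighted averaging, the brute-force nearest-neighbour matching, and the closed-form (Kabsch) pose update are simplifying assumptions, and all function names are hypothetical.

```python
import numpy as np

def transform(points, pose):
    # Apply a 4x4 camera-to-world rigid transform to an (N, 3) point array.
    return points @ pose[:3, :3].T + pose[:3, 3]

def fuse(clouds, weights, voxel=0.01):
    # Merge redundant points by weighted averaging inside voxel cells,
    # giving more influence to measurements with higher confidence.
    # (Stand-in for the paper's non-redundant fusion step.)
    pts = np.vstack(clouds)
    w = np.concatenate(weights)
    keys = np.floor(pts / voxel).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.ravel()
    num = np.zeros((inv.max() + 1, 3))
    den = np.zeros(inv.max() + 1)
    np.add.at(num, inv, pts * w[:, None])
    np.add.at(den, inv, w)
    return num / den[:, None]

def refine_pose(points_cam, pose, fused):
    # Re-register one depth map to the fused cloud: match each point to its
    # nearest fused point and solve the rigid update in closed form (Kabsch).
    src = transform(points_cam, pose)
    d = np.linalg.norm(src[:, None, :] - fused[None, :, :], axis=2)  # brute force, fine for a sketch
    tgt = fused[d.argmin(axis=1)]
    mu_s, mu_t = src.mean(0), tgt.mean(0)
    U, _, Vt = np.linalg.svd((src - mu_s).T @ (tgt - mu_t))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_t - R @ mu_s
    upd = np.eye(4)
    upd[:3, :3], upd[:3, 3] = R, t
    return upd @ pose                  # refined extrinsics

def fuse_and_refine(points_cam_list, poses, weights, iters=5):
    # Alternate fusion and per-view pose refinement for a fixed number of
    # iterations (a convergence test could replace the fixed count).
    for _ in range(iters):
        clouds = [transform(p, T) for p, T in zip(points_cam_list, poses)]
        fused = fuse(clouds, weights)
        poses = [refine_pose(p, T, fused) for p, T in zip(points_cam_list, poses)]
    clouds = [transform(p, T) for p, T in zip(points_cam_list, poses)]
    return fuse(clouds, weights), poses
```

In this sketch each view is an (N, 3) array of points in its own camera frame, `poses` are 4x4 camera-to-world matrices obtained from the initial (possibly inaccurate) calibration, and `weights` are per-point confidences, e.g. inversely related to the depth uncertainty of each measurement.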
Ylimäki Markus, Heikkilä Janne, Kannala Juho
A4 Article in conference proceedings
Place of publication:
2018 24th International Conference on Pattern Recognition (ICPR)
M. Ylimäki, J. Heikkilä and J. Kannala, “Accurate 3-D Reconstruction with RGB-D Cameras using Depth Map Fusion and Pose Refinement,” 2018 24th International Conference on Pattern Recognition (ICPR), Beijing, 2018, pp. 1977-1982. doi: 10.1109/ICPR.2018.8545508