Revisiting motion-based respiration measurement from videos

Video-based motion analysis gave rise to contactless respiration rate monitoring that measures subtle respiratory movements of the chest or abdomen. In this paper, we revisit this technology via a large video benchmark that includes six categories of practical challenges. We analyze two video properties (i.e. pixel intensity variation and pixel movement) that are essential for respiratory motion analysis, and various signal extraction approaches (from conventional methods to recent Convolutional Neural Network (CNN)-based methods). We find that pixel movement quantifies respiratory motion better than pixel intensity variation in various conditions. We also conclude that a simple conventional approach (e.g. Zero-phase Component Analysis) can outperform a CNN that learns the respiration-signal extraction from training data, which raises the more general question of whether CNNs can improve video-based physiological signal measurement.
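To make the pixel-movement idea concrete, here is a minimal, hypothetical sketch (not the paper's actual pipeline): a synthetic image sequence contains an intensity edge standing in for the chest boundary, oscillating vertically at a known respiration rate; the per-frame vertical displacement is estimated by cross-correlating 1-D intensity profiles against a reference frame, and the dominant frequency of that motion signal recovers the rate. All names, the profile-correlation displacement estimator, and the synthetic parameters are illustrative assumptions.

```python
import numpy as np

def vertical_profile(frame):
    # Collapse a frame to a 1-D vertical intensity profile (mean over columns).
    return frame.mean(axis=1)

def estimate_shift(profile, reference):
    # Estimate vertical displacement (in pixels) via cross-correlation
    # of mean-removed profiles; a crude stand-in for optical flow.
    corr = np.correlate(profile - profile.mean(),
                        reference - reference.mean(), mode="full")
    return np.argmax(corr) - (len(reference) - 1)

# Synthetic sequence: an intensity edge ("chest boundary") oscillating
# vertically at 0.3 Hz (18 breaths/min), sampled at 10 fps for 30 s.
fps, duration, rate_hz = 10, 30, 0.3
n_frames, height, width = fps * duration, 64, 32
t = np.arange(n_frames) / fps
displacement = 3.0 * np.sin(2 * np.pi * rate_hz * t)  # amplitude in pixels

frames = np.zeros((n_frames, height, width))
rows = np.arange(height)[:, None]
for i, d in enumerate(displacement):
    frames[i] = (rows > height // 2 + int(round(d))).astype(float)

# Motion-based respiration signal: per-frame vertical shift vs. frame 0.
reference = vertical_profile(frames[0])
signal = np.array([estimate_shift(vertical_profile(f), reference)
                   for f in frames])

# Dominant frequency of the motion signal approximates the respiration rate.
spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(n_frames, d=1 / fps)
est_rate = freqs[np.argmax(spectrum)]
print(f"estimated respiration rate: {est_rate:.2f} Hz")
```

In a real video, the profile-correlation step would be replaced by dense optical flow or a learned motion representation, and the spectral peak would be searched only within the physiological respiration band.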

Zhan Qi, Hu Jingjing, Yu Zitong, Li Xiaobai, Wang Wenjin

A4 Article in conference proceedings

42nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC 2020

Q. Zhan, J. Hu, Z. Yu, X. Li and W. Wang, "Revisiting motion-based respiration measurement from videos," 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Montreal, QC, Canada, 2020, pp. 5909-5912, doi: 10.1109/EMBC44109.2020.9175662.

https://doi.org/10.1109/EMBC44109.2020.9175662
http://urn.fi/urn:nbn:fi-fe2020110689602