Improving the accuracy of camera-based PPG estimation by 3D tracking of the head
In recent years, imaging photoplethysmography (iPPG), which uses a camera to detect biological signals, has attracted attention as a relatively easy and flexible non-contact measurement method. Camera-based iPPG can be integrated into a wide range of applications such as self-driving cars, newborn monitoring, and telemedicine, and is already being put into practical use. However, because both the camera and the measurement target can move freely, stable measurement is difficult. Previous research has addressed this problem by tracking 2D image features, but such tracking becomes unstable under dynamic movement. In this work, we aim to build a method that is robust to movement by using the three-dimensional shape of the face for tracking.
Algorithm flow

iPPG using a 3D tracking method

We propose a method that improves the robustness of iPPG to movement by obtaining 3D point cloud information from the depth data and using it for more accurate face tracking. The proposed method computes the iPPG signal from an RGBD image sequence. Rigid-body transformation parameters are estimated with the Iterative Closest Point (ICP) algorithm. A Visible-Mask, a mask of the observable pixels, is generated for all frames using the rigid transformation parameters estimated in each frame. The luminance series is computed from a patch that is transformed with the rigid transformation parameters and projected onto the image plane, with the Visible-Mask used to create the patch mask. The first frame of the image sequence is defined as the reference frame, and its point cloud is used as the reference point cloud for alignment. The reference patch mask and the corresponding reference patch point cloud are also set in the reference frame. The reference patch mask is tracked by rigidly transforming the reference patch point cloud with the estimated transformation parameters and projecting the transformed point cloud onto the image plane.
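As a concrete sketch of the "transform and project" step described above (not the authors' implementation), the tracked patch mask can be computed roughly as follows, assuming a pinhole camera model with intrinsic matrix K; the function name and interface are hypothetical.

```python
import numpy as np

def project_patch(points, T, K, image_shape):
    """Rigidly transform the reference-patch points with T (4x4) and project
    them onto the image plane with pinhole intrinsics K (3x3), returning a
    boolean patch mask over the image."""
    pts_h = np.hstack([points, np.ones((len(points), 1))])  # Nx4 homogeneous
    cam = (T @ pts_h.T)[:3]                    # 3xN points in the camera frame
    uv = (K @ cam)[:2] / cam[2]                # perspective division -> pixels
    u, v = np.round(uv).astype(int)
    h, w = image_shape
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h) & (cam[2] > 0)
    mask = np.zeros(image_shape, dtype=bool)
    mask[v[inside], u[inside]] = True          # tracked patch mask on the image
    return mask
```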

Alignment of 3D point cloud

To align the point cloud of the reference frame with the point clouds of the other frames, we apply the ICP algorithm. For estimating the rigid transformation parameters, we use a multiscale ICP algorithm that varies the correspondence search distance. Multiscale ICP first performs alignment at a global scale and then refines it at a local scale. A minimal sketch of such coarse-to-fine registration is shown below.
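The sketch uses Open3D with a point-to-plane ICP variant as one common choice; the voxel sizes and correspondence distances are illustrative assumptions, not values from the paper.

```python
import numpy as np
import open3d as o3d

def multiscale_icp(source, target, init=np.eye(4)):
    """Coarse-to-fine rigid registration: a loose correspondence search
    distance first (global alignment), then a tight one (local refinement)."""
    T = init
    # (voxel size, max correspondence distance) per scale, coarse -> fine.
    for voxel, max_dist in [(0.01, 0.05), (0.0025, 0.01)]:
        src = source.voxel_down_sample(voxel)
        tgt = target.voxel_down_sample(voxel)
        src.estimate_normals()
        tgt.estimate_normals()
        result = o3d.pipelines.registration.registration_icp(
            src, tgt, max_dist, T,
            o3d.pipelines.registration.TransformationEstimationPointToPlane())
        T = result.transformation
    return T  # 4x4 rigid transformation aligning source to target
```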
Alignment flow

Removal of occluded regions

Because the reference patch used for tracking is fixed, part of the reference patch may become hidden as the subject moves. Such occlusion increases the noise in the luminance series and reduces the accuracy of heart rate estimation, so the occluded regions must be removed. After the rigid transformation parameters of each frame have been estimated, the point cloud of the reference frame is rigidly transformed with the estimated parameters. Points of the transformed reference point cloud that are hidden by other points when viewed from the camera position are removed as occlusions.
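As a rough stand-in for this visibility test (not the authors' exact procedure), Open3D's hidden-point-removal operator can be used to keep only the patch points that remain visible in front of the current frame's point cloud; the function name and the radius heuristic below are assumptions.

```python
import numpy as np
import open3d as o3d

def visible_patch_indices(tracked_patch, frame_pcd, camera=np.zeros(3)):
    """Occlusion test sketch: combine the rigidly transformed reference patch
    with the current frame's point cloud and keep only the patch points that
    are still visible from the camera position (assumed to be the origin of
    the camera coordinate frame)."""
    n_patch = len(tracked_patch.points)
    combined = tracked_patch + frame_pcd        # patch points come first
    # Radius heuristic for hidden-point removal, tied to the cloud extent.
    diameter = np.linalg.norm(combined.get_max_bound() - combined.get_min_bound())
    _, visible = combined.hidden_point_removal(camera, radius=diameter * 100)
    # Keep only indices that belong to the tracked patch, not the frame cloud.
    return [i for i in visible if i < n_patch]
```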

Creation of observation patch

Each value of the luminance series used for heart rate estimation is the average brightness inside a patch. To create this patch, we use 68 facial landmarks. The position of the facial patch used for the brightness calculation is very important; the forehead and cheeks are known to be suitable regions for extracting the iPPG signal. Because the forehead of every subject in our experiments was covered by hair, only cheek patches were used. For the reference frame, a Raw-Mask of the cheek region is created from the facial landmarks. Pixels observed in both the Raw-Mask and the Visible-Mask are used as the observation patch. Since the observation patch obtained in this way may contain holes, they are filled using a morphological transformation. The luminance series for heart rate estimation is then computed as the average brightness of the pixels in the observation patch.
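A minimal sketch of this patch construction is given below, assuming the 68 landmarks are available as a (68, 2) array of pixel coordinates (e.g. from a standard 68-point detector) and that the masks are 8-bit images; the cheek polygon indices and the kernel size are illustrative assumptions.

```python
import cv2
import numpy as np

# Illustrative cheek-polygon indices for the standard 68-point landmark layout;
# the exact regions used in the paper are not specified (assumption).
LEFT_CHEEK = [1, 2, 3, 4, 31, 39]
RIGHT_CHEEK = [15, 14, 13, 12, 35, 42]

def observation_patch(gray, landmarks, visible_mask):
    """Build the observation patch: rasterize cheek polygons from the 68
    facial landmarks (Raw-Mask), intersect with the Visible-Mask, fill small
    holes with a morphological closing, and return the mean patch brightness.
    `landmarks` is a (68, 2) pixel-coordinate array, `visible_mask` an
    8-bit mask image (0 or 255)."""
    raw_mask = np.zeros(gray.shape, dtype=np.uint8)
    for idx in (LEFT_CHEEK, RIGHT_CHEEK):
        cv2.fillPoly(raw_mask, [landmarks[idx].astype(np.int32)], 255)
    patch = cv2.bitwise_and(raw_mask, visible_mask)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    patch = cv2.morphologyEx(patch, cv2.MORPH_CLOSE, kernel)
    mean_brightness = cv2.mean(gray, mask=patch)[0]
    return mean_brightness, patch
```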
Flow of creating observation patch
A comparison of the proposed 3D tracking method with a conventional 2D tracking method demonstrated the effectiveness of tracking with the proposed method.

Comparison of the proposed method (3D tracking) and a 2D tracking method. (Left) Results for static subjects. (Right) Results for dynamic subjects.

