Capturing Detailed Deformations of Moving Human Bodies

SIGGRAPH 2021

He Chen, Hyojoon Park, Kutay Macit, Ladislav Kavan

University of Utah

Paper
Supplementary Material
Code
Data

Overview Video

Abstract & Method

We present a new method to capture detailed human motion, sampling more than 1000 unique points on the body. Our method outputs highly accurate 4D (spatio-temporal) point coordinates and, crucially, automatically assigns a unique label to each of the points. The locations and unique labels of the points are inferred from individual 2D input images only, without relying on temporal tracking or any human body shape or skeletal kinematics models. Therefore, our captured point trajectories contain all of the details from the input images, including motion due to breathing, muscle contractions, and flesh deformation, and are well suited to be used as training data to fit advanced models of the human body and its motion. The key idea behind our system is a new type of motion capture suit which contains a special pattern with checkerboard-like corners and two-letter codes. The images from our multi-camera system are processed by a sequence of neural networks which are trained to localize the corners and recognize the codes, while being robust to suit stretching and self-occlusions of the body. Our system relies only on standard RGB or monochrome sensors, fully passive lighting, and a passive suit, making our method easy to replicate, deploy, and use. Our experiments demonstrate highly accurate captures of a wide variety of human poses, including challenging motions such as yoga, gymnastics, or rolling on the ground.
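The pipeline described above has two halves: per-camera neural networks localize the checkerboard-like corners and decode their two-letter codes, and the resulting labeled 2D detections are lifted to labeled 3D points across the calibrated cameras. As a rough illustration of the geometric half only, the following is a minimal sketch (not the authors' code) of how detections sharing a decoded code across cameras could be triangulated with linear DLT; the corner detector and code recognizer are assumed to exist elsewhere, and the 3x4 projection matrices are assumed to come from a standard multi-camera calibration.

import numpy as np

def triangulate_dlt(points_2d, proj_mats):
    """Linear (DLT) triangulation of one labeled point seen in several cameras.

    points_2d : list of (x, y) pixel coordinates, one per observing camera
    proj_mats : list of 3x4 projection matrices for those same cameras
    Returns the 3D point as a length-3 numpy array.
    """
    rows = []
    for (x, y), P in zip(points_2d, proj_mats):
        rows.append(x * P[2] - P[0])   # each view contributes two linear constraints
        rows.append(y * P[2] - P[1])
    A = np.stack(rows)
    _, _, vt = np.linalg.svd(A)        # least-squares solution is the last right singular vector
    X = vt[-1]
    return X[:3] / X[3]                # de-homogenize

def reconstruct_frame(detections_per_camera, proj_mats):
    """detections_per_camera : list (one entry per camera) of dicts mapping a
    decoded two-letter code such as 'AK' to its detected 2D corner location.
    proj_mats : list of 3x4 projection matrices, one per camera.
    Returns a dict mapping each code to its triangulated 3D position."""
    labeled_points_3d = {}
    all_codes = set().union(*[d.keys() for d in detections_per_camera])
    for code in all_codes:
        obs, mats = [], []
        for det, P in zip(detections_per_camera, proj_mats):
            if code in det:
                obs.append(det[code])
                mats.append(P)
        if len(obs) >= 2:              # need at least two views to triangulate
            labeled_points_3d[code] = triangulate_dlt(obs, mats)
    return labeled_points_3d

Because every point carries its own code, correspondence across cameras and across frames is a simple dictionary lookup here, which is what lets the real system avoid temporal tracking and body models entirely.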


BibTeX

@article{chen2021capturing,
  title={Capturing detailed deformations of moving human bodies},
  author={Chen, He and Park, Hyojoon and Macit, Kutay and Kavan, Ladislav},
  journal={ACM Transactions on Graphics (TOG)},
  volume={40},
  number={4},
  pages={1--18},
  year={2021},
  publisher={ACM New York, NY, USA}
}