
[ECCV 2024] Official Code for "WHAC: World-grounded Humans and Cameras"


WHAC: World-grounded Humans and Cameras

This is the official implementation of WHAC: World-grounded Humans and Cameras (ECCV 2024), featuring SMPLest-X, the latest foundation model for human pose and shape estimation.

[Homepage]      [Paper]      [arXiv]      [SMPLest-X]      [SMPLer-X]     


Installation

Prepare the environment

git clone https://github.com/wqyin/WHAC.git --recursive
cd WHAC

bash scripts/installation.sh

Download the pretrained model for WHAC

  • Download whac_motion_velocimeter.pth.tar from here and place it under ./pretrained_models.

Setup SMPLest-X

  • Prepare the pretrained models and parametric human models for SMPLest-X following the official instructions here.
  • Make sure the file structure under ./third_party/SMPLest-X is correct.

Setup DPVO

  • Setup steps for DPVO are included in ./scripts/installation.sh.
  • Refer to the Setup and Installation section of the DPVO repository if any issues arise during installation.

File structure

.
├── assets
├── configs
├── demo
├── lib
├── outputs
├── pretrained_models
│   └── whac_motion_velocimeter.pth.tar 
├── scripts
├── third_party
│   ├── DPVO
│   │   └── pretrained_models
│   │       └── dpvo.pth
│   └── SMPLest-X
│       ├── pretrained_models
│       │   └── smplest_x_h40
│       │       ├── smplest_x_h40.pth.tar
│       │       └── config_base.py
│       └── human_models
│           └── human_model_files
├── whac
├── README.md
└── requirements.txt
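Before running inference, the layout above can be sanity-checked with a short shell snippet. This is a minimal sketch, run from the repository root, that only tests the checkpoint files listed in the tree:

```shell
# Sketch: check that the pretrained checkpoints from the tree above are in place.
# Prints OK or MISSING for each expected file.
for f in \
  pretrained_models/whac_motion_velocimeter.pth.tar \
  third_party/DPVO/pretrained_models/dpvo.pth \
  third_party/SMPLest-X/pretrained_models/smplest_x_h40/smplest_x_h40.pth.tar \
  third_party/SMPLest-X/pretrained_models/smplest_x_h40/config_base.py
do
  if [ -e "$f" ]; then echo "OK      $f"; else echo "MISSING $f"; fi
done
```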

Inference

  • Place your video under the ./demo folder, then run:
bash scripts/inference.sh {SEQ_NAME}
  • Alternatively, you can quickly try the demo with our test videos:
bash scripts/prepare_demo.sh

bash scripts/inference.sh dance_demo.mp4
bash scripts/inference.sh skateboard_demo.mp4
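To process several clips in one go, the per-video command above can be wrapped in a loop. This is a minimal sketch; it assumes inference.sh takes just the bare file name, as in the examples above:

```shell
# Sketch: run inference on every .mp4 placed under ./demo.
for v in demo/*.mp4; do
  [ -e "$v" ] || continue          # skip if the glob matched nothing
  bash scripts/inference.sh "$(basename "$v")"
done
```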

WHAC-A-Mole

Check out our homepage for dataset download links.


Citation

@inproceedings{yin2024whac,
  title={Whac: World-grounded humans and cameras},
  author={Yin, Wanqi and Cai, Zhongang and Wang, Ruisi and Wang, Fanzhou and Wei, Chen and Mei, Haiyi and Xiao, Weiye and Yang, Zhitao and Sun, Qingping and Yamashita, Atsushi and Yang, Lei and Liu, Ziwei},
  booktitle={European Conference on Computer Vision},
  pages={20--37},
  year={2024},
  organization={Springer}
}
@article{yin2025smplest,
  title={SMPLest-X: Ultimate Scaling for Expressive Human Pose and Shape Estimation},
  author={Yin, Wanqi and Cai, Zhongang and Wang, Ruisi and Zeng, Ailing and Wei, Chen and Sun, Qingping and Mei, Haiyi and Wang, Yanjun and Pang, Hui En and Zhang, Mingyuan and Zhang, Lei and Loy, Chen Change and Yamashita, Atsushi and Yang, Lei and Liu, Ziwei},
  journal={arXiv preprint arXiv:2501.09782},
  year={2025}
}

Explore More SMPLCap Projects

  • [arXiv'25] SMPLest-X: An extended version of SMPLer-X with stronger foundation models.
  • [ECCV'24] WHAC: World-grounded human pose and camera estimation from monocular videos.
  • [CVPR'24] AiOS: An all-in-one-stage pipeline combining detection and 3D human reconstruction.
  • [NeurIPS'23] SMPLer-X: Scaling up EHPS towards a family of generalist foundation models.
  • [NeurIPS'23] RoboSMPLX: A framework to enhance the robustness of whole-body pose and shape estimation.
  • [ICCV'23] Zolly: 3D human mesh reconstruction from perspective-distorted images.
  • [arXiv'23] PointHPS: 3D HPS from point clouds captured in real-world settings.
  • [NeurIPS'22] HMR-Benchmarks: A comprehensive benchmark of HPS datasets, backbones, and training strategies.
