
CoMA: Compositional Human Motion Generation with Multi-modal Agents

Project Page | arXiv


Shanlin Sun*, Gabriel De Araujo*, Jiaqi Xu*, Shenghan Zhou*, Hanwen Zhang, Ziheng Huang, Chenyu You and Xiaohui Xie

(* Equal Contribution)

  • Presented by University of California, Irvine; Southeast University; Chongqing University; Huazhong University of Science and Technology; Northeastern University; Stony Brook University
  • 📬 Primary contact: Shanlin Sun ([email protected])

Highlights

🌟 CoMA is a compositional human motion generation framework built on multi-modal agents.

🌟 CoMA generates high-quality motion sequences from long, complex, and context-rich text prompts.

📰 News

📝 TODO List

  • Release the full CoMA implementation.
  • Release MVC training code.
  • Release SPAM training code.
  • Release MVC inference code and checkpoints.
  • Release SPAM inference code and checkpoints.
