
[ICLR 2025] FaceShot: Bring Any Character into Life

Junyao Gao, Yanan Sun‡*, Fei Shen, Xin Jiang, Zhening Xing, Kai Chen*, Cairong Zhao*

(* corresponding authors, ‡ project leader)

Bringing characters like a teddy bear to life requires a bit of magic. FaceShot makes this magic a reality with a training-free portrait animation framework that can animate any character from any driving video, and it works especially well for non-human characters such as emojis and toys.

Your star is our fuel! We're revving up the engines with it!

News

  • [2025/6/26] 🔥 We release the preprocessing scripts for pre-storing target images and the appearance gallery.
  • [2025/1/23] 🔥 FaceShot is accepted to ICLR 2025!
  • [2025/1/23] 🔥 We release the code, project page and paper.

TODO List

  • (2025.06.26) Preprocessing script for pre-storing target images and the appearance gallery.
  • (2025.06.26) Appearance gallery.
  • Gradio demo.

Gallery

Bring Any Character into Life!!!


Animation results cover toy characters, 2D anime characters, 3D anime characters, and animal characters.

Check the gallery on our project page for more visual results!

Get Started

Clone the Repository

git clone https://github.com/open-mmlab/FaceShot.git
cd ./FaceShot

Environment Setup

The environment has been tested with CUDA 12.4.

conda create -n faceshot python=3.10
conda activate faceshot
pip install -r requirements.txt
pip install "git+https://github.com/facebookresearch/pytorch3d.git"
pip install "git+https://github.com/XPixelGroup/BasicSR.git"
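To verify the setup, an optional sanity check (this assumes the PyTorch build pulled in by requirements.txt is CUDA-enabled):

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
python -c "import pytorch3d; print(pytorch3d.__version__)"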

Downloading Checkpoints

  1. Download the CMP checkpoint from MOFA-Video and put it into ./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints.

  2. Download the ckpts folder from the Hugging Face repo, which contains the necessary pretrained checkpoints, and put it under ./ckpts. You may use git lfs to download the entire ckpts folder, as sketched below.
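A minimal sketch of the git lfs route; <org>/<repo> is a placeholder for the Hugging Face repo referenced above, not an actual repo id:

git lfs install
git clone https://huggingface.co/<org>/<repo> faceshot_hf   # placeholder repo id
mv faceshot_hf/ckpts ./ckpts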

Building Appearance Gallery

You can download pre-stored domain features from here, or create your own appearance gallery by following these steps (a sketch of the full flow follows the list):

  1. Place character images for a specific domain into ./characters/images/xx/, where xx represents the domain index.

  2. Run python annotation.py to annotate landmarks for the characters. Please note that for non-human characters, manual annotation is required. The landmarks will be saved in ./characters/points/xx/.

  3. Run python process_features.py to extract CLIP and diffusion features for each domain. The features will be saved in ./target_domains/.
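For example, for a hypothetical toy domain stored under index 01 (the domain index and image name below are illustrative, not part of the repo):

mkdir -p ./characters/images/01
cp teddy_bear.png ./characters/images/01/   # any character image for this domain
python annotation.py                        # landmarks saved to ./characters/points/01/
python process_features.py                  # features saved to ./target_domains/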

Running Inference Scripts

chmod +x inference.sh
./inference.sh

License and Citation

All assets and code are covered by the repository's license unless specified otherwise.

If this work is helpful for your research, please consider citing the following BibTeX entry.

@article{gao2025faceshot,
  title={FaceShot: Bring Any Character into Life},
  author={Gao, Junyao and Sun, Yanan and Shen, Fei and Jiang, Xin and Xing, Zhening and Chen, Kai and Zhao, Cairong},
  journal={arXiv preprint arXiv:2503.00740},
  year={2025}
}

Acknowledgements

The code is built upon MOFA-Video and DIFT.
