
Commit d3c43cd

Merge branch 'main' into tijmen-add-encode-decode-example
2 parents: fec645a + 2f464ae

File tree

12 files changed: +663 −89 lines changed


README.md

Lines changed: 3 additions & 2 deletions

@@ -38,8 +38,9 @@ python3 generate.py --gpt-ckpt-path model_weights/gpt.safetensors --shape-ckpt-p
 The output will be an `.obj` file saved in the specified `output` directory.
 
 If you want to render a turntable gif of the mesh, you can use the `--render-gif` flag, which will render a turntable gif of the mesh
-and save it as `turntable.gif` in the specified `output` directory. Note that you must have blender installed and in your path to render the turntable gif.
+and save it as `turntable.gif` in the specified `output` directory.
 
+> **Note**: You must have Blender installed and available in your system's PATH to render the turntable GIF. You can download it from [Blender's official website](https://www.blender.org/). Ensure that the Blender executable is accessible from the command line.
 
 ### Shape tokenization and de-tokenization
 
 To tokenize a 3D shape into token indices and reconstruct it back, you can use the following command:
@@ -48,4 +49,4 @@ To tokenize a 3D shape into token indices and reconstruct it back, you can use t
 python3 vq_vae_encode_decode.py --shape-ckpt-path model_weights/shape.safetensors --mesh-path ./outputs/output.obj
 ```
 
-This will process the `.obj` file located at `./outputs/output.obj` and prints the tokenized representation as well as exports the mesh reconstructed from the token indices.
+This will process the `.obj` file located at `./outputs/output.obj` and print the tokenized representation, as well as export the mesh reconstructed from the token indices.
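To make that round trip concrete, here is a toy sketch of what tokenization and de-tokenization mean for a vector-quantized codebook: latent vectors are mapped to the index of their nearest codebook entry, and "decoding" simply looks those indices back up. This is an illustration only, not the repository's `vq_vae_encode_decode.py`; the embedding width and the random codebook are invented for the example, while `NUM_CODES = 16384` mirrors `num_codes` in `configs/open_model.yaml`.

```python
# Toy VQ tokenize / de-tokenize round trip (illustrative sketch, not repository code).
import numpy as np

NUM_CODES = 16384   # matches num_codes in configs/open_model.yaml
CODE_DIM = 32       # assumed embedding width, chosen only for this example

rng = np.random.default_rng(0)
codebook = rng.normal(size=(NUM_CODES, CODE_DIM)).astype(np.float32)

def encode(latents: np.ndarray) -> np.ndarray:
    """Return the index of the nearest codebook entry for each latent vector."""
    # (N, 1, D) - (1, K, D) -> (N, K) squared distances, argmin over the codebook axis
    d2 = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)

def decode(indices: np.ndarray) -> np.ndarray:
    """Look the token indices back up in the codebook."""
    return codebook[indices]

latents = rng.normal(size=(8, CODE_DIM)).astype(np.float32)
tokens = encode(latents)         # token indices, shape (8,)
reconstructed = decode(tokens)   # quantized latents, shape (8, CODE_DIM)
print(tokens)
```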

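Since the note added in this commit requires Blender to be discoverable on PATH, a pre-flight check can avoid a failed render. The following is a minimal sketch, not part of the repository; it assumes the documented `generate.py` entry point, the `--gpt-ckpt-path` argument, and the `--render-gif` flag, and any remaining arguments would need to be filled in from the README.

```python
# Pre-flight check before requesting a turntable GIF (illustrative sketch, not repository code).
# Verifies Blender resolves on PATH, then calls the documented generate.py entry point.
import shutil
import subprocess
import sys

if shutil.which("blender") is None:
    sys.exit("Blender not found on PATH; install it from https://www.blender.org/ "
             "or omit --render-gif.")

subprocess.run(
    [
        "python3", "generate.py",
        "--gpt-ckpt-path", "model_weights/gpt.safetensors",  # checkpoint path from the README
        "--render-gif",  # renders turntable.gif into the output directory
    ],
    check=True,
)
```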
configs/open_model.yaml

Lines changed: 0 additions & 1 deletion

@@ -28,7 +28,6 @@ shape_model:
 num_encoder_layers: 13
 encoder_cross_attention_levels: [0, 2, 4, 8]
 num_decoder_layers: 24
-dropout: 0.0
 num_codes: 16384
 
 text_model_pretrained_model_name_or_path: "openai/clip-vit-large-patch14"
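With `dropout` removed from the file here, a quick way to sanity-check what the config now contains is to load it and inspect the `shape_model` block. A minimal sketch, assuming PyYAML is available and that the keys shown in the hunk are nested under `shape_model`:

```python
# Sketch: inspect configs/open_model.yaml after this change (assumes PyYAML and
# that the keys shown in the hunk live under the shape_model mapping).
import yaml

with open("configs/open_model.yaml") as f:
    cfg = yaml.safe_load(f)

shape_cfg = cfg["shape_model"]
print(shape_cfg["num_codes"])           # 16384
print(shape_cfg["num_decoder_layers"])  # 24
# dropout is no longer set in the file, so the model code's own default applies.
print(shape_cfg.get("dropout", "not set"))
```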
