[enhancement]: support lodestones/Chroma #7986

Open
1 of 10 tasks
keturn opened this issue May 3, 2025 · 2 comments
Labels
enhancement New feature or request

Comments

@keturn
Contributor

keturn commented May 3, 2025

Is there an existing issue for this?

  • I have searched the existing issues

What should this feature add?

https://huggingface.co/lodestones/Chroma is derived from FLUX.1 [Schnell], with some changes to the model architecture that make it about 25% smaller. It retains the Apache license.

Additional Content

Currently the model manager is willing to install a Chroma GGUF, but running it fails with errors like:

Missing key(s) in state_dict: "time_in.in_layer.weight", […],
Unexpected key(s) in state_dict: "distilled_guidance_layer.in_proj.bias", […]
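
A quick way to see how far a checkpoint has diverged from what the model class expects is to diff the two key sets before loading. A minimal sketch (the function and the abbreviated key lists are illustrative, not Invoke's actual loader API; the full key lists are elided in the error above):

```python
def diff_state_dict_keys(expected_keys, checkpoint_keys):
    """Compare the keys a model expects with the keys a checkpoint provides."""
    expected = set(expected_keys)
    found = set(checkpoint_keys)
    return {
        "missing": sorted(expected - found),     # model wants, checkpoint lacks
        "unexpected": sorted(found - expected),  # checkpoint has, model doesn't know
    }

# One key from each list in the error above, for illustration:
flux_model_keys = ["time_in.in_layer.weight"]
chroma_ckpt_keys = ["distilled_guidance_layer.in_proj.bias"]
report = diff_state_dict_keys(flux_model_keys, chroma_ckpt_keys)
# report["missing"]    -> ["time_in.in_layer.weight"]
# report["unexpected"] -> ["distilled_guidance_layer.in_proj.bias"]
```

The shape of the mismatch (whole missing blocks like `time_in`, whole new blocks like `distilled_guidance_layer`) is what suggests an architecture change rather than a corrupt file.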

Checklist

Based on my experience with making a third-party node for Chroma, here are some things to keep in mind for a full integration:

  • Uses CFG and not FLUX Guidance.
  • Does not use CLIP. (You can pass CLIP to it, because of its history from Schnell, but it's not trained with CLIP and is usable without it.)
  • T5 encoder output should be neither padded nor cropped to 256/512 tokens…
    • …but it's often useful to put padding tokens on the negative prompt, especially if it's not large on its own.
  • Should be able to use FLUX LoRA, with some tolerance as in feat(LoRA): allow LoRA layer patcher to continue past unknown layers #8059.
  • Should also be able to use Chroma-specific LoRA.
  • torch.compile is much more worthwhile than it was with SDXL. (This is likely true of FLUX too, but I haven't checked.)
  • Can use FLUX Redux. (But Flex.1 Redux looks pretty bad.)
  • ControlNet - TBD
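
The negative-prompt padding point above can be sketched in a few lines. This is a hypothetical helper, not code from Invoke or Chroma; the only outside assumption is that T5's `<pad>` token has id 0 (which it does for the standard T5 tokenizer):

```python
def pad_negative_ids(positive_ids, negative_ids, pad_id=0):
    """Pad the negative prompt's token ids so it is at least as long as the
    positive prompt. Per the checklist above: don't pad the positive prompt
    to a fixed 256/512 length, but padding tokens on the negative side tend
    to help, especially when the negative prompt is short."""
    target = max(len(positive_ids), len(negative_ids))
    return negative_ids + [pad_id] * (target - len(negative_ids))

pos = [101, 7, 42, 9, 13]   # 5 tokens, made-up ids
neg = [101, 3]              # short negative prompt
padded = pad_negative_ids(pos, neg)
# padded -> [101, 3, 0, 0, 0]
```

Note the asymmetry: the positive prompt keeps its natural length, only the negative side is padded out.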
keturn added the enhancement label on May 3, 2025
@keturn
Contributor Author

keturn commented May 4, 2025

I was hoping that after I got the model loaded, we could use Invoke's existing Flux denoise loop. After some exploration, I've found it's not that easy—Chroma and Invoke's implementations have diverged in different ways from their Flux ancestry.

Invoke has added various ControlNets, Redux, and so on, while Chroma's forward function takes an additional token-masking parameter. So even a minimal implementation that ignores ControlNets doesn't fit.
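
One way to picture the mismatch is an adapter that presents an Invoke-style denoise-step call over Chroma's forward(). Everything below is a hypothetical sketch: neither class nor any parameter name is the real Invoke or Chroma API, and it only illustrates why the ControlNet inputs have nowhere to go:

```python
class ChromaForwardAdapter:
    """Wrap a Chroma-style model, whose forward() wants an extra token mask
    and knows nothing about Invoke's ControlNet/Redux extensions, behind an
    Invoke-style call signature (names here are illustrative)."""

    def __init__(self, chroma_model, txt_mask):
        self.model = chroma_model
        self.txt_mask = txt_mask  # the token mask Chroma's forward() requires

    def __call__(self, img, txt, timesteps, controlnet_residuals=None, **extras):
        # A minimal implementation has to drop the ControlNet inputs:
        # Chroma's forward() has no slot for them.
        if controlnet_residuals:
            raise NotImplementedError("ControlNet not supported for Chroma yet")
        return self.model(img=img, txt=txt, txt_mask=self.txt_mask,
                          timesteps=timesteps)

dummy = lambda **kw: sorted(kw)  # stand-in model, just echoes argument names
step = ChromaForwardAdapter(dummy, txt_mask=[1, 1, 0])
out = step(img="latents", txt="emb", timesteps=0.5)
# out -> ['img', 'timesteps', 'txt', 'txt_mask']
```

The `**extras` catch-all is where Invoke's Redux and similar inputs would arrive, which is exactly the part a real integration has to decide how to handle.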

@keturn
Contributor Author

keturn commented May 7, 2025

I've got an attempted node implementation at https://gitlab.com/keturn/chroma_invoke but it really needs more eyes on it.

It loads (from either safetensors or GGUF now) and it runs, but the output looks as if it's not denoising at all.

Update: It's working well now, with most of the features Invoke expects from a denoising node implemented.
