Error using 'On Trigger' mode #704
Comments
Yes, this isn't implemented right now.
Ah, good to know, thank you.
May I ask whether ComfyUI has detailed update logs, so that we can better understand its functional updates?
How long will it take you to finish implementing it? This issue was reported well over a year ago. The option should really be removed until it's usable, especially if there's a chance it never will be implemented.
bump
bump
Still missing: a way to set up execution order so that models aren't all loaded at the beginning.
It's been 2 years and it still hasn't been fixed?
Unfortunately for us, it seems the priorities lie with continually adding support for the endless flood of new tech, which is fine, but it leaves a lot of workflow-building features like this one or group nodes by the wayside, stranded as placeholders or barely functional: hard to work with and buggy. There are also lots of nodes that should simply be merged, like all the scheduler nodes that each expose only one specific scheduler, so you have to switch connections instead of picking it within the node.
I ran into the same problem: a large workflow with heavy components such as an LLM and WAN, which must be loaded strictly one after another, with complete memory cleanup between stages. The nodes of the second stage would sometimes load too early and take up memory. This could easily be fixed with a working Trigger input available on every node, but that input simply does not work, and loader nodes have no other inputs to hook it to. Memory management is also terrible: it is impossible to see what is loaded at a given moment or how much garbage has accumulated. Over time memory fills up with leftovers, and all the memory-cleaning "vacuum" nodes simply do not clean it completely; only a restart helps.
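On the visibility point: I can't speak to ComfyUI's internal bookkeeping, but the raw numbers can at least be read from PyTorch's CUDA allocator with standard calls, from a small script or a custom node run between stages. A minimal sketch using only `torch.cuda` APIs (the function name is made up for illustration); the cleanup is best-effort only, and models still referenced by the running workflow will not be freed:

```python
import gc
import torch

def report_and_cleanup_vram(label: str = "") -> None:
    """Print current VRAM usage and try to release cached memory.

    Uses only standard PyTorch allocator APIs; anything still referenced
    by the workflow stays in memory regardless of this call.
    """
    if not torch.cuda.is_available():
        print("CUDA not available")
        return

    allocated = torch.cuda.memory_allocated() / 1024**3  # tensors currently in use
    reserved = torch.cuda.memory_reserved() / 1024**3    # memory held by the caching allocator
    print(f"[{label}] allocated: {allocated:.2f} GiB, reserved: {reserved:.2f} GiB")

    # Best-effort cleanup: drop unreferenced Python objects, then return
    # cached (but unused) blocks to the driver.
    gc.collect()
    torch.cuda.empty_cache()

# Example: call between stages of a multi-model workflow.
report_and_cleanup_vram("after stage 1")
```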
I am having the same issue when a large model needs to be loaded first to make sure it stays in VRAM. I had no luck with custom tools like the Execution Order Controller from the Impact Pack, because model loader nodes have no inputs :( Now that bigger models like Flux D and Hunyuan Video are out, even the 24GB/32GB of VRAM on top-end video cards is not enough -- we need better memory management tools.
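For anyone hitting this today, a common community workaround (not an official feature) is a small custom node that passes a value through unchanged while also taking an arbitrary "signal" input, which creates an explicit dependency edge in the graph. If a loader's widget (e.g. ckpt_name) is converted to an input in the UI, the pass-through can feed it, so the loader cannot run before the signal's producer has finished. This is only a minimal sketch, assuming the usual custom-node layout and the wildcard-type trick used by several node packs; the class and node names are made up for illustration:

```python
# Hypothetical custom node: passes a value through only after a "signal"
# input exists, so downstream nodes (e.g. a loader whose widget was
# converted to an input) cannot execute before the signal's producer.

class AnyType(str):
    """Wildcard type string that compares equal to any ComfyUI socket type."""
    def __ne__(self, other):
        return False

any_type = AnyType("*")

class WaitForSignal:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "value": (any_type, {}),   # the data to pass through
                "signal": (any_type, {}),  # output of the stage that must finish first
            }
        }

    RETURN_TYPES = (any_type,)
    RETURN_NAMES = ("value",)
    FUNCTION = "passthrough"
    CATEGORY = "utils"

    def passthrough(self, value, signal):
        # 'signal' is ignored; it exists only to create a graph dependency,
        # which is the effect the non-functional 'On Trigger' mode is meant to give.
        return (value,)

NODE_CLASS_MAPPINGS = {"WaitForSignal": WaitForSignal}
NODE_DISPLAY_NAME_MAPPINGS = {"WaitForSignal": "Wait For Signal (pass-through)"}
```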
I'm trying to force one parallel chain of nodes to execute before another by using the 'On Trigger' mode to initiate the second chain after finishing the first one. All I'm doing is connecting 'OnExecuted' of the last node in the first chain to 'OnTrigger' of the first node in the second chain. At first, all goes well and the chains start executing in the desired order, but when it gets to the node with 'OnTrigger' it throws this:
My actual workflow is way too complicated for this, but this here has the same problem:
