
Error using 'On Trigger' mode #704


Open
Bocian-1 opened this issue May 28, 2023 · 11 comments · May be fixed by #7443
Labels
User Support A user needs help with something, probably not a bug.

Comments


Bocian-1 commented May 28, 2023

I'm trying to force one parallel chain of nodes to execute before another by using the 'On Trigger' mode to initiate the second chain after finishing the first one. All I'm doing is connecting 'OnExecuted' of the last node in the first chain to 'OnTrigger' of the first node in the second chain. At first, all goes well and the chains start executing in the desired order, but when it gets to the node with 'OnTrigger' it throws this:

!!! Exception during processing !!!
Traceback (most recent call last):
  File "C:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 135, in recursive_execute
    input_data_all = get_input_data(inputs, class_def, unique_id, outputs, prompt, extra_data)
  File "C:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 26, in get_input_data
    obj = outputs[input_unique_id][output_index]
IndexError: list index out of range

My actual workflow is way too complicated for this, but this here has the same problem:
(screenshot of a minimal workflow reproducing the issue)
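Until 'On Trigger' works, the usual workaround is to create a real data dependency between the two chains with a passthrough node, so the executor must finish the first chain before the second can start. Below is a minimal sketch of such a custom node, assuming the community "wildcard type" convention for accepting any socket type; the `AnyType` and `ExecutionGate` names are illustrative, not part of ComfyUI core:

```python
# Hedged sketch: force execution order via a data dependency instead of
# the unimplemented 'On Trigger' mode. Wire the first chain's output into
# `after`, and route the second chain's input through `value`.

class AnyType(str):
    """Wildcard socket type: compares equal to any other type string.

    This is a common community convention in custom node packs, not a
    core ComfyUI API.
    """
    def __ne__(self, other):
        return False

any_type = AnyType("*")

class ExecutionGate:
    """Passes `value` through unchanged, but only after `after` has been
    computed -- creating an ordering dependency between two chains."""

    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "value": (any_type, {}),
                "after": (any_type, {}),
            }
        }

    RETURN_TYPES = (any_type,)
    FUNCTION = "gate"
    CATEGORY = "utils"

    def gate(self, value, after):
        # `after` is ignored; its mere presence makes the executor
        # evaluate the first chain before this node can run.
        return (value,)

NODE_CLASS_MAPPINGS = {"ExecutionGate": ExecutionGate}
```

Because loader nodes have no inputs (as noted further down in this thread), the gate has to sit on a connection *downstream* of the loader, e.g. between the loader and the sampler, which only helps when the expensive work happens after that edge.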

@comfyanonymous (Owner)

Yes this isn't implemented right now.

@Bocian-1 (Author)

Ah, good to know, thank you

@wpsfuyun

> Yes this isn't implemented right now.

May I ask whether ComfyUI keeps a changelog of important updates, so we can better follow its functional changes?


Zolxys commented Aug 15, 2024

How long will it take you to finish implementing it? This issue was reported well over a year ago.

The option should really be removed until it's usable, especially if there's a chance it will never be implemented.

@M4R5W0N6

bump

@mcmonkey4eva mcmonkey4eva added the User Support A user needs help with something, probably not a bug. label Sep 19, 2024

HECer commented Mar 16, 2025

bump


HECer commented Mar 16, 2025

There's still no way to set up an execution order that prevents all the models from loading at the beginning.


KLL535 commented Mar 20, 2025

It's been 2 years and it still hasn't been fixed?

@Bocian-1 (Author)

> It's been 2 years and it still hasn't been fixed?

Unfortunately for us, the priorities seem to lie with continually adding support for the endless flood of new tech. That's fine in itself, but it leaves a lot of workflow-building features like this one, or group nodes, by the wayside: stranded as placeholders, barely functional, buggy, and difficult to work with. There are also lots of nodes that should just be merged, like all the scheduler nodes that each expose only one specific scheduler, so you have to switch connections instead of picking one within the node.

I may be in a minority in caring about these features, but I want to build some really complex workflows for very curated results that you can't get from a prompt and an upscaler alone. Doing anything beyond that in a remotely usable way with the current tooling has been immensely painful and annoying. Updates occasionally breaking things doesn't help either; I'd accept that as the price of improvement if it didn't sometimes take a few days to repair just one of my workflows.


KLL535 commented Mar 25, 2025

I had one problem: a large workflow with heavy components, such as an LLM and WAN, which must be loaded STRICTLY in turn, with complete memory cleaning between stages. The nodes of the second stage sometimes loaded too early and took up memory. This could easily be fixed with a Trigger input, universal for all nodes, but that input simply does not work. And the loader nodes have no other inputs, so there is nothing to hook it to.

And the memory management is terrible. There's no way to see what is loaded at a particular moment in time, or how much of it is garbage. Over time, memory fills up with garbage, and all the memory-cleaning "vacuum cleaner" nodes simply do not work: they don't clean completely, and only a restart helps.


brdann commented Mar 29, 2025

I am having the same issue: the large model needs to be loaded first to ensure it is in VRAM. I had no luck with custom tools like the Execution Order Controller from the Impact Pack, because there are no inputs on model loader nodes :( Now that bigger models like Flux D and Hunyuan video are out, even 24 GB/32 GB of VRAM on top-end video cards is not enough; we need better memory-management tools.

@akionux akionux linked a pull request Apr 3, 2025 that will close this issue
9 participants