Description

Is there an existing issue for this problem?
- I have searched the existing issues

Operating system: Windows
GPU vendor: Nvidia (CUDA)
GPU model: RTX 3090
GPU VRAM: 24 GB
Version number: 3.11
Browser: Google Chrome
Python dependencies: No response

What happened
Since the update, the VAE, text encoder, and main model are reloaded for every change. That is, even with the same image — if you cancel the changes and start a new generation — the models are loaded again each time. This takes a long time, sometimes as much as five minutes. It was not like this before.
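For reference, the slow reloads can be quantified from the cache-miss lines later in the log (e.g. the T5 text encoder at 16:38:11 and the Flux transformer at 16:39:52). This is just back-of-the-envelope arithmetic on the sizes and times the log reports, not InvokeAI code:

```python
# Effective load throughput for the slow (cache-miss) model loads,
# using the sizes/times reported in the log below.

def throughput_mb_s(size_mb: float, seconds: float) -> float:
    """Effective load throughput in MB/s."""
    return size_mb / seconds

# T5 text_encoder_2: 9083.39 MB loaded in 245.93 s
t5 = throughput_mb_s(9083.39, 245.93)
# Flux transformer: 5679.45 MB loaded in 61.42 s
flux = throughput_mb_s(5679.45, 61.42)

print(f"T5 encoder:       {t5:.1f} MB/s")   # ~37 MB/s
print(f"Flux transformer: {flux:.1f} MB/s")  # ~92 MB/s
```

Throughput this far below typical SSD read speeds suggests the models are being evicted from the RAM cache and re-read from disk (or pagefile) on each run, which matches the "Model cache misses: 7 / Models cleared from cache: 2" statistics in the 16:40:40 graph stats.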
[2025-05-14 13:39:51,244]::[InvokeAI]::INFO --> Executing queue item 6818, session b624bd56-c776-4575-92cf-b8048d8d1688
[2025-05-14 13:39:51,274]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model 'a5fe000a-27f0-4ef3-91fd-ef1f597d10f0:transformer' (Flux) onto cuda device in 0.00s. Total model size: 5679.45MB, VRAM: 5679.45MB (100.0%)
0%| | 0/26 [00:00<?, ?it/s]
4%|3 | 1/26 [00:03<01:16, 3.06s/it]
8%|7 | 2/26 [00:06<01:13, 3.07s/it]
12%|#1 | 3/26 [00:09<01:10, 3.07s/it]
15%|#5 | 4/26 [00:12<01:07, 3.08s/it]
19%|#9 | 5/26 [00:15<01:04, 3.08s/it]
23%|##3 | 6/26 [00:18<01:01, 3.09s/it]
27%|##6 | 7/26 [00:21<00:58, 3.09s/it]
31%|### | 8/26 [00:24<00:55, 3.09s/it]
35%|###4 | 9/26 [00:27<00:52, 3.09s/it]
38%|###8 | 10/26 [00:30<00:49, 3.09s/it]
42%|####2 | 11/26 [00:33<00:46, 3.09s/it]
46%|####6 | 12/26 [00:37<00:43, 3.09s/it]
50%|##### | 13/26 [00:40<00:40, 3.09s/it]
54%|#####3 | 14/26 [00:43<00:37, 3.09s/it]
58%|#####7 | 15/26 [00:46<00:34, 3.09s/it]
62%|######1 | 16/26 [00:49<00:30, 3.09s/it]
65%|######5 | 17/26 [00:52<00:27, 3.09s/it]
69%|######9 | 18/26 [00:55<00:24, 3.09s/it]
73%|#######3 | 19/26 [00:58<00:21, 3.09s/it]
77%|#######6 | 20/26 [01:01<00:18, 3.09s/it]
81%|######## | 21/26 [01:04<00:15, 3.09s/it]
85%|########4 | 22/26 [01:07<00:12, 3.09s/it]
88%|########8 | 23/26 [01:11<00:09, 3.09s/it]
92%|#########2| 24/26 [01:14<00:06, 3.10s/it]
96%|#########6| 25/26 [01:17<00:03, 3.10s/it]
100%|##########| 26/26 [01:20<00:00, 3.10s/it]
100%|##########| 26/26 [01:20<00:00, 3.09s/it]
[2025-05-14 13:41:11,987]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model 'ac0e3aa4-8f5b-42de-88e4-5b4581a3e9ef:vae' (AutoEncoder) onto cuda device in 0.01s. Total model size: 159.87MB, VRAM: 159.87MB (100.0%)
[2025-05-14 13:41:13,097]::[InvokeAI]::INFO --> Graph stats: b624bd56-c776-4575-92cf-b8048d8d1688
Node Calls Seconds VRAM Used
flux_model_loader 1 0.000s 9.642G
flux_vae_encode 1 0.001s 9.642G
lora_selector 1 0.000s 9.642G
collect 2 0.000s 9.642G
flux_lora_collection_loader 1 0.000s 9.642G
flux_text_encoder 1 0.000s 9.642G
core_metadata 1 0.000s 9.642G
tomask 1 0.000s 9.642G
create_gradient_mask 1 0.000s 9.642G
flux_denoise 1 80.714s 10.555G
flux_vae_decode 1 0.888s 13.225G
expand_mask_with_fade 1 0.001s 9.642G
apply_mask_to_image 1 0.221s 9.642G
TOTAL GRAPH EXECUTION TIME: 81.825s
TOTAL GRAPH WALL TIME: 81.829s
RAM used by InvokeAI process: 6.87G (-1.149G)
RAM used to load models: 5.72G
VRAM in use: 9.642G
RAM cache statistics:
Model cache hits: 3
Model cache misses: 0
Models cached: 8
Models cleared from cache: 0
Cache high water mark: 8.76/0.00G
[2025-05-14 13:41:13,116]::[InvokeAI]::INFO --> Executing queue item 6819, session e0d70a5d-6891-4bd7-bf02-2627fab93f5e
[2025-05-14 13:41:13,148]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model 'a5fe000a-27f0-4ef3-91fd-ef1f597d10f0:transformer' (Flux) onto cuda device in 0.00s. Total model size: 5679.45MB, VRAM: 5679.45MB (100.0%)
0%| | 0/26 [00:00<?, ?it/s]
4%|3 | 1/26 [00:03<01:17, 3.09s/it]
8%|7 | 2/26 [00:06<01:14, 3.10s/it]
12%|#1 | 3/26 [00:09<01:11, 3.10s/it]
15%|#5 | 4/26 [00:12<01:08, 3.10s/it]
19%|#9 | 5/26 [00:15<01:05, 3.10s/it]
23%|##3 | 6/26 [00:18<01:02, 3.10s/it]
27%|##6 | 7/26 [00:21<00:58, 3.10s/it]
31%|### | 8/26 [00:24<00:55, 3.10s/it]
35%|###4 | 9/26 [00:27<00:52, 3.10s/it]
38%|###8 | 10/26 [00:30<00:49, 3.10s/it]
42%|####2 | 11/26 [00:34<00:46, 3.10s/it]
46%|####6 | 12/26 [00:37<00:43, 3.09s/it]
50%|##### | 13/26 [00:40<00:40, 3.10s/it]
54%|#####3 | 14/26 [00:43<00:37, 3.10s/it]
58%|#####7 | 15/26 [00:46<00:34, 3.10s/it]
62%|######1 | 16/26 [00:49<00:30, 3.10s/it]
65%|######5 | 17/26 [00:52<00:27, 3.10s/it]
69%|######9 | 18/26 [00:55<00:24, 3.10s/it]
73%|#######3 | 19/26 [00:58<00:21, 3.09s/it]
77%|#######6 | 20/26 [01:01<00:18, 3.06s/it]
81%|######## | 21/26 [01:04<00:14, 3.00s/it]
85%|########4 | 22/26 [01:07<00:11, 2.98s/it]
88%|########8 | 23/26 [01:10<00:08, 2.92s/it]
92%|#########2| 24/26 [01:13<00:05, 2.84s/it]
96%|#########6| 25/26 [01:15<00:02, 2.84s/it]
100%|##########| 26/26 [01:18<00:00, 2.79s/it]
100%|##########| 26/26 [01:18<00:00, 3.02s/it]
[2025-05-14 13:42:31,795]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model 'ac0e3aa4-8f5b-42de-88e4-5b4581a3e9ef:vae' (AutoEncoder) onto cuda device in 0.01s. Total model size: 159.87MB, VRAM: 159.87MB (100.0%)
[2025-05-14 13:42:33,250]::[InvokeAI]::INFO --> Graph stats: e0d70a5d-6891-4bd7-bf02-2627fab93f5e
Node Calls Seconds VRAM Used
flux_model_loader 1 0.000s 9.642G
flux_vae_encode 1 0.000s 9.642G
lora_selector 1 0.000s 9.642G
collect 2 0.000s 9.642G
flux_lora_collection_loader 1 0.000s 9.642G
flux_text_encoder 1 0.000s 9.642G
core_metadata 1 0.001s 9.642G
tomask 1 0.000s 9.642G
create_gradient_mask 1 0.000s 9.642G
expand_mask_with_fade 1 0.000s 9.642G
flux_denoise 1 78.654s 10.555G
flux_vae_decode 1 1.219s 13.225G
apply_mask_to_image 1 0.231s 9.642G
TOTAL GRAPH EXECUTION TIME: 80.105s
TOTAL GRAPH WALL TIME: 80.109s
RAM used by InvokeAI process: 6.87G (-0.003G)
RAM used to load models: 5.72G
VRAM in use: 9.642G
RAM cache statistics:
Model cache hits: 3
Model cache misses: 0
Models cached: 8
Models cleared from cache: 0
Cache high water mark: 8.76/0.00G
[2025-05-14 13:44:17,874]::[InvokeAI]::INFO --> Executing queue item 6820, session c83f5df0-8af1-4869-8e4d-0aadb6f023a7
[2025-05-14 13:44:17,901]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model 'H:\INVOKE\models.download_cache\https__github.com_sanster_models_releases_download_add_big_lama_big-lama.pt\big-lama.pt' (RecursiveScriptModule) onto cuda device in 0.00s. Total model size: 194.77MB, VRAM: 194.77MB (100.0%)
[2025-05-14 13:44:18,621]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model 'ac0e3aa4-8f5b-42de-88e4-5b4581a3e9ef:vae' (AutoEncoder) onto cuda device in 0.02s. Total model size: 159.87MB, VRAM: 159.87MB (100.0%)
[2025-05-14 13:44:19,468]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model 'a5fe000a-27f0-4ef3-91fd-ef1f597d10f0:transformer' (Flux) onto cuda device in 0.01s. Total model size: 5679.45MB, VRAM: 5679.45MB (100.0%)
0%| | 0/26 [00:00<?, ?it/s]
4%|3 | 1/26 [00:03<01:15, 3.02s/it]
8%|7 | 2/26 [00:06<01:12, 3.01s/it]
12%|#1 | 3/26 [00:09<01:09, 3.01s/it]
15%|#5 | 4/26 [00:12<01:06, 3.02s/it]
19%|#9 | 5/26 [00:15<01:03, 3.03s/it]
23%|##3 | 6/26 [00:18<01:00, 3.04s/it]
27%|##6 | 7/26 [00:21<00:57, 3.05s/it]
31%|### | 8/26 [00:24<00:54, 3.06s/it]
35%|###4 | 9/26 [00:27<00:52, 3.06s/it]
38%|###8 | 10/26 [00:30<00:49, 3.06s/it]
42%|####2 | 11/26 [00:33<00:45, 3.07s/it]
46%|####6 | 12/26 [00:36<00:42, 3.07s/it]
50%|##### | 13/26 [00:39<00:39, 3.07s/it]
54%|#####3 | 14/26 [00:42<00:36, 3.07s/it]
58%|#####7 | 15/26 [00:45<00:33, 3.07s/it]
62%|######1 | 16/26 [00:48<00:30, 3.07s/it]
65%|######5 | 17/26 [00:51<00:27, 3.07s/it]
69%|######9 | 18/26 [00:55<00:24, 3.07s/it]
73%|#######3 | 19/26 [00:58<00:21, 3.07s/it]
77%|#######6 | 20/26 [01:01<00:18, 3.07s/it]
81%|######## | 21/26 [01:04<00:15, 3.07s/it]
85%|########4 | 22/26 [01:07<00:12, 3.07s/it]
88%|########8 | 23/26 [01:10<00:09, 3.07s/it]
92%|#########2| 24/26 [01:13<00:06, 3.07s/it]
96%|#########6| 25/26 [01:16<00:03, 3.08s/it]
100%|##########| 26/26 [01:19<00:00, 3.08s/it]
100%|##########| 26/26 [01:19<00:00, 3.06s/it]
[2025-05-14 13:45:39,150]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model 'ac0e3aa4-8f5b-42de-88e4-5b4581a3e9ef:vae' (AutoEncoder) onto cuda device in 0.01s. Total model size: 159.87MB, VRAM: 159.87MB (100.0%)
[2025-05-14 13:45:40,243]::[InvokeAI]::INFO --> Graph stats: c83f5df0-8af1-4869-8e4d-0aadb6f023a7
Node Calls Seconds VRAM Used
flux_model_loader 1 0.000s 9.642G
lora_selector 1 0.000s 9.642G
collect 2 0.002s 9.642G
flux_lora_collection_loader 1 0.000s 9.642G
flux_text_encoder 1 0.000s 9.642G
core_metadata 1 0.001s 9.642G
infill_lama 1 0.693s 10.940G
flux_vae_encode 1 0.504s 12.181G
tomask 2 0.022s 9.642G
mask_combine 1 0.067s 9.642G
create_gradient_mask 1 0.260s 9.642G
flux_denoise 1 79.696s 10.555G
flux_vae_decode 1 0.788s 13.225G
expand_mask_with_fade 1 0.061s 9.642G
apply_mask_to_image 1 0.242s 9.642G
TOTAL GRAPH EXECUTION TIME: 82.336s
TOTAL GRAPH WALL TIME: 82.341s
RAM used by InvokeAI process: 6.74G (-0.144G)
RAM used to load models: 5.91G
VRAM in use: 9.642G
RAM cache statistics:
Model cache hits: 5
Model cache misses: 0
Models cached: 8
Models cleared from cache: 0
Cache high water mark: 8.76/0.00G
[2025-05-14 13:45:40,259]::[InvokeAI]::INFO --> Executing queue item 6821, session bdd262cd-f544-4106-8f74-a65c804db69f
[2025-05-14 13:45:40,293]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model 'a5fe000a-27f0-4ef3-91fd-ef1f597d10f0:transformer' (Flux) onto cuda device in 0.00s. Total model size: 5679.45MB, VRAM: 5679.45MB (100.0%)
0%| | 0/26 [00:00<?, ?it/s]
4%|3 | 1/26 [00:03<01:16, 3.08s/it]
8%|7 | 2/26 [00:06<01:13, 3.08s/it]
12%|#1 | 3/26 [00:09<01:10, 3.08s/it]
15%|#5 | 4/26 [00:12<01:07, 3.08s/it]
19%|#9 | 5/26 [00:15<01:04, 3.08s/it]
23%|##3 | 6/26 [00:18<01:01, 3.08s/it]
27%|##6 | 7/26 [00:21<00:58, 3.09s/it]
31%|### | 8/26 [00:24<00:55, 3.09s/it]
35%|###4 | 9/26 [00:27<00:52, 3.09s/it]
38%|###8 | 10/26 [00:30<00:49, 3.09s/it]
42%|####2 | 11/26 [00:33<00:46, 3.10s/it]
46%|####6 | 12/26 [00:37<00:43, 3.10s/it]
50%|##### | 13/26 [00:40<00:40, 3.10s/it]
54%|#####3 | 14/26 [00:43<00:37, 3.10s/it]
58%|#####7 | 15/26 [00:46<00:34, 3.10s/it]
62%|######1 | 16/26 [00:49<00:31, 3.11s/it]
65%|######5 | 17/26 [00:52<00:27, 3.11s/it]
69%|######9 | 18/26 [00:55<00:24, 3.11s/it]
73%|#######3 | 19/26 [00:58<00:21, 3.11s/it]
77%|#######6 | 20/26 [01:01<00:18, 3.11s/it]
81%|######## | 21/26 [01:05<00:15, 3.12s/it]
85%|########4 | 22/26 [01:08<00:12, 3.11s/it]
88%|########8 | 23/26 [01:11<00:09, 3.11s/it]
92%|#########2| 24/26 [01:14<00:06, 3.10s/it]
96%|#########6| 25/26 [01:17<00:03, 3.05s/it]
100%|##########| 26/26 [01:20<00:00, 2.99s/it]
100%|##########| 26/26 [01:20<00:00, 3.08s/it]
[2025-05-14 13:47:00,513]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model 'ac0e3aa4-8f5b-42de-88e4-5b4581a3e9ef:vae' (AutoEncoder) onto cuda device in 0.01s. Total model size: 159.87MB, VRAM: 159.87MB (100.0%)
[2025-05-14 13:47:01,580]::[InvokeAI]::INFO --> Graph stats: bdd262cd-f544-4106-8f74-a65c804db69f
Node Calls Seconds VRAM Used
flux_model_loader 1 0.000s 9.642G
lora_selector 1 0.000s 9.642G
collect 2 0.000s 9.642G
flux_lora_collection_loader 1 0.000s 9.642G
flux_text_encoder 1 0.000s 9.642G
core_metadata 1 0.000s 9.642G
infill_lama 1 0.000s 9.642G
flux_vae_encode 1 0.000s 9.642G
tomask 2 0.001s 9.642G
mask_combine 1 0.000s 9.642G
create_gradient_mask 1 0.000s 9.642G
flux_denoise 1 80.225s 10.555G
flux_vae_decode 1 0.828s 13.225G
expand_mask_with_fade 1 0.000s 9.642G
apply_mask_to_image 1 0.235s 9.642G
TOTAL GRAPH EXECUTION TIME: 81.289s
TOTAL GRAPH WALL TIME: 81.294s
RAM used by InvokeAI process: 6.74G (+0.007G)
RAM used to load models: 5.72G
VRAM in use: 9.642G
RAM cache statistics:
Model cache hits: 3
Model cache misses: 0
Models cached: 8
Models cleared from cache: 0
Cache high water mark: 8.76/0.00G
[2025-05-14 13:47:01,597]::[InvokeAI]::INFO --> Executing queue item 6822, session 281e95a8-ba98-4fed-a774-5a48e476908d
[2025-05-14 13:47:01,631]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model 'a5fe000a-27f0-4ef3-91fd-ef1f597d10f0:transformer' (Flux) onto cuda device in 0.00s. Total model size: 5679.45MB, VRAM: 5679.45MB (100.0%)
0%| | 0/26 [00:00<?, ?it/s]
4%|3 | 1/26 [00:02<01:12, 2.90s/it]
8%|7 | 2/26 [00:05<01:09, 2.88s/it]
12%|#1 | 3/26 [00:08<01:06, 2.91s/it]
15%|#5 | 4/26 [00:11<01:02, 2.86s/it]
19%|#9 | 5/26 [00:14<00:58, 2.79s/it]
23%|##3 | 6/26 [00:16<00:56, 2.80s/it]
27%|##6 | 7/26 [00:19<00:52, 2.79s/it]
31%|### | 8/26 [00:22<00:50, 2.78s/it]
35%|###4 | 9/26 [00:25<00:46, 2.76s/it]
38%|###8 | 10/26 [00:28<00:44, 2.78s/it]
42%|####2 | 11/26 [00:30<00:41, 2.76s/it]
46%|####6 | 12/26 [00:33<00:38, 2.76s/it]
50%|##### | 13/26 [00:36<00:35, 2.76s/it]
54%|#####3 | 14/26 [00:39<00:33, 2.78s/it]
58%|#####7 | 15/26 [00:42<00:31, 2.86s/it]
58%|#####7 | 15/26 [00:42<00:31, 2.83s/it]
[2025-05-14 13:47:44,134]::[InvokeAI]::INFO --> Graph stats: 281e95a8-ba98-4fed-a774-5a48e476908d
Node Calls Seconds VRAM Used
flux_model_loader 1 0.000s 9.642G
lora_selector 1 0.000s 9.642G
collect 2 0.000s 9.642G
flux_lora_collection_loader 1 0.000s 9.642G
flux_text_encoder 1 0.001s 9.642G
core_metadata 1 0.000s 9.642G
infill_lama 1 0.000s 9.642G
flux_vae_encode 1 0.000s 9.642G
tomask 2 0.000s 9.642G
mask_combine 1 0.001s 9.642G
create_gradient_mask 1 0.000s 9.642G
flux_denoise 1 42.513s 10.555G
TOTAL GRAPH EXECUTION TIME: 42.515s
TOTAL GRAPH WALL TIME: 42.517s
RAM used by InvokeAI process: 6.76G (+0.013G)
RAM used to load models: 5.56G
VRAM in use: 9.642G
RAM cache statistics:
Model cache hits: 2
Model cache misses: 0
Models cached: 8
Models cleared from cache: 0
Cache high water mark: 8.76/0.00G
[2025-05-14 13:48:07,362]::[InvokeAI]::INFO --> Executing queue item 6823, session 3447c858-f0ad-45b1-bff1-b606976b84b7
[2025-05-14 13:48:07,403]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model 'H:\INVOKE\models.download_cache\https__github.com_sanster_models_releases_download_add_big_lama_big-lama.pt\big-lama.pt' (RecursiveScriptModule) onto cuda device in 0.02s. Total model size: 194.77MB, VRAM: 194.77MB (100.0%)
[2025-05-14 13:48:08,155]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model 'ac0e3aa4-8f5b-42de-88e4-5b4581a3e9ef:vae' (AutoEncoder) onto cuda device in 0.02s. Total model size: 159.87MB, VRAM: 159.87MB (100.0%)
[2025-05-14 13:48:08,854]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model 'a5fe000a-27f0-4ef3-91fd-ef1f597d10f0:transformer' (Flux) onto cuda device in 0.02s. Total model size: 5679.45MB, VRAM: 5679.45MB (100.0%)
0%| | 0/26 [00:00<?, ?it/s]
4%|3 | 1/26 [00:03<01:16, 3.05s/it]
8%|7 | 2/26 [00:06<01:13, 3.05s/it]
12%|#1 | 3/26 [00:09<01:10, 3.05s/it]
15%|#5 | 4/26 [00:11<01:04, 2.91s/it]
19%|#9 | 5/26 [00:14<00:59, 2.83s/it]
23%|##3 | 6/26 [00:17<00:55, 2.77s/it]
27%|##6 | 7/26 [00:19<00:52, 2.76s/it]
31%|### | 8/26 [00:22<00:49, 2.73s/it]
35%|###4 | 9/26 [00:25<00:45, 2.70s/it]
38%|###8 | 10/26 [00:27<00:42, 2.69s/it]
42%|####2 | 11/26 [00:30<00:40, 2.67s/it]
46%|####6 | 12/26 [00:33<00:37, 2.66s/it]
50%|##### | 13/26 [00:35<00:34, 2.66s/it]
54%|#####3 | 14/26 [00:38<00:31, 2.65s/it]
58%|#####7 | 15/26 [00:41<00:29, 2.69s/it]
62%|######1 | 16/26 [00:43<00:26, 2.68s/it]
65%|######5 | 17/26 [00:46<00:24, 2.67s/it]
69%|######9 | 18/26 [00:49<00:21, 2.66s/it]
73%|#######3 | 19/26 [00:51<00:18, 2.66s/it]
77%|#######6 | 20/26 [00:54<00:15, 2.65s/it]
81%|######## | 21/26 [00:57<00:13, 2.65s/it]
85%|########4 | 22/26 [00:59<00:10, 2.67s/it]
88%|########8 | 23/26 [01:02<00:08, 2.69s/it]
92%|#########2| 24/26 [01:05<00:05, 2.67s/it]
96%|#########6| 25/26 [01:07<00:02, 2.67s/it]
100%|##########| 26/26 [01:10<00:00, 2.66s/it]
100%|##########| 26/26 [01:10<00:00, 2.71s/it]
[2025-05-14 13:49:19,447]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model 'ac0e3aa4-8f5b-42de-88e4-5b4581a3e9ef:vae' (AutoEncoder) onto cuda device in 0.01s. Total model size: 159.87MB, VRAM: 159.87MB (100.0%)
[2025-05-14 13:49:20,530]::[InvokeAI]::INFO --> Graph stats: 3447c858-f0ad-45b1-bff1-b606976b84b7
Node Calls Seconds VRAM Used
flux_model_loader 1 0.000s 9.642G
lora_selector 1 0.000s 9.642G
collect 2 0.000s 9.642G
flux_lora_collection_loader 1 0.000s 9.642G
flux_text_encoder 1 0.000s 9.642G
core_metadata 1 0.001s 9.642G
infill_lama 1 0.740s 10.940G
flux_vae_encode 1 0.401s 12.181G
tomask 2 0.023s 9.642G
mask_combine 1 0.021s 9.642G
create_gradient_mask 1 0.254s 9.642G
flux_denoise 1 70.610s 10.555G
flux_vae_decode 1 0.784s 13.225G
expand_mask_with_fade 1 0.049s 9.642G
apply_mask_to_image 1 0.247s 9.642G
TOTAL GRAPH EXECUTION TIME: 73.129s
TOTAL GRAPH WALL TIME: 73.138s
RAM used by InvokeAI process: 6.74G (-0.028G)
RAM used to load models: 5.91G
VRAM in use: 9.642G
RAM cache statistics:
Model cache hits: 5
Model cache misses: 0
Models cached: 8
Models cleared from cache: 0
Cache high water mark: 8.76/0.00G
[2025-05-14 13:49:20,548]::[InvokeAI]::INFO --> Executing queue item 6824, session 89174453-90a3-4c2f-ab85-c3fc5c65a1be
[2025-05-14 13:49:20,590]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model 'a5fe000a-27f0-4ef3-91fd-ef1f597d10f0:transformer' (Flux) onto cuda device in 0.00s. Total model size: 5679.45MB, VRAM: 5679.45MB (100.0%)
0%| | 0/26 [00:00<?, ?it/s]
4%|3 | 1/26 [00:02<01:06, 2.67s/it]
8%|7 | 2/26 [00:05<01:04, 2.68s/it]
12%|#1 | 3/26 [00:08<01:02, 2.71s/it]
15%|#5 | 4/26 [00:10<01:00, 2.73s/it]
19%|#9 | 5/26 [00:13<00:57, 2.76s/it]
23%|##3 | 6/26 [00:16<00:57, 2.87s/it]
27%|##6 | 7/26 [00:19<00:55, 2.94s/it]
31%|### | 8/26 [00:22<00:53, 2.98s/it]
35%|###4 | 9/26 [00:25<00:49, 2.90s/it]
38%|###8 | 10/26 [00:28<00:45, 2.85s/it]
42%|####2 | 11/26 [00:31<00:42, 2.81s/it]
46%|####6 | 12/26 [00:33<00:38, 2.76s/it]
50%|##### | 13/26 [00:36<00:35, 2.74s/it]
54%|#####3 | 14/26 [00:39<00:32, 2.73s/it]
58%|#####7 | 15/26 [00:41<00:29, 2.71s/it]
62%|######1 | 16/26 [00:44<00:26, 2.69s/it]
65%|######5 | 17/26 [00:47<00:24, 2.68s/it]
69%|######9 | 18/26 [00:49<00:21, 2.71s/it]
73%|#######3 | 19/26 [00:52<00:18, 2.69s/it]
77%|#######6 | 20/26 [00:55<00:16, 2.68s/it]
81%|######## | 21/26 [00:57<00:13, 2.67s/it]
85%|########4 | 22/26 [01:00<00:10, 2.66s/it]
88%|########8 | 23/26 [01:03<00:07, 2.66s/it]
92%|#########2| 24/26 [01:05<00:05, 2.68s/it]
96%|#########6| 25/26 [01:08<00:02, 2.70s/it]
100%|##########| 26/26 [01:11<00:00, 2.80s/it]
100%|##########| 26/26 [01:11<00:00, 2.76s/it]
[2025-05-14 13:50:32,629]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model 'ac0e3aa4-8f5b-42de-88e4-5b4581a3e9ef:vae' (AutoEncoder) onto cuda device in 0.01s. Total model size: 159.87MB, VRAM: 159.87MB (100.0%)
[2025-05-14 13:50:33,776]::[InvokeAI]::INFO --> Graph stats: 89174453-90a3-4c2f-ab85-c3fc5c65a1be
Node Calls Seconds VRAM Used
flux_model_loader 1 0.000s 9.642G
lora_selector 1 0.000s 9.642G
collect 2 0.000s 9.642G
flux_lora_collection_loader 1 0.000s 9.642G
flux_text_encoder 1 0.001s 9.642G
core_metadata 1 0.001s 9.642G
infill_lama 1 0.000s 9.642G
flux_vae_encode 1 0.000s 9.642G
tomask 2 0.000s 9.642G
mask_combine 1 0.000s 9.642G
create_gradient_mask 1 0.000s 9.642G
flux_denoise 1 72.043s 10.555G
flux_vae_decode 1 0.875s 13.225G
expand_mask_with_fade 1 0.000s 9.642G
apply_mask_to_image 1 0.269s 9.642G
TOTAL GRAPH EXECUTION TIME: 73.188s
TOTAL GRAPH WALL TIME: 73.195s
RAM used by InvokeAI process: 6.74G (+0.003G)
RAM used to load models: 5.72G
VRAM in use: 9.642G
RAM cache statistics:
Model cache hits: 3
Model cache misses: 0
Models cached: 8
Models cleared from cache: 0
Cache high water mark: 8.76/0.00G
[2025-05-14 13:50:33,799]::[InvokeAI]::INFO --> Executing queue item 6825, session dcebe8fe-b1f4-4a83-926a-5eab45d44b32
[2025-05-14 13:50:33,840]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model 'a5fe000a-27f0-4ef3-91fd-ef1f597d10f0:transformer' (Flux) onto cuda device in 0.00s. Total model size: 5679.45MB, VRAM: 5679.45MB (100.0%)
0%| | 0/26 [00:00<?, ?it/s]
4%|3 | 1/26 [00:03<01:17, 3.09s/it]
8%|7 | 2/26 [00:06<01:14, 3.09s/it]
12%|#1 | 3/26 [00:09<01:10, 3.07s/it]
15%|#5 | 4/26 [00:12<01:06, 3.01s/it]
19%|#9 | 5/26 [00:14<01:02, 2.95s/it]
23%|##3 | 6/26 [00:17<00:59, 2.95s/it]
27%|##6 | 7/26 [00:20<00:55, 2.93s/it]
31%|### | 8/26 [00:23<00:52, 2.92s/it]
35%|###4 | 9/26 [00:26<00:49, 2.89s/it]
38%|###8 | 10/26 [00:29<00:45, 2.82s/it]
42%|####2 | 11/26 [00:31<00:42, 2.80s/it]
46%|####6 | 12/26 [00:34<00:39, 2.85s/it]
50%|##### | 13/26 [00:37<00:37, 2.89s/it]
54%|#####3 | 14/26 [00:40<00:34, 2.90s/it]
58%|#####7 | 15/26 [00:43<00:31, 2.86s/it]
62%|######1 | 16/26 [00:46<00:28, 2.85s/it]
65%|######5 | 17/26 [00:49<00:25, 2.86s/it]
69%|######9 | 18/26 [00:52<00:22, 2.81s/it]
73%|#######3 | 19/26 [00:54<00:19, 2.77s/it]
77%|#######6 | 20/26 [00:57<00:16, 2.79s/it]
81%|######## | 21/26 [01:00<00:13, 2.76s/it]
85%|########4 | 22/26 [01:02<00:11, 2.75s/it]
88%|########8 | 23/26 [01:05<00:08, 2.73s/it]
92%|#########2| 24/26 [01:08<00:05, 2.77s/it]
96%|#########6| 25/26 [01:11<00:02, 2.76s/it]
100%|##########| 26/26 [01:13<00:00, 2.75s/it]
100%|##########| 26/26 [01:13<00:00, 2.84s/it]
[2025-05-14 13:51:47,858]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model 'ac0e3aa4-8f5b-42de-88e4-5b4581a3e9ef:vae' (AutoEncoder) onto cuda device in 0.01s. Total model size: 159.87MB, VRAM: 159.87MB (100.0%)
[2025-05-14 13:51:48,890]::[InvokeAI]::INFO --> Graph stats: dcebe8fe-b1f4-4a83-926a-5eab45d44b32
Node Calls Seconds VRAM Used
flux_model_loader 1 0.000s 9.642G
lora_selector 1 0.000s 9.642G
collect 2 0.000s 9.642G
flux_lora_collection_loader 1 0.000s 9.642G
flux_text_encoder 1 0.000s 9.642G
core_metadata 1 0.001s 9.642G
infill_lama 1 0.001s 9.642G
flux_vae_encode 1 0.001s 9.642G
tomask 2 0.001s 9.642G
mask_combine 1 0.000s 9.642G
create_gradient_mask 1 0.000s 9.642G
flux_denoise 1 74.023s 10.555G
flux_vae_decode 1 0.783s 13.225G
expand_mask_with_fade 1 0.001s 9.642G
apply_mask_to_image 1 0.243s 9.642G
TOTAL GRAPH EXECUTION TIME: 75.054s
TOTAL GRAPH WALL TIME: 75.059s
RAM used by InvokeAI process: 6.75G (+0.005G)
RAM used to load models: 5.72G
VRAM in use: 9.642G
RAM cache statistics:
Model cache hits: 3
Model cache misses: 0
Models cached: 8
Models cleared from cache: 0
Cache high water mark: 8.76/0.00G
[2025-05-14 16:34:01,973]::[InvokeAI]::INFO --> Executing queue item 6826, session 2f53ed95-8979-4567-9f5c-8d847dab7c60
Loading checkpoint shards: 0%| | 0/2 [00:00<?, ?it/s]
Loading checkpoint shards: 50%|##### | 1/2 [00:00<00:00, 1.13it/s]
Loading checkpoint shards: 100%|##########| 2/2 [00:01<00:00, 1.22it/s]
Loading checkpoint shards: 100%|##########| 2/2 [00:01<00:00, 1.21it/s]
[2025-05-14 16:38:11,667]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model '7844f93e-db3a-4e6d-b92a-e3e6be981e36:text_encoder_2' (T5EncoderModel) onto cuda device in 245.93s. Total model size: 9083.39MB, VRAM: 9083.39MB (100.0%)
[2025-05-14 16:38:11,923]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model '7844f93e-db3a-4e6d-b92a-e3e6be981e36:tokenizer_2' (T5Tokenizer) onto cuda device in 0.00s. Total model size: 0.03MB, VRAM: 0.00MB (0.0%)
[2025-05-14 16:38:29,017]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model '3b3df4c5-776b-4f6c-bfce-274795f2eead:text_encoder' (CLIPTextModel) onto cuda device in 0.26s. Total model size: 469.44MB, VRAM: 469.44MB (100.0%)
[2025-05-14 16:38:29,208]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model '3b3df4c5-776b-4f6c-bfce-274795f2eead:tokenizer' (CLIPTokenizer) onto cuda device in 0.00s. Total model size: 0.00MB, VRAM: 0.00MB (0.0%)
[2025-05-14 16:38:29,964]::[InvokeAI]::INFO --> Loading model from: H:\INVOKE\models.download_cache\https__github.com_sanster_models_releases_download_add_big_lama_big-lama.pt\big-lama.pt
[2025-05-14 16:38:35,952]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model 'H:\INVOKE\models.download_cache\https__github.com_sanster_models_releases_download_add_big_lama_big-lama.pt\big-lama.pt' (RecursiveScriptModule) onto cuda device in 0.24s. Total model size: 194.77MB, VRAM: 194.77MB (100.0%)
[2025-05-14 16:38:37,723]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model 'ac0e3aa4-8f5b-42de-88e4-5b4581a3e9ef:vae' (AutoEncoder) onto cuda device in 0.01s. Total model size: 159.87MB, VRAM: 159.87MB (100.0%)
[2025-05-14 16:39:52,052]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model 'a5fe000a-27f0-4ef3-91fd-ef1f597d10f0:transformer' (Flux) onto cuda device in 61.42s. Total model size: 5679.45MB, VRAM: 5679.45MB (100.0%)
0%| | 0/28 [00:00<?, ?it/s]
4%|3 | 1/28 [00:02<01:07, 2.49s/it]
7%|7 | 2/28 [00:04<00:52, 2.02s/it]
11%|# | 3/28 [00:05<00:46, 1.87s/it]
14%|#4 | 4/28 [00:07<00:43, 1.80s/it]
18%|#7 | 5/28 [00:09<00:39, 1.73s/it]
21%|##1 | 6/28 [00:10<00:36, 1.67s/it]
25%|##5 | 7/28 [00:12<00:35, 1.68s/it]
29%|##8 | 8/28 [00:13<00:32, 1.62s/it]
32%|###2 | 9/28 [00:15<00:30, 1.58s/it]
36%|###5 | 10/28 [00:17<00:28, 1.60s/it]
39%|###9 | 11/28 [00:18<00:26, 1.59s/it]
43%|####2 | 12/28 [00:20<00:25, 1.59s/it]
46%|####6 | 13/28 [00:21<00:24, 1.60s/it]
50%|##### | 14/28 [00:23<00:22, 1.61s/it]
54%|#####3 | 15/28 [00:24<00:20, 1.58s/it]
57%|#####7 | 16/28 [00:26<00:19, 1.60s/it]
61%|###### | 17/28 [00:28<00:17, 1.60s/it]
64%|######4 | 18/28 [00:29<00:15, 1.58s/it]
68%|######7 | 19/28 [00:31<00:14, 1.58s/it]
71%|#######1 | 20/28 [00:32<00:12, 1.61s/it]
75%|#######5 | 21/28 [00:34<00:11, 1.59s/it]
79%|#######8 | 22/28 [00:36<00:09, 1.59s/it]
82%|########2 | 23/28 [00:37<00:08, 1.62s/it]
86%|########5 | 24/28 [00:39<00:06, 1.60s/it]
89%|########9 | 25/28 [00:40<00:04, 1.59s/it]
93%|#########2| 26/28 [00:42<00:03, 1.63s/it]
96%|#########6| 27/28 [00:44<00:01, 1.60s/it]
100%|##########| 28/28 [00:45<00:00, 1.57s/it]
100%|##########| 28/28 [00:45<00:00, 1.63s/it]
H:\INVOKE.venv\Lib\site-packages\invokeai\app\invocations\baseinvocation.py:200: PydanticDeprecatedSince211: Accessing the 'model_fields' attribute on the instance is deprecated. Instead, you should access this attribute from the model class. Deprecated in Pydantic V2.11 to be removed in V3.0.
for field_name, field in self.model_fields.items():
[2025-05-14 16:40:38,680]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model 'ac0e3aa4-8f5b-42de-88e4-5b4581a3e9ef:vae' (AutoEncoder) onto cuda device in 0.01s. Total model size: 159.87MB, VRAM: 159.87MB (100.0%)
H:\INVOKE.venv\Lib\site-packages\invokeai\app\services\shared\graph.py:427: PydanticDeprecatedSince211: Accessing the 'model_fields' attribute on the instance is deprecated. Instead, you should access this attribute from the model class. Deprecated in Pydantic V2.11 to be removed in V3.0.
if edge.destination.field not in destination_node.model_fields:
[2025-05-14 16:40:40,120]::[InvokeAI]::INFO --> Graph stats: 2f53ed95-8979-4567-9f5c-8d847dab7c60
Node Calls Seconds VRAM Used
flux_model_loader 1 0.002s 9.642G
lora_selector 1 0.001s 9.642G
collect 2 0.001s 9.642G
flux_lora_collection_loader 1 0.004s 9.642G
flux_text_encoder 1 267.414s 9.642G
core_metadata 1 0.001s 9.532G
tomask 2 0.208s 9.532G
mask_combine 1 0.074s 9.532G
img_resize 4 0.313s 9.730G
infill_lama 1 7.433s 10.499G
create_gradient_mask 1 0.227s 9.730G
expand_mask_with_fade 1 0.058s 9.730G
flux_vae_encode 1 0.271s 11.241G
flux_denoise 1 120.728s 9.741G
flux_vae_decode 1 1.216s 9.326G
apply_mask_to_image 1 0.041s 7.193G
TOTAL GRAPH EXECUTION TIME: 397.991s
TOTAL GRAPH WALL TIME: 398.000s
RAM used by InvokeAI process: 7.52G (+2.718G)
RAM used to load models: 15.24G
VRAM in use: 7.193G
RAM cache statistics:
Model cache hits: 11
Model cache misses: 7
Models cached: 7
Models cleared from cache: 2
Cache high water mark: 9.71/0.00G
[2025-05-14 16:40:40,143]::[InvokeAI]::INFO --> Executing queue item 6827, session e582f029-c496-4f01-8a98-a1dbb7c9fd47
[2025-05-14 16:40:41,073]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model 'a5fe000a-27f0-4ef3-91fd-ef1f597d10f0:transformer' (Flux) onto cuda device in 0.00s. Total model size: 5679.45MB, VRAM: 5679.45MB (100.0%)
0%| | 0/28 [00:00<?, ?it/s]
4%|3 | 1/28 [00:01<00:41, 1.53s/it]
7%|7 | 2/28 [00:03<00:41, 1.58s/it]
11%|# | 3/28 [00:04<00:39, 1.59s/it]
14%|#4 | 4/28 [00:06<00:37, 1.57s/it]
18%|#7 | 5/28 [00:07<00:36, 1.58s/it]
21%|##1 | 6/28 [00:09<00:35, 1.60s/it]
25%|##5 | 7/28 [00:11<00:33, 1.61s/it]
29%|##8 | 8/28 [00:12<00:32, 1.60s/it]
32%|###2 | 9/28 [00:14<00:30, 1.62s/it]
36%|###5 | 10/28 [00:15<00:28, 1.59s/it]
39%|###9 | 11/28 [00:17<00:26, 1.58s/it]
43%|####2 | 12/28 [00:19<00:25, 1.62s/it]
46%|####6 | 13/28 [00:20<00:23, 1.59s/it]
50%|##### | 14/28 [00:22<00:22, 1.59s/it]
54%|#####3 | 15/28 [00:24<00:21, 1.63s/it]
57%|#####7 | 16/28 [00:25<00:19, 1.60s/it]
61%|###### | 17/28 [00:27<00:17, 1.58s/it]
64%|######4 | 18/28 [00:28<00:16, 1.62s/it]
68%|######7 | 19/28 [00:30<00:14, 1.60s/it]
71%|#######1 | 20/28 [00:31<00:12, 1.58s/it]
75%|#######5 | 21/28 [00:33<00:11, 1.61s/it]
79%|#######8 | 22/28 [00:35<00:09, 1.60s/it]
82%|########2 | 23/28 [00:36<00:07, 1.58s/it]
86%|########5 | 24/28 [00:38<00:06, 1.61s/it]
89%|########9 | 25/28 [00:39<00:04, 1.60s/it]
93%|#########2| 26/28 [00:41<00:03, 1.59s/it]
96%|#########6| 27/28 [00:43<00:01, 1.60s/it]
100%|##########| 28/28 [00:44<00:00, 1.61s/it]
100%|##########| 28/28 [00:44<00:00, 1.60s/it]
[2025-05-14 16:41:26,174]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model 'ac0e3aa4-8f5b-42de-88e4-5b4581a3e9ef:vae' (AutoEncoder) onto cuda device in 0.01s. Total model size: 159.87MB, VRAM: 159.87MB (100.0%)
[2025-05-14 16:41:27,202]::[InvokeAI]::INFO --> Graph stats: e582f029-c496-4f01-8a98-a1dbb7c9fd47
Node Calls Seconds VRAM Used
flux_model_loader 1 0.000s 7.193G
lora_selector 1 0.000s 7.193G
collect 2 0.003s 7.193G
flux_lora_collection_loader 1 0.001s 7.193G
flux_text_encoder 1 0.007s 7.193G
core_metadata 1 0.000s 7.193G
tomask 2 0.000s 7.193G
mask_combine 1 0.000s 7.193G
img_resize 4 0.050s 7.193G
infill_lama 1 0.000s 7.193G
flux_vae_encode 1 0.001s 7.193G
create_gradient_mask 1 0.000s 7.193G
flux_denoise 1 45.987s 7.779G
flux_vae_decode 1 0.873s 9.326G
expand_mask_with_fade 1 0.000s 7.193G
apply_mask_to_image 1 0.095s 7.193G
TOTAL GRAPH EXECUTION TIME: 47.017s
TOTAL GRAPH WALL TIME: 47.024s
RAM used by InvokeAI process: 7.51G (-0.003G)
RAM used to load models: 5.72G
VRAM in use: 7.193G
RAM cache statistics:
Model cache hits: 3
Model cache misses: 0
Models cached: 7
Models cleared from cache: 0
Cache high water mark: 6.37/0.00G
[2025-05-14 16:41:27,229]::[InvokeAI]::INFO --> Executing queue item 6828, session 43345096-3458-4c7c-a14e-4fe17c19fcaa
[2025-05-14 16:41:27,273]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model 'a5fe000a-27f0-4ef3-91fd-ef1f597d10f0:transformer' (Flux) onto cuda device in 0.00s. Total model size: 5679.45MB, VRAM: 5679.45MB (100.0%)
0%| | 0/28 [00:00<?, ?it/s]
4%|3 | 1/28 [00:01<00:43, 1.61s/it]
7%|7 | 2/28 [00:03<00:42, 1.65s/it]
11%|# | 3/28 [00:04<00:40, 1.60s/it]
14%|#4 | 4/28 [00:06<00:38, 1.60s/it]
18%|#7 | 5/28 [00:08<00:37, 1.63s/it]
21%|##1 | 6/28 [00:09<00:35, 1.60s/it]
25%|##5 | 7/28 [00:11<00:33, 1.59s/it]
29%|##8 | 8/28 [00:12<00:32, 1.63s/it]
32%|###2 | 9/28 [00:14<00:30, 1.60s/it]
36%|###5 | 10/28 [00:16<00:28, 1.59s/it]
39%|###9 | 11/28 [00:17<00:27, 1.63s/it]
43%|####2 | 12/28 [00:19<00:25, 1.61s/it]
46%|####6 | 13/28 [00:20<00:23, 1.59s/it]
50%|##### | 14/28 [00:22<00:22, 1.63s/it]
54%|#####3 | 15/28 [00:24<00:21, 1.64s/it]
57%|#####7 | 16/28 [00:25<00:19, 1.61s/it]
61%|###### | 17/28 [00:27<00:18, 1.65s/it]
64%|######4 | 18/28 [00:29<00:16, 1.62s/it]
68%|######7 | 19/28 [00:30<00:14, 1.60s/it]
71%|#######1 | 20/28 [00:32<00:13, 1.63s/it]
75%|#######5 | 21/28 [00:33<00:11, 1.62s/it]
79%|#######8 | 22/28 [00:35<00:09, 1.60s/it]
82%|########2 | 23/28 [00:37<00:08, 1.62s/it]
86%|########5 | 24/28 [00:38<00:06, 1.62s/it]
89%|########9 | 25/28 [00:40<00:04, 1.60s/it]
93%|#########2| 26/28 [00:41<00:03, 1.62s/it]
96%|#########6| 27/28 [00:43<00:01, 1.62s/it]
100%|##########| 28/28 [00:45<00:00, 1.61s/it]
100%|##########| 28/28 [00:45<00:00, 1.61s/it]
[2025-05-14 16:42:12,822]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model 'ac0e3aa4-8f5b-42de-88e4-5b4581a3e9ef:vae' (AutoEncoder) onto cuda device in 0.01s. Total model size: 159.87MB, VRAM: 159.87MB (100.0%)
[2025-05-14 16:42:13,609]::[InvokeAI]::INFO --> Graph stats: 43345096-3458-4c7c-a14e-4fe17c19fcaa
Node Calls Seconds VRAM Used
flux_model_loader 1 0.000s 7.193G
lora_selector 1 0.000s 7.193G
collect 2 0.000s 7.193G
flux_lora_collection_loader 1 0.000s 7.193G
flux_text_encoder 1 0.000s 7.193G
core_metadata 1 0.000s 7.193G
tomask 2 0.001s 7.193G
mask_combine 1 0.001s 7.193G
img_resize 4 0.040s 7.193G
infill_lama 1 0.000s 7.193G
create_gradient_mask 1 0.000s 7.193G
expand_mask_with_fade 1 0.000s 7.193G
flux_vae_encode 1 0.000s 7.193G
flux_denoise 1 45.550s 7.779G
flux_vae_decode 1 0.432s 9.326G
apply_mask_to_image 1 0.057s 7.193G
TOTAL GRAPH EXECUTION TIME: 46.081s
TOTAL GRAPH WALL TIME: 46.090s
RAM used by InvokeAI process: 7.52G (+0.000G)
RAM used to load models: 5.72G
VRAM in use: 7.193G
RAM cache statistics:
Model cache hits: 3
Model cache misses: 0
Models cached: 7
Models cleared from cache: 0
Cache high water mark: 6.37/0.00G
H:\INVOKE.venv\Lib\site-packages\invokeai\app\services\session_queue\session_queue_common.py:151: PydanticDeprecatedSince211: Accessing the 'model_fields' attribute on the instance is deprecated. Instead, you should access this attribute from the model class. Deprecated in Pydantic V2.11 to be removed in V3.0.
if batch_data.field_name not in node.model_fields:
[2025-05-14 16:43:15,173]::[InvokeAI]::INFO --> Executing queue item 6829, session 721416f2-dea2-4db0-afac-cd79d65a7c19
Loading checkpoint shards: 0%| | 0/2 [00:00<?, ?it/s]
Loading checkpoint shards: 50%|##### | 1/2 [00:00<00:00, 1.21it/s]
Loading checkpoint shards: 100%|##########| 2/2 [00:00<00:00, 2.16it/s]
[2025-05-14 16:47:27,885]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model '7844f93e-db3a-4e6d-b92a-e3e6be981e36:text_encoder_2' (T5EncoderModel) onto cuda device in 251.10s. Total model size: 9083.39MB, VRAM: 9083.39MB (100.0%)
[2025-05-14 16:47:28,054]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model '7844f93e-db3a-4e6d-b92a-e3e6be981e36:tokenizer_2' (T5Tokenizer) onto cuda device in 0.00s. Total model size: 0.03MB, VRAM: 0.00MB (0.0%)
[2025-05-14 16:47:44,638]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model '3b3df4c5-776b-4f6c-bfce-274795f2eead:text_encoder' (CLIPTextModel) onto cuda device in 0.28s. Total model size: 469.44MB, VRAM: 469.44MB (100.0%)
[2025-05-14 16:47:44,773]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model '3b3df4c5-776b-4f6c-bfce-274795f2eead:tokenizer' (CLIPTokenizer) onto cuda device in 0.00s. Total model size: 0.00MB, VRAM: 0.00MB (0.0%)
[2025-05-14 16:49:02,557]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model 'a5fe000a-27f0-4ef3-91fd-ef1f597d10f0:transformer' (Flux) onto cuda device in 60.30s. Total model size: 5679.45MB, VRAM: 5679.45MB (100.0%)
0%|          | 0/28 [00:00<?, ?it/s]
100%|##########| 28/28 [00:43<00:00, 1.55s/it]
H:\INVOKE.venv\Lib\site-packages\invokeai\app\invocations\baseinvocation.py:200: PydanticDeprecatedSince211: Accessing the 'model_fields' attribute on the instance is deprecated. Instead, you should access this attribute from the model class. Deprecated in Pydantic V2.11 to be removed in V3.0.
for field_name, field in self.model_fields.items():
[2025-05-14 16:50:41,828]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model 'ac0e3aa4-8f5b-42de-88e4-5b4581a3e9ef:vae' (AutoEncoder) onto cuda device in 0.14s. Total model size: 159.87MB, VRAM: 159.87MB (100.0%)
H:\INVOKE.venv\Lib\site-packages\invokeai\app\services\shared\graph.py:427: PydanticDeprecatedSince211: Accessing the 'model_fields' attribute on the instance is deprecated. Instead, you should access this attribute from the model class. Deprecated in Pydantic V2.11 to be removed in V3.0.
if edge.destination.field not in destination_node.model_fields:
[2025-05-14 16:50:43,111]::[InvokeAI]::INFO --> Graph stats: 721416f2-dea2-4db0-afac-cd79d65a7c19
Node Calls Seconds VRAM Used
flux_model_loader 1 0.001s 7.193G
lora_selector 1 0.001s 7.193G
collect 2 0.001s 9.536G
flux_lora_collection_loader 1 0.000s 7.193G
flux_text_encoder 1 269.786s 9.554G
core_metadata 1 0.000s 9.536G
tomask 2 0.001s 9.536G
mask_combine 1 0.000s 9.536G
img_resize 4 0.155s 9.536G
infill_lama 1 0.000s 9.536G
create_gradient_mask 1 0.000s 9.536G
expand_mask_with_fade 1 0.001s 9.536G
flux_vae_encode 1 0.001s 9.536G
flux_denoise 1 122.137s 9.547G
flux_vae_decode 1 55.514s 9.147G
apply_mask_to_image 1 0.112s 7.014G
TOTAL GRAPH EXECUTION TIME: 447.710s
TOTAL GRAPH WALL TIME: 447.715s
RAM used by InvokeAI process: 7.45G (-0.065G)
RAM used to load models: 15.05G
VRAM in use: 7.014G
RAM cache statistics:
Model cache hits: 9
Model cache misses: 6
Models cached: 6
Models cleared from cache: 2
Cache high water mark: 9.50/0.00G
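The slow runs above show 6 cache misses and 2 models cleared, i.e. the T5 text encoder (~9.1 GB) and the Flux transformer (~5.7 GB) are being evicted and reloaded between queue items, while the high water mark of 9.50/0.00G suggests the RAM cache limit is left at its 0 (auto) default. A possible mitigation, sketched below; the key names are an assumption based on recent InvokeAI 5.x releases and should be checked against the config reference for your installed version:

```yaml
# invokeai.yaml -- sketch only; key names assumed from recent InvokeAI 5.x
# releases, verify against your version's configuration reference.
# Pin the RAM model cache high enough to hold the Flux transformer (~5.7 GB)
# and the T5 text encoder (~9.1 GB) simultaneously, so neither is evicted
# between queue items:
max_cache_ram_gb: 18
# Optional cap for the VRAM cache on a 24 GB RTX 3090:
max_cache_vram_gb: 20
```

If these keys are accepted by your version, the "Model cache misses" counter in the graph stats should stay at 0 across consecutive runs with the same model.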
[2025-05-14 16:50:43,463]::[InvokeAI]::INFO --> Executing queue item 6830, session da9e79bf-0fe1-4a16-9496-b612423095f0
[2025-05-14 16:50:43,541]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model 'a5fe000a-27f0-4ef3-91fd-ef1f597d10f0:transformer' (Flux) onto cuda device in 0.00s. Total model size: 5679.45MB, VRAM: 5679.45MB (100.0%)
0%|          | 0/28 [00:00<?, ?it/s]
100%|##########| 28/28 [00:44<00:00, 1.61s/it]
[2025-05-14 16:51:28,865]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model 'ac0e3aa4-8f5b-42de-88e4-5b4581a3e9ef:vae' (AutoEncoder) onto cuda device in 0.01s. Total model size: 159.87MB, VRAM: 159.87MB (100.0%)
[2025-05-14 16:51:29,599]::[InvokeAI]::INFO --> Graph stats: da9e79bf-0fe1-4a16-9496-b612423095f0
Node Calls Seconds VRAM Used
flux_model_loader 1 0.001s 7.014G
lora_selector 1 0.008s 7.014G
collect 2 0.000s 7.014G
flux_lora_collection_loader 1 0.000s 7.014G
flux_text_encoder 1 0.000s 7.014G
core_metadata 1 0.000s 7.014G
tomask 2 0.000s 7.014G
mask_combine 1 0.000s 7.014G
img_resize 4 0.072s 7.014G
infill_lama 1 0.000s 7.014G
flux_vae_encode 1 0.000s 7.014G
create_gradient_mask 1 0.000s 7.014G
flux_denoise 1 45.356s 7.599G
flux_vae_decode 1 0.483s 9.147G
expand_mask_with_fade 1 0.001s 7.014G
apply_mask_to_image 1 0.060s 7.014G
TOTAL GRAPH EXECUTION TIME: 45.981s
TOTAL GRAPH WALL TIME: 45.989s
RAM used by InvokeAI process: 7.45G (+0.003G)
RAM used to load models: 5.72G
VRAM in use: 7.014G
RAM cache statistics:
Model cache hits: 3
Model cache misses: 0
Models cached: 6
Models cleared from cache: 0
Cache high water mark: 6.18/0.00G
[2025-05-14 16:51:29,639]::[InvokeAI]::INFO --> Executing queue item 6831, session 1be393e6-293f-467b-878f-c687ea070b2b
[2025-05-14 16:51:29,694]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model 'a5fe000a-27f0-4ef3-91fd-ef1f597d10f0:transformer' (Flux) onto cuda device in 0.00s. Total model size: 5679.45MB, VRAM: 5679.45MB (100.0%)
0%|          | 0/28 [00:00<?, ?it/s]
100%|##########| 28/28 [00:45<00:00, 1.62s/it]
[2025-05-14 16:52:15,412]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model 'ac0e3aa4-8f5b-42de-88e4-5b4581a3e9ef:vae' (AutoEncoder) onto cuda device in 0.01s. Total model size: 159.87MB, VRAM: 159.87MB (100.0%)
[2025-05-14 16:52:16,203]::[InvokeAI]::INFO --> Graph stats: 1be393e6-293f-467b-878f-c687ea070b2b
Node Calls Seconds VRAM Used
flux_model_loader 1 0.000s 7.014G
lora_selector 1 0.000s 7.014G
collect 2 0.001s 7.014G
flux_lora_collection_loader 1 0.000s 7.014G
flux_text_encoder 1 0.000s 7.014G
core_metadata 1 0.000s 7.014G
tomask 2 0.000s 7.014G
mask_combine 1 0.000s 7.014G
img_resize 4 0.053s 7.014G
infill_lama 1 0.001s 7.014G
flux_vae_encode 1 0.001s 7.014G
create_gradient_mask 1 0.000s 7.014G
flux_denoise 1 45.734s 7.599G
flux_vae_decode 1 0.474s 9.147G
expand_mask_with_fade 1 0.000s 7.014G
apply_mask_to_image 1 0.149s 7.014G
TOTAL GRAPH EXECUTION TIME: 46.413s
TOTAL GRAPH WALL TIME: 46.420s
RAM used by InvokeAI process: 7.45G (-0.002G)
RAM used to load models: 5.72G
VRAM in use: 7.014G
RAM cache statistics:
Model cache hits: 3
Model cache misses: 0
Models cached: 6
Models cleared from cache: 0
Cache high water mark: 6.18/0.00G
[2025-05-14 16:52:56,202]::[InvokeAI]::INFO --> Executing queue item 6832, session b8a012cd-0dee-4828-bbb4-56cb6279874f
Loading checkpoint shards: 0%| | 0/2 [00:00<?, ?it/s]
Loading checkpoint shards: 50%|##### | 1/2 [00:00<00:00, 1.75it/s]
Loading checkpoint shards: 100%|##########| 2/2 [00:00<00:00, 3.36it/s]
[2025-05-14 16:56:55,134]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model '7844f93e-db3a-4e6d-b92a-e3e6be981e36:text_encoder_2' (T5EncoderModel) onto cuda device in 237.67s. Total model size: 9083.39MB, VRAM: 9083.39MB (100.0%)
[2025-05-14 16:56:55,303]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model '7844f93e-db3a-4e6d-b92a-e3e6be981e36:tokenizer_2' (T5Tokenizer) onto cuda device in 0.00s. Total model size: 0.03MB, VRAM: 0.00MB (0.0%)
[2025-05-14 16:57:11,986]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model '3b3df4c5-776b-4f6c-bfce-274795f2eead:text_encoder' (CLIPTextModel) onto cuda device in 0.25s. Total model size: 469.44MB, VRAM: 469.44MB (100.0%)
[2025-05-14 16:57:12,113]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model '3b3df4c5-776b-4f6c-bfce-274795f2eead:tokenizer' (CLIPTokenizer) onto cuda device in 0.00s. Total model size: 0.00MB, VRAM: 0.00MB (0.0%)
[2025-05-14 16:58:26,798]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model 'a5fe000a-27f0-4ef3-91fd-ef1f597d10f0:transformer' (Flux) onto cuda device in 60.06s. Total model size: 5679.45MB, VRAM: 5679.45MB (100.0%)
0%|          | 0/28 [00:00<?, ?it/s]
100%|##########| 28/28 [00:44<00:00, 1.58s/it]
[2025-05-14 17:00:07,264]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model 'ac0e3aa4-8f5b-42de-88e4-5b4581a3e9ef:vae' (AutoEncoder) onto cuda device in 0.12s. Total model size: 159.87MB, VRAM: 159.87MB (100.0%)
[2025-05-14 17:00:08,309]::[InvokeAI]::INFO --> Graph stats: b8a012cd-0dee-4828-bbb4-56cb6279874f
Node Calls Seconds VRAM Used
flux_model_loader 1 0.000s 7.014G
lora_selector 1 0.000s 7.014G
collect 2 0.001s 9.542G
flux_lora_collection_loader 1 0.000s 7.014G
flux_text_encoder 1 256.246s 9.560G
core_metadata 1 0.000s 9.542G
tomask 2 0.001s 9.542G
mask_combine 1 0.001s 9.542G
img_resize 4 0.093s 9.542G
infill_lama 1 0.000s 9.542G
create_gradient_mask 1 0.000s 9.542G
expand_mask_with_fade 1 0.000s 9.542G
flux_vae_encode 1 0.000s 9.542G
flux_denoise 1 118.562s 9.553G
flux_vae_decode 1 56.903s 9.147G
apply_mask_to_image 1 0.074s 7.014G
TOTAL GRAPH EXECUTION TIME: 431.881s
TOTAL GRAPH WALL TIME: 431.888s
RAM used by InvokeAI process: 7.45G (+0.001G)
RAM used to load models: 15.05G
VRAM in use: 7.014G
RAM cache statistics:
Model cache hits: 9
Model cache misses: 6
Models cached: 6
Models cleared from cache: 2
Cache high water mark: 9.50/0.00G
[2025-05-14 17:00:08,639]::[InvokeAI]::INFO --> Executing queue item 6833, session 037d337b-1486-41ad-9b07-3614f55b6b29
[2025-05-14 17:00:08,714]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model 'a5fe000a-27f0-4ef3-91fd-ef1f597d10f0:transformer' (Flux) onto cuda device in 0.00s. Total model size: 5679.45MB, VRAM: 5679.45MB (100.0%)
0%|          | 0/28 [00:00<?, ?it/s]
100%|##########| 28/28 [00:43<00:00, 1.55s/it]
[2025-05-14 17:00:52,532]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model 'ac0e3aa4-8f5b-42de-88e4-5b4581a3e9ef:vae' (AutoEncoder) onto cuda device in 0.01s. Total model size: 159.87MB, VRAM: 159.87MB (100.0%)
[2025-05-14 17:00:53,352]::[InvokeAI]::INFO --> Graph stats: 037d337b-1486-41ad-9b07-3614f55b6b29
Node Calls Seconds VRAM Used
flux_model_loader 1 0.001s 7.014G
lora_selector 1 0.006s 7.014G
collect 2 0.000s 7.014G
flux_lora_collection_loader 1 0.000s 7.014G
flux_text_encoder 1 0.000s 7.014G
core_metadata 1 0.001s 7.014G
tomask 2 0.001s 7.014G
mask_combine 1 0.000s 7.014G
img_resize 4 0.104s 7.014G
infill_lama 1 0.000s 7.014G
create_gradient_mask 1 0.000s 7.014G
expand_mask_with_fade 1 0.000s 7.014G
flux_vae_encode 1 0.000s 7.014G
flux_denoise 1 43.847s 7.599G
flux_vae_decode 1 0.470s 9.147G
apply_mask_to_image 1 0.127s 7.014G
TOTAL GRAPH EXECUTION TIME: 44.557s
TOTAL GRAPH WALL TIME: 44.564s
RAM used by InvokeAI process: 7.45G (-0.007G)
RAM used to load models: 5.72G
VRAM in use: 7.014G
RAM cache statistics:
Model cache hits: 3
Model cache misses: 0
Models cached: 6
Models cleared from cache: 0
Cache high water mark: 6.18/0.00G
[2025-05-14 17:00:53,412]::[InvokeAI]::INFO --> Executing queue item 6834, session c26e6d6e-1ccb-42f5-be07-8d9c3b15b01d
[2025-05-14 17:00:53,455]::[ModelManagerService]::INFO --> [MODEL CACHE] Loaded model 'a5fe000a-27f0-4ef3-91fd-ef1f597d10f0:transformer' (Flux) onto cuda device in 0.00s. Total model size: 5679.45MB, VRAM: 5679.45MB (100.0%)
What you expected to happen
Generation should proceed normally, without the models being reloaded on every run.
How to reproduce the problem
No response
Additional context
No response
Discord username
No response