Description
Is there an existing issue for this problem?
- I have searched the existing issues
Operating system
Linux
GPU vendor
AMD (ROCm)
GPU model
RX 6800
GPU VRAM
16 GB
Version number
5.10.1
Browser
Librewolf 137.0.2-1
Python dependencies
{
  "version": "5.10.1",
  "dependencies": {
    "accelerate": "1.6.0",
    "compel": "2.0.2",
    "cuda": null,
    "diffusers": "0.33.0",
    "numpy": "1.26.3",
    "opencv": "4.9.0.80",
    "onnx": "1.16.1",
    "pillow": "11.0.0",
    "python": "3.12.9",
    "torch": "2.6.0+rocm6.2.4",
    "torchvision": "0.21.0+rocm6.2.4",
    "transformers": "4.51.3",
    "xformers": null
  },
  "config": {
    "schema_version": "4.0.2",
    "legacy_models_yaml_path": null,
    "host": "127.0.0.1",
    "port": 9090,
    "allow_origins": [],
    "allow_credentials": true,
    "allow_methods": [""],
    "allow_headers": [""],
    "ssl_certfile": null,
    "ssl_keyfile": null,
    "log_tokenization": false,
    "patchmatch": true,
    "models_dir": "models",
    "convert_cache_dir": "models/.convert_cache",
    "download_cache_dir": "models/.download_cache",
    "legacy_conf_dir": "configs",
    "db_dir": "databases",
    "outputs_dir": "outputs",
    "custom_nodes_dir": "nodes",
    "style_presets_dir": "style_presets",
    "workflow_thumbnails_dir": "workflow_thumbnails",
    "log_handlers": ["console"],
    "log_format": "color",
    "log_level": "info",
    "log_sql": false,
    "log_level_network": "warning",
    "use_memory_db": false,
    "dev_reload": false,
    "profile_graphs": false,
    "profile_prefix": null,
    "profiles_dir": "profiles",
    "max_cache_ram_gb": null,
    "max_cache_vram_gb": null,
    "log_memory_usage": false,
    "device_working_mem_gb": 4,
    "enable_partial_loading": true,
    "keep_ram_copy_of_weights": true,
    "ram": null,
    "vram": null,
    "lazy_offload": true,
    "pytorch_cuda_alloc_conf": "backend:hipMallocAsync",
    "device": "auto",
    "precision": "auto",
    "sequential_guidance": false,
    "attention_type": "auto",
    "attention_slice_size": "auto",
    "force_tiled_decode": false,
    "pil_compress_level": 1,
    "max_queue_size": 10000,
    "clear_queue_on_startup": false,
    "allow_nodes": null,
    "deny_nodes": null,
    "node_cache_size": 512,
    "hashing_algorithm": "blake3_single",
    "remote_api_tokens": null,
    "scan_models_on_startup": false
  },
  "set_config_fields": [
    "enable_partial_loading",
    "device_working_mem_gb",
    "pytorch_cuda_alloc_conf",
    "legacy_models_yaml_path"
  ]
}
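Side note on the `"cuda": null` entry above: that is expected on a ROCm wheel of torch (`torch.version.cuda` is `None`), and I believe it is also what makes the bitsandbytes import in the log below fail. A minimal sketch of that crash, with a plain `None` standing in for `torch.version.cuda` so it runs without torch installed:

```python
# Stand-in for torch.version.cuda on a ROCm build of torch, where it is None.
cuda_version = None

# bitsandbytes' get_cuda_version_tuple() does essentially this, with no
# None check, which produces the AttributeError seen in the log.
try:
    major, minor = map(int, cuda_version.split("."))
except AttributeError as exc:
    print(f"AttributeError: {exc}")
```

That error is cosmetic noise here (bitsandbytes is not needed for this generation path), but it may be worth a separate report.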
What happened
Image generation fails with an SSL error whenever InvokeAI tries to connect to HuggingFace. All my models are in safetensors format and fully local. I can log in to HF in the browser as usual with no errors, so the problem seems specific to InvokeAI's connection. It happens with or without headless mode.
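For anyone triaging: per the traceback below, the failure happens already at `SSLContext` creation inside urllib3, before any bytes go to huggingface.co, which suggests a broken Python/OpenSSL pairing in the venv rather than a network problem. A quick stdlib-only check I would run inside InvokeAI's `.venv` (my suggestion, not from InvokeAI docs):

```python
import ssl

# Which OpenSSL this interpreter was compiled against vs. which library it
# actually loaded at runtime; a mismatch here can produce "unknown error".
print("compiled against:", ssl.OPENSSL_VERSION_INFO)
print("runtime library: ", ssl.OPENSSL_VERSION)

# Roughly the same call path that fails in the traceback
# (urllib3's create_urllib3_context ends in SSLContext.__new__).
# If this raises ssl.SSLError too, the fault is in the Python install,
# not in InvokeAI.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
print("SSLContext created OK")
```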
What you expected to happen
I expected image generation to work without error, whether or not an external website is reachable, since all my models are local safetensors files.
How to reproduce the problem
No response
Additional context
Log:
Starting up...
Started Invoke process with PID: 83590
amdgpu.ids: No such file or directory
[2025-04-27 01:42:50,402]::[InvokeAI]::INFO --> PyTorch CUDA memory allocator: native
[2025-04-27 01:42:50,405]::[InvokeAI]::INFO --> Using torch device: AMD Radeon Graphics
Could not load bitsandbytes native library: 'NoneType' object has no attribute 'split'
Traceback (most recent call last):
File "/InvokeAI/.venv/lib/python3.12/site-packages/bitsandbytes/cextension.py", line 85, in <module>
lib = get_native_library()
^^^^^^^^^^^^^^^^^^^^
File "/InvokeAI/.venv/lib/python3.12/site-packages/bitsandbytes/cextension.py", line 64, in get_native_library
cuda_specs = get_cuda_specs()
^^^^^^^^^^^^^^^^
File "/InvokeAI/.venv/lib/python3.12/site-packages/bitsandbytes/cuda_specs.py", line 39, in get_cuda_specs
cuda_version_string=(get_cuda_version_string()),
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/InvokeAI/.venv/lib/python3.12/site-packages/bitsandbytes/cuda_specs.py", line 29, in get_cuda_version_string
major, minor = get_cuda_version_tuple()
^^^^^^^^^^^^^^^^^^^^^^^^
File "/InvokeAI/.venv/lib/python3.12/site-packages/bitsandbytes/cuda_specs.py", line 24, in get_cuda_version_tuple
major, minor = map(int, torch.version.cuda.split("."))
^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'split'
CUDA Setup failed despite CUDA being available. Please run the following command to get more information:
python -m bitsandbytes
Inspect the output of the command and see if you can locate CUDA libraries. You might need to add them
to your LD_LIBRARY_PATH. If you suspect a bug, please take the information from python -m bitsandbytes
and open an issue at: https://github.com/bitsandbytes-foundation/bitsandbytes/issues
[2025-04-27 01:42:51,730]::[InvokeAI]::INFO --> cuDNN version: 3002000
>> patchmatch.patch_match: INFO - Compiling and loading c extensions from "/InvokeAI/.venv/lib/python3.12/site-packages/patchmatch".
>> patchmatch.patch_match: ERROR - patchmatch failed to load or compile (/usr/lib64/libtiff.so.6: undefined symbol: jpeg12_write_raw_data, version LIBJPEG_8.0).
>> patchmatch.patch_match: INFO - Refer to https://invoke-ai.github.io/InvokeAI/installation/060_INSTALL_PATCHMATCH/ for installation instructions.
[2025-04-27 01:42:56,012]::[InvokeAI]::INFO --> Patchmatch not loaded (nonfatal)
[2025-04-27 01:42:56,318]::[InvokeAI]::INFO --> Loading node pack clothing-mask-node
[2025-04-27 01:42:56,320]::[InvokeAI]::INFO --> Loading node pack simple-skin-detection-node
[2025-04-27 01:42:56,323]::[InvokeAI]::INFO --> Loading node pack adapters-linked-nodes
[2025-04-27 01:42:56,333]::[InvokeAI]::INFO --> Loaded 3 node packs from /InvokeAI/nodes: clothing-mask-node, simple-skin-detection-node, adapters-linked-nodes
[2025-04-27 01:42:56,342]::[InvokeAI]::INFO --> InvokeAI version 5.10.1
[2025-04-27 01:42:56,342]::[InvokeAI]::INFO --> Root directory = /InvokeAI
[2025-04-27 01:42:56,342]::[InvokeAI]::INFO --> Initializing database at /InvokeAI/databases/invokeai.db
[2025-04-27 01:42:56,348]::[ModelManagerService]::INFO --> [MODEL CACHE] Calculated model RAM cache size: 12272.00 MB. Heuristics applied: [1, 2].
[2025-04-27 01:42:56,370]::[InvokeAI]::INFO --> Pruned 2 finished queue items
[2025-04-27 01:42:56,420]::[InvokeAI]::INFO --> Cleaned database (freed 0.09MB)
[2025-04-27 01:42:56,420]::[InvokeAI]::INFO --> Invoke running on http://127.0.0.1:9090 (Press CTRL+C to quit)
[2025-04-27 01:43:03,277]::[InvokeAI]::INFO --> Executing queue item 1405, session be638699-710a-4a11-ae92-9b6e79632704
[2025-04-27 01:43:03,677]::[InvokeAI]::ERROR --> Error while invoking session be638699-710a-4a11-ae92-9b6e79632704, invocation d7498ed4-9ccb-4f95-8597-153a594a0544 (sdxl_compel_prompt): (MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /api/models/stabilityai/stable-diffusion-xl-base-1.0/revision/main (Caused by SSLError(SSLError(0, 'unknown error (_ssl.c:3036)')))"), '(Request ID: 36609ce2-60bb-4f38-a4f3-f5e3097d3161)')
[2025-04-27 01:43:03,677]::[InvokeAI]::ERROR --> Traceback (most recent call last):
File "/InvokeAI/.venv/lib/python3.12/site-packages/urllib3/connectionpool.py", line 703, in urlopen
httplib_response = self._make_request(
^^^^^^^^^^^^^^^^^^^
File "/InvokeAI/.venv/lib/python3.12/site-packages/urllib3/connectionpool.py", line 386, in _make_request
self._validate_conn(conn)
File "/InvokeAI/.venv/lib/python3.12/site-packages/urllib3/connectionpool.py", line 1042, in _validate_conn
conn.connect()
File "/InvokeAI/.venv/lib/python3.12/site-packages/urllib3/connection.py", line 395, in connect
self.ssl_context = create_urllib3_context(
^^^^^^^^^^^^^^^^^^^^^^^
File "/InvokeAI/.venv/lib/python3.12/site-packages/urllib3/util/ssl_.py", line 290, in create_urllib3_context
context = SSLContext(ssl_version)
^^^^^^^^^^^^^^^^^^^^^^^
File "/.local/share/uv/python/cpython-3.12.9-linux-x86_64-gnu/lib/python3.12/ssl.py", line 438, in __new__
self = _SSLContext.__new__(cls, protocol)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ssl.SSLError: unknown error (_ssl.c:3036)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/InvokeAI/.venv/lib/python3.12/site-packages/requests/adapters.py", line 489, in send
resp = conn.urlopen(
^^^^^^^^^^^^^
File "/InvokeAI/.venv/lib/python3.12/site-packages/urllib3/connectionpool.py", line 787, in urlopen
retries = retries.increment(
^^^^^^^^^^^^^^^^^^
File "/InvokeAI/.venv/lib/python3.12/site-packages/urllib3/util/retry.py", line 592, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /api/models/stabilityai/stable-diffusion-xl-base-1.0/revision/main (Caused by SSLError(SSLError(0, 'unknown error (_ssl.c:3036)')))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/InvokeAI/.venv/lib/python3.12/site-packages/invokeai/app/services/session_processor/session_processor_default.py", line 129, in run_node
output = invocation.invoke_internal(context=context, services=self._services)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/InvokeAI/.venv/lib/python3.12/site-packages/invokeai/app/invocations/baseinvocation.py", line 212, in invoke_internal
output = self.invoke(context)
^^^^^^^^^^^^^^^^^^^^
File "/InvokeAI/.venv/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/InvokeAI/.venv/lib/python3.12/site-packages/invokeai/app/invocations/compel.py", line 268, in invoke
c1, c1_pooled = self.run_clip_compel(context, self.clip, self.prompt, False, "lora_te1_", zero_on_empty=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/InvokeAI/.venv/lib/python3.12/site-packages/invokeai/app/invocations/compel.py", line 141, in run_clip_compel
text_encoder_info = context.models.load(clip_field.text_encoder)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/InvokeAI/.venv/lib/python3.12/site-packages/invokeai/app/services/shared/invocation_context.py", line 394, in load
return self._services.model_manager.load.load_model(model, submodel_type)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/InvokeAI/.venv/lib/python3.12/site-packages/invokeai/app/services/model_load/model_load_default.py", line 71, in load_model
).load_model(model_config, submodel_type)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/InvokeAI/.venv/lib/python3.12/site-packages/invokeai/backend/model_manager/load/load_default.py", line 56, in load_model
cache_record = self._load_and_cache(model_config, submodel_type)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/InvokeAI/.venv/lib/python3.12/site-packages/invokeai/backend/model_manager/load/load_default.py", line 77, in _load_and_cache
loaded_model = self._load_model(config, submodel_type)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/InvokeAI/.venv/lib/python3.12/site-packages/invokeai/backend/model_manager/load/model_loaders/stable_diffusion.py", line 62, in _load_model
return self._load_from_singlefile(config, submodel_type)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/InvokeAI/.venv/lib/python3.12/site-packages/invokeai/backend/model_manager/load/model_loaders/stable_diffusion.py", line 125, in _load_from_singlefile
pipeline = load_class.from_single_file(config.path, torch_dtype=self._torch_dtype)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/InvokeAI/.venv/lib/python3.12/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/InvokeAI/.venv/lib/python3.12/site-packages/diffusers/loaders/single_file.py", line 417, in from_single_file
cached_model_config_path = _download_diffusers_model_config_from_hub(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/InvokeAI/.venv/lib/python3.12/site-packages/diffusers/loaders/single_file.py", line 252, in _download_diffusers_model_config_from_hub
cached_model_path = snapshot_download(
^^^^^^^^^^^^^^^^^^
File "/InvokeAI/.venv/lib/python3.12/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/InvokeAI/.venv/lib/python3.12/site-packages/huggingface_hub/_snapshot_download.py", line 155, in snapshot_download
repo_info = api.repo_info(repo_id=repo_id, repo_type=repo_type, revision=revision, token=token)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/InvokeAI/.venv/lib/python3.12/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/InvokeAI/.venv/lib/python3.12/site-packages/huggingface_hub/hf_api.py", line 2807, in repo_info
return method(
^^^^^^^
File "/InvokeAI/.venv/lib/python3.12/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/InvokeAI/.venv/lib/python3.12/site-packages/huggingface_hub/hf_api.py", line 2591, in model_info
r = get_session().get(path, headers=headers, timeout=timeout, params=params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/InvokeAI/.venv/lib/python3.12/site-packages/requests/sessions.py", line 600, in get
return self.request("GET", url, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/InvokeAI/.venv/lib/python3.12/site-packages/requests/sessions.py", line 587, in request
resp = self.send(prep, **send_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/InvokeAI/.venv/lib/python3.12/site-packages/requests/sessions.py", line 701, in send
r = adapter.send(request, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/InvokeAI/.venv/lib/python3.12/site-packages/huggingface_hub/utils/_http.py", line 96, in send
return super().send(request, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/InvokeAI/.venv/lib/python3.12/site-packages/requests/adapters.py", line 563, in send
raise SSLError(e, request=request)
requests.exceptions.SSLError: (MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /api/models/stabilityai/stable-diffusion-xl-base-1.0/revision/main (Caused by SSLError(SSLError(0, 'unknown error (_ssl.c:3036)')))"), '(Request ID: 36609ce2-60bb-4f38-a4f3-f5e3097d3161)')
/InvokeAI/.venv/lib/python3.12/site-packages/invokeai/app/services/shared/graph.py:427: PydanticDeprecatedSince211: Accessing the 'model_fields' attribute on the instance is deprecated. Instead, you should access this attribute from the model class. Deprecated in Pydantic V2.11 to be removed in V3.0.
if edge.destination.field not in destination_node.model_fields:
[2025-04-27 01:43:03,688]::[InvokeAI]::INFO --> Graph stats: be638699-710a-4a11-ae92-9b6e79632704
Node Calls Seconds VRAM Used
sdxl_model_loader 1 0.003s 0.000G
sdxl_compel_prompt 1 0.388s 0.000G
TOTAL GRAPH EXECUTION TIME: 0.391s
TOTAL GRAPH WALL TIME: 0.392s
RAM used by InvokeAI process: 1.03G (+0.074G)
RAM used to load models: 0.00G
RAM cache statistics:
Model cache hits: 0
Model cache misses: 1
Models cached: 0
Models cleared from cache: 0
Cache high water mark: 0.00/0.00G
Shutting down...
[2025-04-27 01:43:10,356]::[ModelInstallService]::INFO --> Installer thread 139853424985792 exiting
Process exited with signal SIGTERM
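Possible workaround until the underlying SSL issue is resolved (untested on my side for the diffusers `from_single_file` path): `huggingface_hub` honors the `HF_HUB_OFFLINE` environment variable, which should make it use only already-cached files instead of issuing the `repo_info` request that triggers the error. This assumes the model's config files are already in the local HF cache.

```shell
# Assumption: huggingface_hub treats HF_HUB_OFFLINE=1 as "never contact
# huggingface.co"; cached config files are used, uncached ones fail fast.
export HF_HUB_OFFLINE=1
echo "HF_HUB_OFFLINE=$HF_HUB_OFFLINE"
# then start InvokeAI as usual, e.g.:
# invokeai-web
```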
Discord username
No response