Commit 2bdb429

heathen711 authored and psychedelicious committed

run "make frontend-typegen"

1 parent 5ba85ba

File tree

1 file changed: +2 -3 lines changed
  • invokeai/frontend/web/src/services/api


invokeai/frontend/web/src/services/api/schema.ts

Lines changed: 2 additions & 3 deletions
@@ -11991,7 +11991,7 @@ export type components = {
      * vram: DEPRECATED: This setting is no longer used. It has been replaced by `max_cache_vram_gb`, but most users will not need to use this config since automatic cache size limits should work well in most cases. This config setting will be removed once the new model cache behavior is stable.
      * lazy_offload: DEPRECATED: This setting is no longer used. Lazy-offloading is enabled by default. This config setting will be removed once the new model cache behavior is stable.
      * pytorch_cuda_alloc_conf: Configure the Torch CUDA memory allocator. This will impact peak reserved VRAM usage and performance. Setting to "backend:cudaMallocAsync" works well on many systems. The optimal configuration is highly dependent on the system configuration (device type, VRAM, CUDA driver version, etc.), so must be tuned experimentally.
-     * device: Preferred execution device. `auto` will choose the device depending on the hardware platform and the installed torch capabilities.<br>Valid values: `auto`, `cpu`, `cuda`, `cuda:1`, `mps`
+     * device: Preferred execution device. `auto` will choose the device depending on the hardware platform and the installed torch capabilities.<br>Valid values: `auto`, `cpu`, `cuda`, `mps`, `cuda:N` (where N is a device number)
      * precision: Floating point precision. `float16` will consume half the memory of `float32` but produce slightly lower-quality images. The `auto` setting will guess the proper precision based on your video card and operating system.<br>Valid values: `auto`, `float16`, `bfloat16`, `float32`
      * sequential_guidance: Whether to calculate guidance in serial instead of in parallel, lowering memory requirements.
      * attention_type: Attention type.<br>Valid values: `auto`, `normal`, `xformers`, `sliced`, `torch-sdp`
@@ -12268,9 +12268,8 @@ export type components = {
       * Device
       * @description Preferred execution device. `auto` will choose the device depending on the hardware platform and the installed torch capabilities.
       * @default auto
-      * @enum {string}
       */
-     device?: "auto" | "cpu" | "cuda" | "cuda:1" | "cuda:2" | "cuda:3" | "mps";
+     device?: string;
      /**
       * Precision
       * @description Floating point precision. `float16` will consume half the memory of `float32` but produce slightly lower-quality images. The `auto` setting will guess the proper precision based on your video card and operating system.
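Because the generated `device` field is now a plain `string` rather than a closed enum, the type system no longer rejects arbitrary CUDA device indices like `cuda:7`. A minimal sketch of how a client could still validate device strings at runtime, assuming the accepted forms are `auto`, `cpu`, `mps`, `cuda`, and `cuda:N` as described in the updated docstring (the pattern and function name below are illustrative, not part of the commit):

```typescript
// Hypothetical runtime check for device strings now that the generated
// type is `device?: string`. Accepts "auto", "cpu", "mps", "cuda", and
// "cuda:N" where N is a non-negative integer device index.
const DEVICE_PATTERN = /^(auto|cpu|mps|cuda(:\d+)?)$/;

function isValidDevice(device: string): boolean {
  return DEVICE_PATTERN.test(device);
}
```

This keeps the flexibility the commit introduces (any `cuda:N` index, not just `cuda:1` through `cuda:3`) while still catching obvious typos before they reach the backend.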
