
Commit 67fbd65

fix spell issue
1 parent 64bb7db commit 67fbd65

2 files changed: 3 additions, 1 deletion

.ci/spellcheck/.pyspelling.wordlist.txt

Lines changed: 2 additions & 0 deletions
@@ -454,6 +454,8 @@ KiTS
 Kokoro
 Koltun
 Kondate
+Kontext
+kontext
 Kosaraju
 kosmos
 Kosmos

notebooks/flux.1-kontext/flux.1-kontext.ipynb

Lines changed: 1 addition & 1 deletion
@@ -393,7 +393,7 @@
 "id": "cab6790e",
 "metadata": {},
 "source": [
-"OpenVINO integration with Optimum Intel provides ready-to-use API for model inference that can be used for smooth integration with transformers-based solutions. For loading Flux.1 Kontest model, we will use `OVFluxKontextPipeline` class that have compatible interface with Diffuers `FluxKontextPipeline` implementation. For loading a model, `from_pretrained` method should be used. It accepts path to the model directory or model_id from HuggingFace hub (if model is not converted to OpenVINO format, conversion will be triggered automatically). Additionally, we can provide an inference device, quantization config (if model has not been quantized yet) and device-specific OpenVINO Runtime configuration. More details about model inference with Optimum Intel can be found in [documentation](https://huggingface.co/docs/optimum/intel/openvino/inference)."
+"OpenVINO integration with Optimum Intel provides ready-to-use API for model inference that can be used for smooth integration with transformers-based solutions. For loading Flux.1 Kontext model, we will use `OVFluxKontextPipeline` class that have compatible interface with Diffusers `FluxKontextPipeline` implementation. For loading a model, `from_pretrained` method should be used. It accepts path to the model directory or model_id from HuggingFace hub (if model is not converted to OpenVINO format, conversion will be triggered automatically). Additionally, we can provide an inference device, quantization config (if model has not been quantized yet) and device-specific OpenVINO Runtime configuration. More details about model inference with Optimum Intel can be found in [documentation](https://huggingface.co/docs/optimum/intel/openvino/inference)."
 ]
 },
 {
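
The corrected cell above describes the loading flow for the Flux.1 Kontext pipeline with Optimum Intel. A minimal sketch of that flow follows; the model id, image path, prompt, and device are illustrative assumptions and are not taken from the commit.

from diffusers.utils import load_image
from optimum.intel import OVFluxKontextPipeline

# Illustrative model id (assumption); a local OpenVINO model directory also works.
model_id = "black-forest-labs/FLUX.1-Kontext-dev"

# `from_pretrained` accepts a model directory or a HuggingFace Hub model id; if the
# model is not yet in OpenVINO format, conversion is triggered automatically.
# An inference device can be passed as well.
pipe = OVFluxKontextPipeline.from_pretrained(model_id, device="CPU")

# Image-editing call using the Diffusers-compatible `FluxKontextPipeline` interface.
image = load_image("input.png")  # illustrative input path (assumption)
result = pipe(image=image, prompt="make the sky look like a sunset").images[0]
result.save("output.png")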
