
Commit 9868edb

move vlm notebooks to main optimum (#2528)
1 parent: 08f8dc8

File tree: 3 files changed, +4 −4 lines


notebooks/llava-multimodal-chatbot/llava-multimodal-chatbot-genai.ipynb

Lines changed: 1 addition & 1 deletion
@@ -98,7 +98,7 @@
 "import requests\n",
 "\n",
 "%pip install -q \"torch>=2.1.0\" \"torchvision\" \"torchaudio\" --index-url https://download.pytorch.org/whl/cpu\n",
-"%pip install -q \"git+https://github.com/eaidova/optimum-intel.git@ea/minicpmv\"\n",
+"%pip install -q \"git+https://github.com/huggingface/optimum-intel.git\"\n",
 "%pip install -q \"nncf>=2.13.0\" \"sentencepiece\" \"tokenizers>=0.12.1\" \"transformers>=4.45.0\" \"gradio>=4.36\"\n",
 "%pip install -q -U --pre --extra-index-url https://storage.openvinotoolkit.org/simple/wheels/nightly openvino-tokenizers openvino openvino-genai\n",
 "\n",

notebooks/llava-multimodal-chatbot/llava-multimodal-chatbot-optimum.ipynb

Lines changed: 2 additions & 2 deletions
@@ -100,7 +100,7 @@
 "import requests\n",
 "\n",
 "%pip install -q \"torch>=2.1.0\" \"torchvision\" \"torchaudio\" --index-url https://download.pytorch.org/whl/cpu\n",
-"%pip install -q \"git+https://github.com/eaidova/optimum-intel.git@ea/minicpmv\"\n",
+"%pip install -q \"git+https://github.com/huggingface/optimum-intel.git\"\n",
 "%pip install -q \"nncf>=2.13.0\" \"sentencepiece\" \"tokenizers>=0.12.1\" \"transformers>=4.45.0\" \"gradio>=4.36\"\n",
 "%pip install -q -U --pre --extra-index-url https://storage.openvinotoolkit.org/simple/wheels/nightly openvino-tokenizers openvino openvino-genai\n",
 "\n",
@@ -282,7 +282,7 @@
 "## Prepare OpenVINO based inference pipeline\n",
 "[back to top ⬆️](#Table-of-contents:)\n",
 "\n",
-"OpenVINO integration with Optimum Intel provides a ready-to-use API for model inference that enables smooth integration with Transformers-based solutions. For loading the Pixtral model, we will use the `OVModelForVisualCausalLM` class, which has an interface compatible with the Transformers LLaVA implementation. A model is loaded with the `from_pretrained` method, which accepts a path to a model directory or a model_id from the HuggingFace Hub (if the model has not been converted to OpenVINO format, conversion is triggered automatically). Additionally, we can provide an inference device, a quantization config (if the model has not been quantized yet), and device-specific OpenVINO Runtime configuration. More details about model inference with Optimum Intel can be found in the [documentation](https://huggingface.co/docs/optimum/intel/openvino/inference).\n"
+"OpenVINO integration with Optimum Intel provides a ready-to-use API for model inference that enables smooth integration with Transformers-based solutions. For loading the model, we will use the `OVModelForVisualCausalLM` class, which has an interface compatible with the Transformers LLaVA implementation. A model is loaded with the `from_pretrained` method, which accepts a path to a model directory or a model_id from the HuggingFace Hub (if the model has not been converted to OpenVINO format, conversion is triggered automatically). Additionally, we can provide an inference device, a quantization config (if the model has not been quantized yet), and device-specific OpenVINO Runtime configuration. More details about model inference with Optimum Intel can be found in the [documentation](https://huggingface.co/docs/optimum/intel/openvino/inference).\n"
 ]
 },
 {

notebooks/nano-llava-multimodal-chatbot/nano-llava-multimodal-chatbot.ipynb

Lines changed: 1 addition & 1 deletion
@@ -59,7 +59,7 @@
 "%pip install -q \"torch>=2.1\" \"transformers>=4.40\" \"accelerate\" \"pillow\" \"gradio>=4.26\" \"tqdm\" --extra-index-url https://download.pytorch.org/whl/cpu\n",
 "%pip install -q \"nncf>=2.13\"\n",
 "%pip install -q -U --pre --extra-index-url https://storage.openvinotoolkit.org/simple/wheels/nightly \"openvino-tokenizers[transformers]\" \"openvino>=2024.4.0\"\n",
-"%pip install -q \"git+https://github.com/eaidova/optimum-intel.git@ea/minicpmv\""
+"%pip install -q \"git+https://github.com/huggingface/optimum-intel.git\""
 ]
 },
 {

0 commit comments
