
Commit c9658a3

[release branch] Continue demo name change (#3225)
* [main] Continue plugin demo (#3203) (#3217)

CVS-165486

---------

Co-authored-by: Trawinski, Dariusz <dariusz.trawinski@intel.com>
1 parent b966e87 commit c9658a3

File tree

6 files changed: 3 additions, 2 deletions


demos/README.md

Lines changed: 2 additions & 1 deletion
@@ -11,6 +11,7 @@ ovms_demos_continuous_batching
 ovms_demos_continuous_batching_vlm
 ovms_demos_llm_npu
 ovms_demos_vlm_npu
+ovms_demos_code_completion_vsc
 ovms_demo_clip_image_classification
 ovms_demo_age_gender_guide
 ovms_demo_horizontal_text_detection
@@ -52,7 +53,7 @@ OpenVINO Model Server demos have been created to showcase the usage of the model
 |[RAG with OpenAI API endpoint and langchain](https://github.com/openvinotoolkit/model_server/blob/main/demos/continuous_batching/rag/rag_demo.ipynb)| Example how to use RAG with model server endpoints|
 |[LLM on NPU](./llm_npu/README.md)| Generate text with LLM models and NPU acceleration|
 |[VLM on NPU](./vlm_npu/README.md)| Generate text with VLM models and NPU acceleration|
-|[VisualCode assistant](./code_completion_copilot/README.md)|Use Continue extension in Visual Studio Code with local OVMS|
+|[Visual Studio Code assistant](./code_local_assistant/README.md)|Use Continue extension to Visual Studio Code with local OVMS serving|
 
 
 Check out the list below to see complete step-by-step examples of using OpenVINO Model Server with real world use cases:

demos/code_completion_copilot/README.md renamed to demos/code_local_assistant/README.md

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
-# Code Completion and Copilot served via OpenVINO Model Server
+# Visual Studio Code Local Assistant {#ovms_demos_code_completion_vsc}
 
 ## Intro
 With the rise of AI PC capabilities, hosting own Visual Studio code assistant is at your reach. In this demo, we will showcase how to deploy local LLM serving with OVMS and integrate it with Continue extension. It will employ iGPU or NPU acceleration.
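The renamed demo's intro above relies on OVMS exposing an OpenAI-compatible REST API, which is what the Continue extension gets pointed at. As a minimal sketch of that interaction (not part of this commit), assuming OVMS is already running locally with `--rest_port 8000` and serving a chat model under the illustrative name `codellama-7b-instruct`:

```python
# Minimal sketch: query a local OVMS OpenAI-compatible endpoint, the same
# interface the Continue extension would be configured against in this demo.
# Assumptions (illustrative, not from the commit): OVMS runs on localhost
# with --rest_port 8000 and serves a chat model named "codellama-7b-instruct".
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v3",  # OVMS OpenAI-compatible API base (assumed port)
    api_key="unused",                     # local serving, no real key needed
)

response = client.chat.completions.create(
    model="codellama-7b-instruct",  # placeholder model name
    messages=[
        {"role": "user", "content": "Write a Python function that reverses a string."}
    ],
    max_tokens=128,
)
print(response.choices[0].message.content)
```

If a request like this succeeds, the same base URL and model name can be reused in the Continue extension's model configuration, so the editor assistant talks to the local server instead of a cloud API.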
