Commit e098dae

CVS-46868 documentation fixes (#466)

1 parent a5887f5 · commit e098dae

File tree: 4 files changed (+11, −12 lines)


README.md

Lines changed: 2 additions & 2 deletions

```diff
@@ -28,7 +28,7 @@ A few key features:
 - [Model reshaping](docs/shape_and_batch_size.md). The server supports reshaping models in runtime.
 - [Directed Acyclic Graph Scheduler](docs/dag_scheduler.md) Connect multiple models to deploy complex processing solutions and reduce overhead of sending data back and forth.
 
-**Note:** OVMS has been tested on CentOS* and Ubuntu*. Publically released docker images are based on CentOS.
+**Note:** OVMS has been tested on CentOS* and Ubuntu*. Publicly released docker images are based on CentOS.
 
 ## Run OpenVINO Model Server
@@ -78,7 +78,7 @@ Learn more about tests in the [developer guide](docs/developer_guide.md)
 
 * All contributed code must be compatible with the [Apache 2](https://www.apache.org/licenses/LICENSE-2.0) license.
 
-* All changes needs to have pass linter, unit and functional tests.
+* All changes have to pass style, unit and functional tests.
 
 * All new features need to be covered by tests.
```

deploy/README.md

Lines changed: 7 additions & 8 deletions

```diff
@@ -10,12 +10,12 @@ inference requests to the running server.
 
 ## Installing Helm
 
-Please refer to: https://helm.sh/docs/intro/install for Helm installation.
+Please refer to the [Helm installation](https://helm.sh/docs/intro/install) guide.
 
 ## Model Repository
 
 If you already have a model repository you may use that with this helm chart. If you don't, you can use any model
-from https://download.01.org/opencv/2021/openvinotoolkit/2021.1/open_model_zoo/models_bin/.
+from the [models zoo](https://download.01.org/opencv/2021/openvinotoolkit/2021.2/open_model_zoo/models_bin/).
 
 Model Server requires a repository of models to execute inference requests. For example, you can
 use a Google Cloud Storage (GCS) bucket:
```
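For context, Model Server expects the repository to follow its versioned directory convention; a sketch of a typical bucket layout (the bucket, model, and file names below are illustrative placeholders, not from this commit):

```
gs://models-repository/
└── resnet50-binary-0001/
    └── 1/
        ├── resnet50-binary-0001.bin
        └── resnet50-binary-0001.xml
```

Each numbered subdirectory holds one version of the model; the server picks up new versions as they appear.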
```diff
@@ -39,9 +39,8 @@ are needed and you can proceed to _Deploy the Model Server_ section.
 Bucket permissions can be set with the _GOOGLE_APPLICATION_CREDENTIALS_ environment variable. Please follow the steps below:
 
 * Generate Google service account JSON file with permissions: _Storage Legacy Bucket Reader_, _Storage Legacy Object Reader_, _Storage Object Viewer_. Name a file for example: _gcp-creds.json_
-(you can follow these instructions to create a Service Account and download JSON:
-https://cloud.google.com/docs/authentication/getting-started#creating_a_service_account)
-* Create a Kubernetes secret from this JSON file:
+(you can follow these [instructions](https://cloud.google.com/docs/authentication/getting-started#creating_a_service_account) to create a Service Account and download a JSON file)
+* Create a Kubernetes secret from the JSON file:
 
     $ kubectl create secret generic gcpcreds --from-file gcp-creds.json
 
```

```diff
@@ -86,8 +85,8 @@ $ helm install ovms ovms --set model_name=resnet50-binary-0001,model_path=gs://m
 
 ## Deploy Model Server with a Configuration File
 
-To serve multiple models you can run Model Server with a configuration file as described here:
-https://github.com/openvinotoolkit/model_server/blob/master/docs/docker_container.md#starting-docker-container-with-a-configuration-file
+To serve multiple models you can run Model Server with a
+[configuration file](https://github.com/openvinotoolkit/model_server/blob/master/docs/docker_container.md#starting-docker-container-with-a-configuration-file).
 
 To deploy with config file:
 * create a configuration file named _config.json_ and fill it with proper information
```
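The _config.json_ mentioned above uses Model Server's multi-model schema (`model_config_list`); a minimal sketch — the model names and bucket paths here are placeholders, not values from this commit:

```json
{
  "model_config_list": [
    {
      "config": {
        "name": "resnet",
        "base_path": "gs://models-repository/resnet50-binary-0001"
      }
    },
    {
      "config": {
        "name": "face-detection",
        "base_path": "gs://models-repository/face-detection"
      }
    }
  ]
}
```

Each entry maps a served model name to the base path of its versioned repository directory.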
````diff
@@ -113,7 +112,7 @@ openvino-model-server LoadBalancer 10.121.14.253 1.2.3.4 8080:3004
 
 The server exposes a gRPC endpoint on port 8080 and a REST endpoint on port 8081.
 
-Follow the instructions here: https://github.com/openvinotoolkit/model_server/tree/master/example_client#submitting-grpc-requests-based-on-a-dataset-from-a-list-of-jpeg-files
+Follow the [instructions](https://github.com/openvinotoolkit/model_server/tree/master/example_client#submitting-grpc-requests-based-on-a-dataset-from-a-list-of-jpeg-files)
 to create an image classification client that can be used to perform inference with models being exposed by the server. For example:
 ```shell script
 $ python jpeg_classification.py --grpc_port 8080 --grpc_address 1.2.3.4 --input_name data --output_name prob
````

docs/architecture.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -14,7 +14,7 @@
 **<div align="center">Figure 1: Docker Container (VM or Bare Metal Host)</div>**
 
-- OpenVINO&trade; Model Server requires the models to be present in the local file system or they could be hosted remotely on object storage services. Both Google Cloud Storage and S3 compatible storage are supported. Refer to [Preparing the Models Repository](./models_repository.md) for more details.
+- OpenVINO&trade; Model Server requires the models to be present in the local file system or they could be hosted remotely on object storage services. Google Cloud, S3 and Azure compatible storage is supported. Refer to [Preparing the Models Repository](./models_repository.md) for more details.
 
 - OpenVINO&trade; Model Server is suitable for landing in Kubernetes environment. It can be also hosted on a bare metal server, virtual machine or inside a docker container.
```

docs/ovms_quickstart.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -1,6 +1,6 @@
 # OpenVINO&trade; Model Server Quickstart
 
-The OpenVINO Model Server requires a trained model in Intermediate Representation (IR) format on which it performs inference. Options to download appropriate models include:
+The OpenVINO Model Server requires a trained model in Intermediate Representation (IR) or ONNX format on which it performs inference. Options to download appropriate models include:
 
 - Downloading models from the [Open Model Zoo](https://download.01.org/opencv/2021/openvinotoolkit/2021.1/open_model_zoo/models_bin/)
 - Using the [Model Optimizer](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html) to convert models to the IR format from formats like TensorFlow*, ONNX*, Caffe*, MXNet* or Kaldi*.
```
