OpenVINO Model Server 2019 R1
The naming convention for OpenVINO Model Server versions has changed to be consistent with OpenVINO toolkit release names. It should now be easier to map which version of OVMS uses which Inference Engine backend.
OpenVINO Model Server 2019 R1 introduces support for Inference Engine version 2019 R1.
Refer to OpenVINO-RelNotes to learn more about the introduced improvements. The most important enhancements are:
- Added support for many new operations in the ONNX*, TensorFlow* and MXNet* frameworks. Topologies like Tiny YOLO v3, full DeepLab v3, and bi-directional LSTMs can now be run for optimized inference.
- More than 10 new pre-trained models were added, including gaze estimation, action recognition encoder/decoder, text recognition, and instance segmentation networks, expanding coverage to new use cases.
- Improved support for Low-Precision 8-bit Integer inference
- Upgraded mkl-dnn to v0.18
- Added support for many new layers, activation types and operations.
The example gRPC client now has an option to transpose the input data in both directions: NCHW>NHWC and NHWC>NCHW.
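A minimal sketch of what this transpose option does, assuming the input batch is a numpy array (shapes here are illustrative, not tied to any particular model):

```python
import numpy as np

# Batch of one 224x224 RGB image in NCHW layout (channels first).
nchw = np.random.rand(1, 3, 224, 224).astype(np.float32)

# NCHW -> NHWC: move the channel axis to the end -> shape (1, 224, 224, 3).
nhwc = nchw.transpose(0, 2, 3, 1)

# NHWC -> NCHW: move the channel axis back -> shape (1, 3, 224, 224).
back = nhwc.transpose(0, 3, 1, 2)

assert back.shape == nchw.shape
```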
Special kudos to @joakimr-axis for his contributions to Dockerfile cleanup and enhancements.
You can pull the public Docker image, based on the Intel Python base image, with the following command:
docker pull intelaipg/openvino-model-server:2019.1
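Once the container is running with a model loaded, predictions can be requested over gRPC using the TensorFlow Serving API. Below is a minimal client sketch; the address localhost:9000, the model name "resnet", and the input name "data" are assumptions that depend on how the server was started:

```python
import grpc
import numpy as np
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

# Connect to the model server's gRPC endpoint (port is an assumption).
channel = grpc.insecure_channel("localhost:9000")
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

# Build a PredictRequest; model and input names are illustrative.
request = predict_pb2.PredictRequest()
request.model_spec.name = "resnet"

batch = np.zeros((1, 3, 224, 224), dtype=np.float32)  # NCHW input batch
request.inputs["data"].CopyFrom(
    tf.make_tensor_proto(batch, shape=batch.shape))

# Send the request and list the returned output tensor names.
response = stub.Predict(request, timeout=10.0)
print(list(response.outputs.keys()))
```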