
OpenVINO Model API


Introduction

Model API is a set of wrapper classes for particular tasks and model architectures that simplifies data pre- and post-processing as well as routine procedures (model loading, asynchronous execution, etc.). It aims to simplify end-to-end model inference for different deployment scenarios, including local execution and serving. Model API is based on the OpenVINO inference API.

How it works

Model API searches directly in the OpenVINO Intermediate Representation for the additional information required for inference (pre-/post-processing parameters, label names, etc.). This information is used to prepare the input data and to convert the inference results into a human-readable format.

Currently, Model API supports models trained with the OpenVINO Training Extensions framework, which embeds all the metadata required for inference into the model file. For models coming from other frameworks, a metadata generation step is required before they can be used with Model API.
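The metadata lookup can be pictured with a small stand-alone sketch. Here a plain nested dictionary stands in for the rt_info section of an OpenVINO IR file; the key names and the read_meta helper are illustrative assumptions, not Model API's actual schema or code:

```python
# Illustrative sketch: how metadata embedded in the model file can drive
# wrapper configuration. The nested dict stands in for the rt_info section
# of an OpenVINO IR; key names here are hypothetical.

rt_info = {
    "model_info": {
        "model_type": "ssd",
        "labels": "person car bicycle",
        "confidence_threshold": "0.5",
    }
}

def read_meta(info: dict, path: list, default=None):
    """Walk a nested metadata dict, returning default when a key is absent."""
    node = info
    for key in path:
        if not isinstance(node, dict) or key not in node:
            return default
        node = node[key]
    return node

# Values are stored as strings in the IR, so the wrapper parses them.
labels = str(read_meta(rt_info, ["model_info", "labels"], "")).split()
threshold = float(read_meta(rt_info, ["model_info", "confidence_threshold"], 0.5))
print(labels, threshold)  # ['person', 'car', 'bicycle'] 0.5
```

With everything needed for inference resolved from the file itself, the wrapper can be constructed from a single path with no extra configuration.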

Supported model formats

Features

  • Python and C++ API
  • Synchronous and asynchronous inference
  • Local inference and serving through a REST API (Python only)
  • Embedding of preprocessing into the model for faster inference
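The difference between synchronous and asynchronous execution can be sketched with the standard library alone. This is an analogy for the scheduling pattern, not Model API's actual async interface; the infer function is a stand-in for a real model call:

```python
from concurrent.futures import ThreadPoolExecutor

def infer(image):
    # Stand-in for a model call such as model(image); returns a fake result.
    return {"input": image, "score": 0.9}

images = ["img0", "img1", "img2"]

# Synchronous: block on each request in turn.
sync_results = [infer(img) for img in images]

# Asynchronous: submit all requests up front, then collect the results,
# mirroring how an async pipeline overlaps requests instead of serializing them.
with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(infer, img) for img in images]
    async_results = [f.result() for f in futures]

print(sync_results == async_results)  # True: same results, different scheduling
```

Asynchronous execution pays off when several inputs (e.g. video frames) are in flight at once, since new requests can be prepared while earlier ones are still running.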

Installation

pip install openvino-model-api

Usage

from model_api.models import Model

# Create a model wrapper from a compatible model generated by OpenVINO Training Extensions
# Use URL to work with OVMS-served model, e.g. "localhost:9000/models/ssdlite_mobilenet_v2"
model = Model.create_model("model.xml")

# Run synchronous inference locally
result = model(image)  # image is numpy.ndarray

# Print results in model-specific format
print(f"Inference result: {result}")

Prepare a model for InferenceAdapter

There are use cases when it is not possible to modify the internal ov::Model because it is hidden behind an InferenceAdapter, for example when the model is served by OVMS. create_model() can construct a model from a given InferenceAdapter. This approach assumes that the model behind the InferenceAdapter was already configured by create_model() called with a string (a path or a model name). Such a model can be prepared as follows:

model = DetectionModel.create_model("~/.cache/omz/public/ssdlite_mobilenet_v2/FP16/ssdlite_mobilenet_v2.xml")
model.save("serialized.xml")

For more details, please refer to this project's examples.
