Code deployment
Introduction
This code deployment .zip archive contains:
Inference model(s) for your Intel® Geti™ project.
A sample image or video frame, exported from your project.
A simple code example that runs inference on the sample image and visualizes the result.
Jupyter notebooks with instructions and code for running inference for your project, either locally or via the OpenVINO Model Server (OVMS).
The deployment holds one model for each task in your project. For example, if you created a deployment for a Detection -> Classification project, it will contain both a detection and a classification model. The Intel® Geti™ SDK is used to run inference for all models in the project's task chain.
This README describes the steps required to get the code sample up and running on your machine.
Prerequisites
- Python 3.9, 3.10 or 3.11
- [Optional, only for OVMS notebook] Docker
Installation
Install prerequisites. You may also need to install pip. For example, on Ubuntu execute the following command to install Python and pip:
sudo apt install python3-dev python3-pip
If pip is already installed, make sure it is up to date:
pip install --upgrade pip
Create a clean virtual environment:
One of the possible ways to create a virtual environment is to use virtualenv:
python -m pip install virtualenv
python -m virtualenv <directory_for_environment>
Activate the virtual environment before starting to work inside it:
On Linux and macOS:
source <directory_for_environment>/bin/activate
On Windows:
.\<directory_for_environment>\Scripts\activate
Make sure that the environment contains wheel by running the following command:
python -m pip install wheel
NOTE: On Linux and macOS, you may need to type python3 instead of python.
In your terminal, navigate to the example_code directory in the code deployment package.
Install the requirements in the environment:
python -m pip install -r requirements.txt
(Optional) Install the requirements for running the demo_notebook.ipynb or demo_ovms.ipynb Jupyter notebooks:
python -m pip install -r requirements-notebook.txt
Usage
Local inference
Both the demo.py script and the demo_notebook.ipynb notebook contain a code sample for (a minimal sketch of this flow follows the list):
Loading the code deployment (and the models it contains) into memory.
Loading sample_image.jpg, a random image taken from the project you deployed.
Running inference on the sample image.
Visualizing the inference results.
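For orientation, the local inference flow roughly follows the pattern below. This is a minimal sketch, not the exact contents of demo.py: the relative path to the deployment folder and the convenience call show_image_with_annotation_scene are assumptions based on the Geti SDK, so use demo.py as the authoritative version.

```python
import cv2

from geti_sdk.deployment import Deployment
from geti_sdk.utils import show_image_with_annotation_scene

# Load the deployment (and the models for all tasks in the project) into memory.
# NOTE: "../deployment" assumes this runs from inside the example_code folder.
deployment = Deployment.from_folder("../deployment")
deployment.load_inference_models(device="CPU")

# Load the sample image; OpenCV reads BGR, the SDK expects RGB.
image_bgr = cv2.imread("sample_image.jpg")
image_rgb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)

# Run inference and visualize the resulting AnnotationScene on the image.
prediction = deployment.infer(image_rgb)
show_image_with_annotation_scene(image_rgb, prediction)
```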
Inference with OpenVINO Model Server
The additional demo notebook demo_ovms.ipynb shows how to set up and run an OpenVINO Model Server (OVMS) for your deployment and make inference requests to it. The notebook contains instructions and code to (a sketch of the container launch follows the list):
Generate a configuration file for OVMS.
Launch an OVMS Docker container with the proper configuration.
Load sample_image.jpg as an example image to run inference on.
Make an inference request to OVMS.
Visualize the inference results.
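For reference, launching the OVMS container typically looks like the command below. This is a minimal sketch rather than the exact command from the notebook: the mounted directory name (ovms_models), the config file name (ovms_model_config.json), the container name, and the port are assumptions, so follow demo_ovms.ipynb for the values generated for your deployment.

```sh
# Serve the deployment's models with OVMS on gRPC port 9000.
# Paths and names below are illustrative; use the ones produced by the notebook.
docker run -d --rm --name ovms_demo \
  -v $(pwd)/ovms_models:/models \
  -p 9000:9000 \
  openvino/model_server:latest \
  --config_path /models/ovms_model_config.json \
  --port 9000
```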
Running the demo script
In your terminal:
Make sure the virtual environment created above is activated.
Make sure you are in the example_code directory in your terminal.
Run the demo using:
python demo.py
The script will run inference on sample_image.jpg. A window will pop up displaying the image with the inference results visualized on top of it.
Running the demo notebooks
In your terminal:
Make sure the virtual environment created above is activated.
Make sure you are in the example_code directory in your terminal.
Start JupyterLab using:
jupyter lab
This should launch your web browser and take you to the main page of JupyterLab.
Inside JupyterLab:
In the sidebar of the JupyterLab interface, double-click demo_notebook.ipynb or demo_ovms.ipynb to open one of the notebooks.
Execute the notebook cell by cell to view the inference results.
NOTE: The demo_notebook.ipynb is a great way to explore the AnnotationScene object that is returned by the inference. The demo code only has very basic visualization functionality, which may not be sufficient for all use cases. For example, if your project contains many labels, it may not be able to visualize the results very well. In that case, you should build your own visualization logic based on the AnnotationScene returned by the deployment.infer() method (see the sketch below for a starting point).
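As a starting point for custom visualization, the sketch below walks the prediction and draws labelled boxes with OpenCV. It is a minimal sketch under assumptions about the AnnotationScene structure (an annotations list whose items expose shape coordinates via x, y, width, height and labels with a name and probability); inspect the actual objects in demo_notebook.ipynb and adapt the attribute access accordingly.

```python
import cv2

# `image` is the BGR image and `prediction` the AnnotationScene returned by
# deployment.infer(); both come from the demo code.
# NOTE: the attribute names used below (annotations, shape.x/y/width/height,
# labels[0].name/probability) are assumptions made for illustration.
def draw_predictions(image, prediction):
    output = image.copy()
    for annotation in prediction.annotations:
        shape = annotation.shape
        # Assume a rectangular shape with top-left corner and size in pixels.
        x, y = int(shape.x), int(shape.y)
        w, h = int(shape.width), int(shape.height)
        label = annotation.labels[0]
        text = f"{label.name} {label.probability:.2f}"
        cv2.rectangle(output, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(
            output, text, (x, max(y - 5, 0)),
            cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1,
        )
    return output

cv2.imshow("Predictions", draw_predictions(image, prediction))
cv2.waitKey(0)
```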
Troubleshooting
If you can only access the Internet through a proxy server, use pip with the proxy options shown in the command below:
python -m pip install --proxy http://<usr_name>:<password>@<proxyserver_name>:<port#> <pkg_name>
If you use Anaconda as your environment manager, note that OpenVINO has limited Conda support. You can still use conda to create and activate your Python environment, but in that case use only pip (rather than conda) to install packages in your environment.
If you run into problems with the pip install command, update pip first:
python -m pip install --upgrade pip
Package contents
The code deployment files are structured as follows:
- deployment
  - project.json
  - "<title of task 1>"
    - model
      - model.xml
      - model.bin
      - config.json
    - python
      - model_wrappers
        - __init__.py
        - ... (model wrappers required to run the demo)
      - README.md
      - LICENSE
      - demo.py
      - requirements.txt
  - "<title of task 2>" (Optional)
    - ...
- example_code
  - demo.py
  - demo_notebook.ipynb
  - demo_ovms.ipynb
  - README.md
  - requirements.txt
  - requirements-notebook.txt
  - sample_image.jpg
- LICENSE