Introduction
With this article, we will start discussing the Intel® OpenVINO™ (Open Visual Inferencing and Neural Network Optimization) Toolkit. It is mainly used for post-training optimization of models and for accelerating inference on Intel® hardware.
Prerequisites
- Python
- Basic Knowledge of Machine Learning
To learn about machine learning, you can take the C# Corner Machine Learning Learn Series here.
Intel® OpenVINO™
The OpenVINO™ Toolkit’s name comes from “Open Visual Inferencing and Neural Network Optimization”. It is largely focused on optimizing neural network inference and is open source.
It is developed by Intel® and supports fast inference across Intel® CPUs, GPUs, and FPGAs through a common API. OpenVINO™ can use its Model Optimizer to optimize inference models built with several different frameworks, such as TensorFlow or Caffe. The Inference Engine then uses this optimized representation to speed up inference on the target hardware. A broad variety of pre-trained models, already converted with the Model Optimizer, is also available.
OpenVINO™ helps you operate at the edge, where device speed and size are constrained. It does not increase inference accuracy; accuracy is achieved earlier, through training. The smaller and faster models OpenVINO™ produces, along with the hardware optimizations it offers, are ideal for lower-power applications. For example, an IoT unit does not come with several GPUs and unlimited memory for running applications.
OpenVINO™ Toolkit
- Enables CNN-based deep learning inference on edge devices
- Supports heterogeneous execution across an Intel® CPU (Central Processing Unit), Intel® Integrated Graphics Processing Unit (IGPU), Intel® FPGA (Field Programmable Gate Array), Intel® Neural Compute Stick 2 (NCS2), and Intel® Vision Accelerator Design with Intel® Movidius™ VPUs
- Speeds time-to-market through an easy-to-use library of computer vision functions and pre-optimized kernels
- Includes optimized calls for computer vision standards, including OpenCV* and OpenCL™
System Requirements
- 6th to 10th generation Intel® Core™ processors and Intel® Xeon® processors
- Intel® Xeon® processor E family (formerly codenamed Sandy Bridge, Ivy Bridge, Haswell, and Broadwell)
- 3rd generation Intel® Xeon® Scalable processor (formerly codenamed Cooper Lake)
- Intel® Xeon® Scalable processor (formerly Skylake and Cascade Lake)
- Intel Atom® processor with support for Intel® Streaming SIMD Extensions 4.1 (Intel® SSE4.1)
- Intel Pentium® processor N4200/5, N3350/5, or N3450/5 with Intel® HD Graphics
- Intel® Neural Compute Stick 2
- Intel® Vision Accelerator Design with Intel® Movidius™ VPUs
Note
I used a 7th generation Intel® Core™ i3 PC, so the toolkit will work on a low-configuration system as well.
Installing Intel® OpenVINO™ toolkit on Linux
I am using Linux; you may use Windows, macOS, or Raspbian instead. To get the steps for those operating systems, please visit the official Intel® OpenVINO™ documentation.
- Download the Intel® OpenVINO™ Toolkit installation package for Linux operating system from here.
- Sign in with your Intel account to download the installation file.
By default, the file is saved as l_openvino_toolkit_p_<version>.tgz in the Downloads folder.
- Open a terminal and navigate to the folder where the file was downloaded using the following command:
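Assuming the archive was downloaded to the default ~/Downloads folder, the command would be:
- cd ~/Downloads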
- Unpack the file using,
- tar -xvzf l_openvino_toolkit_p_<version>.tgz
- Go to the l_openvino_toolkit_p_<version> directory,
- cd l_openvino_toolkit_p_<version>
If you have a previous version of the Intel Distribution of OpenVINO toolkit installed, rename or delete these two directories,
- ~/inference_engine_samples_build
- ~/openvino_models
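For example, if you would rather keep the old directories than delete them, you can rename them like this:
- mv ~/inference_engine_samples_build ~/inference_engine_samples_build_old
- mv ~/openvino_models ~/openvino_models_old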
- To start the installation process, execute the following command:
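The Linux package ships both a GUI installer (install_GUI.sh) and a command-line installer (install.sh); since the next step opens a pop-up window, I am using the GUI script here:
- sudo ./install_GUI.sh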
- After executing the above command, a pop-up window will open; follow the instructions on your screen.
- Then you will get two options: continue with the default installation or go with a customized installation.
- If you go with the default installation, the screen will look like the following:
- If you choose to go with a customized installation, the screen will look like the following:
When installed as root, the default installation directory for the Intel Distribution of OpenVINO is /opt/intel/openvino_<version>/.
- Click install to start the installation process.
- At the end of the installation, the screen will look like this:
- Once the basic installation is complete, we will set up the system to run OpenVINO. OpenVINO requires some additional dependencies to be installed on your system.
- Run the following command to change the directory:
- cd /opt/intel/openvino/install_dependencies
- Execute the following command to run the script to install all the external dependencies:
- sudo -E ./install_openvino_dependencies.sh
- After that, the next step is to set up the environment variables of the system. Execute the following command:
- source /opt/intel/openvino/bin/setupvars.sh
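The variables set by setupvars.sh apply only to the current terminal session. Optionally, you can make them permanent by appending the same source command to your ~/.bashrc, for example:
- echo "source /opt/intel/openvino/bin/setupvars.sh" >> ~/.bashrc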
Now your system is all set to run Intel OpenVINO Toolkit applications.
To see the instructions for other devices, please visit the official documentation.
Model Optimizer
The Model Optimizer is a Python*-based command-line tool for importing trained models from popular deep learning frameworks such as Caffe*, TensorFlow*, Apache MXNet*, ONNX*, and Kaldi*.
The Model Optimizer is a key component of the Intel® OpenVINO™ Toolkit. You cannot perform inference on your trained model without running the model through the Model Optimizer. When you run a pre-trained model through the Model Optimizer, your output is an Intermediate Representation (IR) of the network.
An IR is a special type of file that the Intel® OpenVINO™ Toolkit uses as input; all processing, post-training fine-tuning, and optimization are done using this IR form.
It consists of two types of files:
- .xml: Describes the network topology
- .bin: Contains the weights and biases in the form of binary data
The Model Optimizer is primarily used to convert Caffe*, TensorFlow*, Apache MXNet*, ONNX*, and Kaldi* models into the IR form.
You can either install the Model Optimizer prerequisites for an individual framework, or install the universal prerequisites, which allow the Model Optimizer to convert models from any supported framework to IR form.
To install the universal Model Optimizer prerequisites
Go to the Model Optimizer prerequisites directory:
- cd /opt/intel/openvino/deployment_tools/model_optimizer/install_prerequisites
Run the following script:
- sudo ./install_prerequisites.sh
To install framework-specific Model Optimizer prerequisites
Go to the Model Optimizer prerequisites directory:
- cd /opt/intel/openvino/deployment_tools/model_optimizer/install_prerequisites
- Caffe
- sudo ./install_prerequisites_caffe.sh
- TensorFlow
- sudo ./install_prerequisites_tf.sh
- MXNet
- sudo ./install_prerequisites_mxnet.sh
- ONNX
- sudo ./install_prerequisites_onnx.sh
- Kaldi
- sudo ./install_prerequisites_kaldi.sh
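Once the prerequisites are installed, a model is converted by running the mo.py script from the model_optimizer directory. As a rough sketch (the model path, output directory, and FP16 precision below are only placeholders for your own values), converting a TensorFlow frozen graph might look like this:
- cd /opt/intel/openvino/deployment_tools/model_optimizer
- python3 mo.py --input_model ~/models/frozen_model.pb --output_dir ~/ir_models --data_type FP16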
Running the First Application
After you have followed all steps, you are all set to run your first Intel® OpenVINO™ Toolkit application.
The demo application is an image classification application that uses the SqueezeNet model. The application takes an image as input. You can also provide input through a webcam or live camera feed using an MQTT server (I will discuss the whole process in the coming articles). Here I am providing a .png image.
After building the application, you will see the top 10 predicted classes with their labels and corresponding confidence values. You can view details about the application here.
To run your first demo application, follow the below steps.
Go to the Inference Engine demo directory:
- cd /opt/intel/openvino/deployment_tools/demo
Run the Image Classification verification script:
- ./demo_squeezenet_download_convert_run.sh
Sample output is as follows:
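Note that the script runs inference on the CPU by default. The demo scripts also accept a target-device flag, so if your system has a supported accelerator you can try, for example:
- ./demo_squeezenet_download_convert_run.sh -d GPU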
After that, execute the following command:
- ./demo_security_barrier_camera.sh
Close the image viewer window to complete the execution.
There are two more demo applications present in the demo folder. If you wish, you can try these out as well.
Security Barrier Camera Demo
This demo shows vehicle detection, vehicle attribute recognition, and license plate recognition using the Inference Engine with pre-trained models. The demo should be executed in GUI mode because visual output is generated: it displays the input image with bounding boxes and detected text.
Benchmark Demo Using SqueezeNet
This demo illustrates how to measure deep learning inference performance on supported devices using the benchmark application. It prints latency and throughput values from the performance counters.
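In my installation, this benchmark demo is started by a script in the same demo directory (the script name may differ slightly between releases, so check your demo folder):
- cd /opt/intel/openvino/deployment_tools/demo
- ./demo_benchmark_app.sh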
Conclusion
In this article, we learned the basics of the Intel® OpenVINO™ Toolkit and then installed and configured it on our local system. In the coming articles, I will go deeper into the Intel® OpenVINO™ Toolkit and build some projects as well.
I hope you found this article helpful. For any doubts or clarifications, please feel free to comment below.