What is the Intel Model Optimizer? And why is it used?
The Model Optimizer is a Python-based command-line tool for importing trained models from popular deep learning frameworks such as Caffe, TensorFlow, Apache MXNet, ONNX, and Kaldi.
The Model Optimizer is a key component of the Intel® OpenVINO™ Toolkit. You cannot perform inference on your trained model without running the model through the Model Optimizer. When you run a pre-trained model through the Model Optimizer, your output is an Intermediate Representation (IR) of the network.
An IR is a special file format that the Intel® OpenVINO™ Toolkit uses as input; all further processing, post-training fine-tuning, and optimization are done on this IR form.
It consists of two files: an .xml file that describes the network topology, and a .bin file that contains the weights and biases.
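To make the .xml half of the pair concrete, here is a minimal sketch that parses a heavily simplified stand-in for an IR topology file with Python's standard library. The layer names and structure are illustrative assumptions, not the exact output of the Model Optimizer; a real IR .xml is far larger, and the weights live in the companion .bin file.

```python
import xml.etree.ElementTree as ET

# Simplified stand-in for an IR .xml topology file (illustrative only):
# the real file lists every layer and edge of the converted network,
# while the matching .bin file holds the binary weights and biases.
IR_XML = """<net name="example_net" version="10">
  <layers>
    <layer id="0" name="input" type="Parameter"/>
    <layer id="1" name="conv1" type="Convolution"/>
    <layer id="2" name="output" type="Result"/>
  </layers>
  <edges>
    <edge from-layer="0" from-port="0" to-layer="1" to-port="0"/>
    <edge from-layer="1" from-port="1" to-layer="2" to-port="0"/>
  </edges>
</net>"""

root = ET.fromstring(IR_XML)
# Collect (id, name, type) for each layer in the topology.
layers = [(layer.get("id"), layer.get("name"), layer.get("type"))
          for layer in root.find("layers")]
print(layers)
```

Inspecting the .xml like this is a quick way to see what the Model Optimizer produced, since the topology is plain XML.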
The Model Optimizer is primarily used to convert Caffe, TensorFlow, Apache MXNet, ONNX, and Kaldi models into IR form. To convert Apple Core ML models to IR form, you must first convert them to ONNX format.
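As a sketch of the conversion step (the model file names and output directory are placeholders, and the `mo` entry point assumes an installed OpenVINO toolkit):

```shell
# Convert a frozen TensorFlow model to IR (.xml + .bin land in ir_output/).
mo --input_model frozen_model.pb --output_dir ir_output/

# Convert an ONNX model (e.g. one exported from Core ML) the same way.
mo --input_model model.onnx --output_dir ir_output/
```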
To read more, visit the official OpenVINO documentation.