Introduction
The Open Neural Network Exchange format, known as ONNX
(https://onnx.ai/), is an open ecosystem that empowers AI developers to choose the tools that best fit their project.
ONNX is the result of a collaboration between AWS, Facebook, and Microsoft to allow the transfer of deep learning models between different frameworks.
Data scientists use multiple frameworks to develop deep learning algorithms, such as Caffe2, PyTorch, Apache MXNet, the Microsoft Cognitive Toolkit, and TensorFlow. The choice of framework depends on many constraints (existing developments, team skills…)
These operational challenges, which slow down a project's start-up phase, appear constantly, and more and more vendors are trying to find ways to break the deadlock.
Install ONNX
First, build protobuf locally by cloning the GitHub project:
- git clone https://github.com/protocolbuffers/protobuf.git
- cd protobuf
- git checkout 3.9.x
- cd cmake
- cmake -G "Visual Studio 15 2017 Win64" -Dprotobuf_MSVC_STATIC_RUNTIME=OFF -Dprotobuf_BUILD_TESTS=OFF -Dprotobuf_BUILD_EXAMPLES=OFF -DCMAKE_INSTALL_PREFIX=<protobuf_install_dir>
- msbuild protobuf.sln /m /p:Configuration=Release
- msbuild INSTALL.vcxproj /p:Configuration=Release
Second, build your ONNX project:
- git clone https://github.com/onnx/onnx.git
- cd onnx
- git submodule update --init --recursive
- set PATH=<protobuf_install_dir>\bin;%PATH%
- set USE_MSVC_STATIC_RUNTIME=0
- python setup.py install
Third, test the installation:
- python -c "import onnx"
ONNX Runtime
ONNX Runtime is a newer alternative for running ONNX models; it supports CUDA, MLAS, and MKL-DNN for compute acceleration. It is released as a Python package (onnxruntime-gpu supports GPUs, while onnxruntime is the CPU release).