ONNX Explained

Introduction 

 
The Open Neural Network Exchange, known as ONNX (https://onnx.ai/), is an open ecosystem that empowers AI developers to choose the best tools for their project.
 
ONNX is the result of a collaboration between AWS, Facebook, and Microsoft to allow the transfer of deep learning models between different frameworks.
 
Data scientists use multiple frameworks to develop deep learning algorithms, such as Caffe2, PyTorch, Apache MXNet, Microsoft Cognitive Toolkit (CNTK), and TensorFlow. The choice of framework depends on many constraints (existing developments, team skills, …).
 
Moving models between these frameworks creates operational challenges that slow down the start-up phase of a project. More and more vendors are trying to break this deadlock, and ONNX addresses it by defining a common, open exchange format for models.
 
Install ONNX
 
First, build protobuf locally by cloning its GitHub repository:
  1. git clone https://github.com/protocolbuffers/protobuf.git  
  2. cd protobuf  
  3. git checkout 3.9.x  
  4. cd cmake  
  5. # Explicitly set -Dprotobuf_MSVC_STATIC_RUNTIME=OFF to make sure protobuf does not statically link to the runtime library  
  6. cmake -G "Visual Studio 15 2017 Win64" -Dprotobuf_MSVC_STATIC_RUNTIME=OFF -Dprotobuf_BUILD_TESTS=OFF -Dprotobuf_BUILD_EXAMPLES=OFF -DCMAKE_INSTALL_PREFIX=<protobuf_install_dir>  
  7. msbuild protobuf.sln /m /p:Configuration=Release  
  8. msbuild INSTALL.vcxproj /p:Configuration=Release 
Second, build ONNX itself:
  1. # Get ONNX    
  2. git clone https://github.com/onnx/onnx.git     
  3. cd onnx    
  4. git submodule update --init --recursive    
  5. # Set environment variables to find protobuf and turn off static linking of ONNX to the runtime library.    
  6. # Even better option is to add it to user\system PATH so this step can be performed only once.    
  7. # For more details check https://docs.microsoft.com/en-us/cpp/build/reference/md-mt-ld-use-run-time-library?view=vs-2017    
  8. set PATH=<protobuf_install_dir>\bin;%PATH%    
  9. set USE_MSVC_STATIC_RUNTIME=0    
  10. # Optional : Set environment variable `ONNX_ML=1` for onnx-ml     
  11. # Build ONNX    
  12. python setup.py install   
Third, verify that ONNX can be imported:
  1. python -c "import onnx" 
Finally, install the test dependencies and run the test suite:
  1. pip install pytest nbval 
  2. pytest 
ONNX Runtime
 
This is a newer alternative runtime that supports CUDA, MLAS, and MKL-DNN for compute acceleration. It is released as a Python package in two flavors: onnxruntime-gpu targets GPUs, while onnxruntime is the CPU-only release.