Overview of Intel® Distribution of OpenVINO™ Toolkit

The Intel® DevCloud for the Edge comes preinstalled with the Intel® Distribution of OpenVINO™ toolkit to help developers run inference on a range of compute devices. OpenVINO™ is a toolkit designed to accelerate the development of applications and solutions that emulate human vision. Based on convolutional neural networks (CNNs), the Intel® Distribution of OpenVINO™ toolkit shares workloads across Intel® hardware (including accelerators) to maximize performance.

The Intel® Distribution of OpenVINO™ toolkit includes:

  • A Model Optimizer to convert trained models from popular frameworks and formats such as Caffe, TensorFlow, ONNX, and Kaldi.
  • An Inference Engine that supports heterogeneous execution across Intel® hardware, including CPUs, GPUs, FPGAs, and the Intel® Neural Compute Stick 2 (NCS2).
  • A common API for heterogeneous Intel® hardware (see the sketch after this list).
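
As a rough illustration of that common API, the sketch below uses the pre-2022 Python bindings (openvino.inference_engine) to load one IR onto two different devices. The model file names and the devices shown are placeholder assumptions, not part of this page:

    # A minimal sketch, assuming the pre-2022 Python bindings and
    # placeholder IR file names; available devices depend on the host.
    from openvino.inference_engine import IECore

    ie = IECore()

    # Enumerate the Intel devices this Inference Engine instance can see,
    # e.g. ['CPU', 'GPU', 'MYRIAD'].
    print(ie.available_devices)

    # One IR, many targets: only the device_name string changes.
    net = ie.read_network(model="model.xml", weights="model.bin")
    exec_net_cpu = ie.load_network(network=net, device_name="CPU")
    exec_net_ncs2 = ie.load_network(network=net, device_name="MYRIAD")  # NCS2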

Core Flow

The basic workflow is:

  1. Use a deep learning framework, such as Caffe, to create and train a CNN model.
  2. Run the trained model through the Model Optimizer to produce an optimized Intermediate Representation (IR), stored as a pair of files (.xml for the network topology and .bin for the weights), for use with the Inference Engine.
  3. The user application then uses the Inference Engine to load the IR files and run the model on the targeted devices (see the sketch below).
[Figure: OpenVINO basic workflow]
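
To make steps 2 and 3 concrete, here is a minimal sketch using the pre-2022 Python bindings. The mo.py flags, file paths, blob name, and dummy input are illustrative assumptions; the exact conversion command varies by source framework and OpenVINO version:

    # Step 2 is normally a one-time command-line conversion, for example:
    #   python3 mo.py --input_model model.caffemodel --output_dir ir/
    # (flags vary by framework), producing ir/model.xml and ir/model.bin.
    import numpy as np
    from openvino.inference_engine import IECore

    ie = IECore()

    # Step 3a: load the IR produced by the Model Optimizer.
    net = ie.read_network(model="ir/model.xml", weights="ir/model.bin")
    input_name = next(iter(net.input_info))  # first (often only) input blob

    # Step 3b: compile the network for a target device; swap "CPU" for
    # "GPU" or "MYRIAD" to retarget without changing application code.
    exec_net = ie.load_network(network=net, device_name="CPU")

    # Step 3c: run inference. A real application would pass a preprocessed
    # image; a zero tensor of the model's input shape stands in here.
    dummy = np.zeros(net.input_info[input_name].input_data.shape, dtype=np.float32)
    results = exec_net.infer(inputs={input_name: dummy})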

For complete information, see the Documentation for the Intel® Distribution of OpenVINO™ toolkit.

Up Next:

Learn more about the Intel® DevCloud environment.