Intel® DevCloud for the Edge — Overview

The Intel® DevCloud for the Edge allows you to actively prototype and experiment with AI workloads for computer vision on Intel hardware. You have full access to hardware platforms hosted in our cloud environment, designed specifically for deep learning. You can test your model’s performance using the Intel® Distribution of OpenVINO™ toolkit, as well as various CPU, GPU, and VPU combinations, such as the Intel® Neural Compute Stick 2 (NCS2), and FPGAs, such as the Intel® Arria® 10. The DevCloud contains a series of Jupyter* Notebook tutorials and examples preloaded with everything you need to get up and running quickly. This includes pretrained models, sample data, and executable code from the Intel® Distribution of OpenVINO™ toolkit, as well as other tools for deep learning. These notebooks are designed to help you learn how to implement deep-learning applications to enable compelling, high-performance solutions.

The Intel® DevCloud does not require any hardware setup at your end. DevCloud uses Jupyter* Notebooks, a browser-based development environment that enables you to run code from within your browser and to visualize results instantly. With Intel® DevCloud and Jupyter*, you can prototype your innovative computer vision solutions in our cloud environment, and watch your code run on any combination of our available hardware resources.

The Intel® DevCloud for the Edge consists of:

  • Development nodes, used to develop code and submit compute jobs.
  • Edge nodes to run and test edge workloads.
  • Storage servers that provide a network-shared filesystem, meaning that your data is accessible from the same path from any machine in the cloud.
  • Queue server to submit compute jobs to edge nodes.
  • UI software that allows you to access the Intel® DevCloud resources from a web browser.
[Diagram: how Intel® DevCloud for the Edge works]

How It Works Video

This video gives an overview of how Intel® DevCloud for the Edge and OpenVINO™ toolkit help developers of computer vision applications get the most out of their applications and pick the perfect hardware for each task.

Hardware Available

The DevCloud hosts edge compute platforms and devices for prototyping and testing computer vision applications. With Intel® DevCloud for the Edge, you need only specify your preferred hardware platform or combination of CPU, GPU, VPU, and FPGA architectures, submit your computer vision job, and then view real-time performance results. Our tutorials and samples provide the code needed to develop applied computer vision use cases on accelerated hardware. Alternatively, you can create your own Jupyter* Notebook and test the results before investing in your own hardware solution.

When your code is ready, you can run it on a development server CPU or send it to a cluster of one or more edge compute hardware devices on the Intel® DevCloud for accelerated inferencing. What you learn from these experiments can help you navigate potential pitfalls, optimize performance, identify appropriate hardware, and accelerate your time to market.
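After submitting a job to the edge cluster, you typically poll the queue until the job completes. The submit-and-check loop can be sketched in a few lines; the `qstat` output below is illustrative (standard PBS/Torque format), not captured from a real DevCloud session:

```python
# Sketch: check the state of a submitted job by parsing `qstat` output.
# Assumes the standard PBS/Torque tabular format used by batch queue servers.
def job_state(qstat_output, job_id):
    """Return the single-letter PBS state (Q = queued, R = running),
    or None if the job is no longer listed (i.e. it has completed)."""
    for line in qstat_output.splitlines():
        fields = line.split()
        if fields and fields[0].startswith(job_id):
            return fields[-2]  # state column precedes the queue name
    return None

# Illustrative sample output, not real DevCloud data:
sample = """\
Job ID          Name         User    Time Use S Queue
--------------- ------------ ------- -------- - -----
12345.v-qsvr-1  myscript.sh  u99999  00:00:03 R batch
"""
print(job_state(sample, "12345"))  # R
```

In practice you would feed `job_state` the text returned by running `qstat` on a development node, sleeping between polls until it returns `None`.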

Edge compute nodes in the Intel® DevCloud come equipped with a CPU, which will usually include an integrated HD graphics processor. Some nodes will also have computing accelerators connected over the PCIe or USB bus. We refer to CPUs, graphics processors, and accelerators collectively as "compute devices". These devices can be used for ML inferencing with the help of the OpenVINO™ toolkit.

When you study the performance of edge computing solutions in the DevCloud, you can target a specific compute device by (i) requesting a node with that accelerator and (ii) using the OpenVINO™ IEPlugin corresponding to that compute device. However, note that an application targeting a particular compute device may perform differently in different edge node groups: in heterogeneous systems (CPU + accelerator), the performance of an inference application may depend on both the CPU and the accelerator.

List of Devices

| Processor | RAM | Integrated GPU | Inference Accelerator |
|-----------|-----|----------------|-----------------------|
| Intel® Atom® x5-E3940 | 4 GB | Intel® HD Graphics 500 | |
| Intel® Atom® x7-E3950 | 2 GB | Intel® HD Graphics 505 | |
| Intel® Atom® x7-E3950 | 4 GB | Intel® HD Graphics 505 | |
| Intel® Atom® x7-E3950 | 4 GB | Intel® HD Graphics 505 | Intel® Movidius™ Myriad™ X based Intel® Vision Accelerator Design cards – x1 |
| Intel® Core™ i5 6500TE | 8 GB | Intel® HD Graphics 530 | |
| Intel® Core™ i5 6500TE | 8 GB | Intel® HD Graphics 530 | Intel® Movidius™ Myriad™ X based Intel® Vision Accelerator Design cards – x8 |
| Intel® Core™ i5 6500TE | 8 GB | Intel® HD Graphics 530 | Intel® Arria® 10 FPGA |
| Intel® Core™ i5 6500TE | 8 GB | Intel® HD Graphics 530 | Intel® Movidius™ Myriad™ X based Intel® Vision Accelerator Design cards – x1 |
| Intel® Core™ i5 6442EQ | 8 GB | Intel® HD Graphics 530 | |
| Intel® Core™ i5 7500T | 8 GB | Intel® HD Graphics 630 | |
| Intel® Core™ i5 7500 | 8 GB | Intel® HD Graphics 630 | |
| Intel® Core™ i7 8665UE | 16 GB | Intel® UHD Graphics 620 | Intel® Movidius™ Myriad™ X based Intel® Vision Accelerator Design cards – x2 |
| Intel® Core™ i5 8365UE | 8 GB | Intel® UHD Graphics 620 | |
| Intel® Core™ i7 8665UE | 16 GB | Intel® UHD Graphics 620 | |
| Intel® Xeon® E3 1268L v5 | 32 GB | Intel® HD Graphics 505 | |
| Intel® Xeon® Gold 5120 | 384 GB | | |
| Intel® Xeon® Bronze 3206R | 48 GB | | |
| Intel® Xeon® Silver 4214R | 48 GB | | |
| Intel® Xeon® Gold 5220R | 96 GB | | |
| Intel® Xeon® Gold 6258R | 96 GB | | |
| Intel® Xeon® E-2286M Processor | 32 GB | Intel® HD Graphics 630 | |
| Intel® Core™ i7-10710U Processor | 16 GB | Intel® UHD Graphics | |
| Intel® Core™ i7-1065G7 Processor | 16 GB | Intel® Iris Plus Graphics G7 | |

Intel, the Intel logo, Intel Atom, Intel Core, Intel Xeon, Movidius, and Myriad are trademarks of Intel Corporation or its subsidiaries.

How to Request

To request an edge compute node with a particular device, submit a job request for a node with a property listed in the "Queue Label" column. For example:

qsub -l nodes=1:i5-6500te myscript.sh

This command ensures that the target system will have an Intel Core i5 6500TE CPU; however, it does not specify the type of edge compute node. Therefore, to specify the type of system you want for your job, ensure that you request a property that corresponds to your desired edge node group, as explained here.
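The `-l nodes=…` resource spec is just a colon-separated list of properties, so it can also be composed programmatically, for example from a notebook cell. The helper below is a hypothetical sketch, not part of the DevCloud tooling; the property names are the queue labels described above:

```python
import shlex

def build_qsub_command(script, properties, n_nodes=1):
    """Compose a qsub command requesting n_nodes nodes that have all of
    the given queue-label properties (hypothetical helper)."""
    spec = f"nodes={n_nodes}:" + ":".join(properties)
    return ["qsub", "-l", spec, script]

# Reproduces the example above:
print(shlex.join(build_qsub_command("myscript.sh", ["i5-6500te"])))
# qsub -l nodes=1:i5-6500te myscript.sh
```

On a development node you would pass the resulting list to `subprocess.run` (or join it into a `!qsub …` cell in Jupyter*) to submit the job.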

How to Use

To use the compute device from an application using OpenVINO™, initialize the IEPlugin object with the device argument listed in the "OpenVINO device" column. For example, in Python, the line for initializing the plugin would look something like this:

from openvino.inference_engine import IEPlugin
# Target the CPU; replace "CPU" with the device string for your accelerator
plugin = IEPlugin(device="CPU")
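The device argument is what steers inference onto a particular accelerator. As a sketch, one might keep a small lookup from DevCloud accelerator names to OpenVINO™ device strings; the identifiers below are the toolkit's commonly documented device names, but you should confirm them against the "OpenVINO device" column for your node:

```python
# Common OpenVINO device identifiers (an assumption based on the toolkit's
# documented device names; confirm against the "OpenVINO device" column).
DEVICE_STRINGS = {
    "cpu": "CPU",
    "gpu": "GPU",               # integrated HD/UHD Graphics
    "ncs2": "MYRIAD",           # Neural Compute Stick 2 (Myriad X VPU)
    "hddl": "HDDL",             # multi-Myriad Vision Accelerator Design cards
    "fpga": "HETERO:FPGA,CPU",  # FPGA with CPU fallback for unsupported layers
}

def device_string(accelerator):
    """Return the IEPlugin device argument for a DevCloud accelerator."""
    return DEVICE_STRINGS[accelerator]

# On the DevCloud: plugin = IEPlugin(device=device_string("ncs2"))
print(device_string("ncs2"))  # MYRIAD
```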

Floating-Point Models

The floating-point models available for each device are listed in the column "FP model". When you run the Model Optimizer, set the --data_type argument to one of the floating-point models available for your chosen device, e.g.:

/opt/intel/openvino/deployment_tools/model_optimizer/mo.py --input_model ... --data_type FP32
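As a sketch, the flag can be keyed off the target device. The FP16 requirement for Myriad™-based VPUs is well documented; the rest of the mapping below is an illustrative assumption, so check the "FP model" column for your device:

```python
# Sketch: choose the Model Optimizer --data_type flag for a target device.
MO = "/opt/intel/openvino/deployment_tools/model_optimizer/mo.py"

FP_MODEL = {
    "CPU": "FP32",
    "GPU": "FP16",     # integrated graphics also accepts FP32
    "MYRIAD": "FP16",  # NCS2 / Vision Accelerator cards require FP16
    "HDDL": "FP16",
}

def mo_command(input_model, device):
    """Build the mo.py command line for the given model and device."""
    return f"{MO} --input_model {input_model} --data_type {FP_MODEL[device]}"

print(mo_command("model.onnx", "MYRIAD"))
```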

About Edge Computing

The term edge computing refers to the placement of compute resources close to, or embedded within, smart endpoint devices. A wide range of AI solutions take advantage of edge computing to reduce latency, improve availability, and manage data privacy. Examples include autonomous driving, retail analytics, security, and industrial automation. Learn more about Intel's Edge Solutions.

Up Next:

Learn more about using the Intel® DevCloud