Intel® DevCloud for the Edge — Guide


Intel® DevCloud for the Edge is a cloud-based service designed to help developers prototype and experiment with computer vision applications using the Intel® Distribution of OpenVINO™ toolkit. Once registered, developers can access the tutorial series for Python and C++ based Jupyter* Notebooks, as well as various sample solutions that can be run directly from a web browser. Developers also have the option of creating their own Jupyter* Notebooks and trying them out on a range of Intel® hardware solutions designed specifically for deep learning inferencing.

Intel® DevCloud for the Edge provides you with access to everything you need to start working with sample applications, prototypes, and tutorials. This includes pre-trained models, source code, test input images, video, and data streams. You can use any of the pre-trained deep learning models available through the Intel® Distribution of OpenVINO™ toolkit or upload your own pre-trained deep learning models to develop and test your own computer vision applications.

Benefits of the Intel® DevCloud for the Edge

  1. Quick access to comprehensive Intel® development solutions, hardware, and software for deep learning and computer vision application development. All you need is an internet connection and an account.
  2. Access to fully configured physical edge machines pre-installed with the Intel® Distribution of OpenVINO™ toolkit (CPU, iGPU, VPU, and FPGA) hosted in the cloud and powered by Intel® Xeon® Scalable processors.
  3. Try before you buy! With Intel® DevCloud for the Edge you can evaluate various Intel® hardware acceleration options for your application before making any commitments.
  4. Access to a vast library of pre-trained models from the Intel® Distribution of OpenVINO™ toolkit and the ability to upload your own custom pre-trained models to evaluate the best framework, topology, and hardware acceleration solution for your application needs.

Introductory Tutorials

Try the tutorials in the Get Started section to learn about classification, object detection, and style transfer. These examples will run on the development machine CPU. To access other accelerator hardware, follow the Sample Applications in the Advanced section of DevCloud.

To explore the image classification, object detection, and style transfer samples, click Get Started → Tutorials.

These Jupyter* Notebook tutorials cover basic deep learning concepts using samples distributed with the Intel® Distribution of OpenVINO™ toolkit.

main interface

Start a tutorial by clicking on an icon or a Try it out link to open the Jupyter* Notebook in a separate browser tab.

Advanced Sample Applications

User Workflow

All reference samples in the Advanced section of DevCloud run in a Jupyter* Notebook environment on the development server. This server is powered by an Intel® Xeon® Scalable processor. After you submit your scripts, the Jupyter* Notebook places them in a job queue to run inference on a hosted edge compute server. You can select from a range of hardware acceleration options, such as GPU, VPU, or FPGA.

The figure below illustrates the user workflow for code development, job submission, and viewing results.

how it works diagram

STEPS 1 & 2: Development

  • Edit code from the reference samples provided in the Jupyter* Notebook.
  • Learn and run code step-by-step using OpenVINO™ to download pre-trained models. Prepare the models using the Model Optimizer.
  • Set up the path for the pre-recorded video or live stream.
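The development steps above can be sketched as a few shell commands. This is a sketch only: the model name, paths, and tool invocations are illustrative, and the notebook for each sample shows the exact commands to use.

```shell
# Sketch only: model name, paths, and tool names are illustrative.
MODEL="mobilenet-ssd"            # hypothetical pre-trained model
OUTPUT_DIR="models"
mkdir -p "$OUTPUT_DIR"

# Download the pre-trained model with the Open Model Zoo downloader
# (commented out here: the tool is only available where the Intel
# Distribution of OpenVINO toolkit is installed):
# downloader.py --name "$MODEL" -o "$OUTPUT_DIR"

# Convert the raw model to Intermediate Representation (IR) format
# with the Model Optimizer (also commented out for the same reason):
# mo.py --input_model "$OUTPUT_DIR/$MODEL.caffemodel" --output_dir "$OUTPUT_DIR/ir"

# Set the path to the pre-recorded video or live stream for the notebook:
export INPUT_VIDEO="$PWD/resources/sample-video.mp4"   # hypothetical path
echo "Input video set to: $INPUT_VIDEO"
```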

STEPS 3 & 4: Job Submission

  • Learn about the various hardware options available for use as edge accelerators: CPU, GPU, VPU, or FPGA.
  • Submit your application to the job queue to run inference on a specific edge compute node or on multiple edge compute nodes running simultaneously.
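A job submission can be sketched as follows. The job file contents, script name, and node properties are illustrative, not DevCloud-exact; each sample notebook lists the actual values.

```shell
# Sketch: a minimal job file plus its submission command. Names and
# node properties are illustrative.
cat > safety_gear_job.sh <<'EOF'
#!/bin/bash
# Arguments passed via qsub -F: output directory and target device.
OUTPUT_DIR=$1
DEVICE=$2            # e.g. CPU, GPU, MYRIAD (VPU)
mkdir -p "$OUTPUT_DIR"
python3 safety_gear_detection.py -o "$OUTPUT_DIR" -d "$DEVICE"
EOF
chmod +x safety_gear_job.sh

# Submit to the job queue; -l selects the edge node, -F passes arguments
# to the job script (commented out: qsub is only available on DevCloud):
# qsub safety_gear_job.sh -l nodes=1:tank-870:i5-6500te -F "results/ CPU"
```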

STEPS 5 & 6: Results

  • View your inference results in the Jupyter* Notebook.

sample applications in menu

Our reference samples feature a number of real-world use cases and are organized by market application. Access reference samples by clicking on Advanced → Sample Applications.

For example, the Safety Gear Detection reference sample provides a demonstration of inference being run on Intel® DevCloud for the Edge.

  1. Click on the icon or the Try It Out link to open the Jupyter* Notebook example in a separate browser tab.

safety gear sample
  2. Follow the instructions in the Jupyter* Notebook to experiment with the performance of various target accelerator solutions and to compare the results. More details about each sample application can be found in the Jupyter* Notebook.

Summary of Safety Gear Sample Application

Objective: Detect people in a pre-recorded video stream and determine whether they are wearing appropriate safety gear, such as a safety helmet and vest.

Step 0: Import the required Python packages to run the application and to view the original video without inference.

Step 1: Walks the developer through:

  • Using the OpenVINO™ toolkit's Model Optimizer to download and convert the pre-trained raw models to Intermediate Representation (IR) format.
  • Running inference on a single frame of the video and drawing bounding boxes.
  • At the end of this section, the developer should see a single image with bounding boxes around the detected safety gear (safety helmet, safety vest), as shown below. Note that this part of the application still runs on the development server (CPU only).

safety gear detection in an image

Step 2: Perform inference on the entire video clip by submitting jobs to a job server, which schedules the job requests on a queue and executes them on the requested target hardware as soon as it is available.

  • Learn how to write a job file.
  • Use the ‘pbsnodes’ command to determine the type of hardware (edge unit make, CPU generation, hardware acceleration, etc.) available to run the job.
  • Change the pre-recorded stream by changing the ‘os.environ’ path to the video.
  • Submit the job to run on an edge compute node with a specific CPU, iGPU, VPU, or FPGA, or on all of these hardware options simultaneously.
  • After the jobs execute on the edge, view the results for each hardware target.
  • Display a summary of Inference Engine processing time and Inference Engine FPS.

processing time chart

fps chart
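Switching the pre-recorded stream via ‘os.environ’, as described in the bullets above, can be sketched in Python. The variable name "VIDEO" and the file path here are illustrative; the notebook names the actual variable it reads.

```python
# Sketch: pointing the job at a different pre-recorded stream through
# os.environ. The variable name and path are illustrative.
import os

# The notebook sets the path once...
os.environ["VIDEO"] = "/data/reference-sample-data/safety-gear.mp4"

# ...and the job script reads the same variable at run time.
video_path = os.environ["VIDEO"]
print(video_path)  # → /data/reference-sample-data/safety-gear.mp4
```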

The graphs above show performance data generated when the final block of code in the notebook, Step 4 – Assess Performance, is run.
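The frames-per-second figures in charts like these can be derived from raw timings. The numbers below are made up for illustration, not DevCloud measurements.

```python
# Sketch: deriving FPS from inference timings per device.
# The timing values are hypothetical.
frame_count = 100
elapsed_seconds = {"CPU": 4.0, "GPU": 2.5, "MYRIAD": 8.0}  # made-up timings

fps = {device: frame_count / t for device, t in elapsed_seconds.items()}
for device, value in sorted(fps.items(), key=lambda kv: -kv[1]):
    print(f"{device}: {value:.1f} FPS")
```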

Connect and Create

Once you have completed this exercise, you are ready to create your own application. Click Advanced → Connect and Create.

connect and create in menu

Click on My Files. This will launch the Jupyter* Hub and show a file explorer view of your account in the Intel® DevCloud within your browser.

In Jupyter* Hub, click the New dropdown button and choose Terminal.

jupyter new notebook or terminal


When the terminal instance has launched, create a new directory and clone the GitHub repo.

mkdir MyProjects
cd MyProjects
git clone https://github.com/<my-organization>/<my-repo>.git
cd <my-repo>

Here, <my-organization> and <my-repo> are placeholders; replace them with your actual GitHub* organization and repository names. Now, you can view the code that you cloned from GitHub* in Jupyter* under ~/MyProjects/<my-repo>.

Technical Support

Technical support is available through the Intel® DevCloud for the Edge Forum.

Click Forum in the top right corner. This will open the Intel® Forum page in a separate browser tab. If you don’t already have an Intel® IDZ account, you will need to create one. Registration is quick, easy, and free.

devcloud forum

Ready to give it a whirl?

Roll up your sleeves and follow these step-by-step Jupyter* Notebook tutorials on using computer vision at the edge.