Intel® Tiber™ Developer Cloud for oneAPI


Before you begin, please consider reading the Get Started with the Intel® oneAPI Base Toolkit on the Intel® DevCloud document if you haven’t already done so.

Deep learning frameworks offer optimized building blocks to streamline designing, training, and validating deep neural networks through a high-level programming interface.

  • Build or customize deep learning frameworks using optimized deep learning libraries
  • Optimized for high performance on Intel CPUs and GPUs
  • Target single-node or multi-node distributed processing with common APIs that allow scaling across nodes

The following sample can be executed using either a JupyterLab* session or an SSH terminal. For more information on how to connect to the Intel DevCloud using Jupyter or SSH, visit https://devcloud.intel.com/oneapi/get_started/.

Jupyter

  1. Connect with JupyterLab
  2. Open a terminal.
  3. Download the samples.
    git clone https://github.com/oneapi-src/oneAPI-samples.git
  4. Go to the sample location.
    cd ~/oneAPI-samples/Libraries/oneDNN/tutorials
  5. Open the Jupyter-based getting started guide and follow along.
    tutorial_getting_started.ipynb

SSH Terminal


Download the Intel® oneAPI Base Toolkit Samples

  1. Connect to the DevCloud.
    ssh devcloud
  2. Download the samples.
    git clone https://github.com/oneapi-src/oneAPI-samples.git

Create the job scripts

  1. Go to the sample location.
    cd ~/oneAPI-samples/Libraries/oneDNN/getting_started
  2. Create a build.sh script with the following contents.
    #!/bin/bash
    source /opt/intel/inteloneapi/setvars.sh > /dev/null 2>&1
    mkdir build
    cd build
    cmake ..
    make
  3. Create a run.sh script with the following contents for executing the sample (a sketch for creating both scripts from the terminal follows this list).
    #!/bin/bash
    source /opt/intel/inteloneapi/setvars.sh > /dev/null 2>&1
    ./build/bin/getting-started-cpp cpu
    ./build/bin/getting-started-cpp gpu
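
If you are working in the SSH terminal, both scripts can be created without opening an editor by using shell here-documents. The following is a minimal sketch that simply reproduces the script contents shown above; note that the closing EOF markers must start at the beginning of the line, so these lines are not indented.

cat > build.sh <<'EOF'
#!/bin/bash
source /opt/intel/inteloneapi/setvars.sh > /dev/null 2>&1
mkdir build
cd build
cmake ..
make
EOF

cat > run.sh <<'EOF'
#!/bin/bash
source /opt/intel/inteloneapi/setvars.sh > /dev/null 2>&1
./build/bin/getting-started-cpp cpu
./build/bin/getting-started-cpp gpu
EOF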

Build

  1. Build the sample on a gpu node.
    qsub -l nodes=1:gpu:ppn=2 -d . build.sh
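
Before submitting, you can check that nodes advertising the gpu property are available. This is a sketch that assumes the TORQUE-style pbsnodes utility is present on the login node and that each node reports a properties line; the exact property names may differ.

    # Summarize the property strings advertised by the cluster nodes;
    # look for an entry containing "gpu" before requesting nodes=1:gpu.
    pbsnodes | grep properties | sort | uniq -c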

Run

  1. Run the sample on a gpu node.
    qsub -l nodes=1:gpu:ppn=2 -d . run.sh
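
The build and run jobs can also be submitted back to back, with run.sh held until the build finishes successfully. This is a sketch that assumes the scheduler honors standard PBS job dependencies (-W depend=afterok).

    # Submit the build job and capture the job ID that qsub prints.
    BUILD_ID=$(qsub -l nodes=1:gpu:ppn=2 -d . build.sh)
    # Submit the run job, held until the build job exits with status 0.
    qsub -l nodes=1:gpu:ppn=2 -d . -W depend=afterok:"$BUILD_ID" run.sh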

Monitor jobs

  1. In batch mode, the qsub commands return immediately, but the jobs themselves may take longer to complete. To inspect job progress, use the qstat utility.
    watch -n 1 qstat -n -1

    Note: The watch -n 1 command is used to run qstat -n -1 and display its results every second.
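
If a script should block until the job has finished rather than watching interactively, polling qstat also works. This sketch assumes JOBID holds the identifier returned by qsub and that the scheduler stops listing the job shortly after it completes.

    # Poll every 5 seconds until qstat no longer reports the job.
    while qstat "$JOBID" >/dev/null 2>&1; do
        sleep 5
    done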

Check the results

  1. Upon completion, the stderr and stdout of the job are written to disk:
    <script_name>.sh.eXXXX, which is the job stderr
    <script_name>.sh.oXXXX, which is the job stdout

    Here XXXX is the job ID (the numeric part of the identifier printed to the screen after each qsub command). A sketch that captures and reuses the ID follows this list.

  2. Inspect the output of the sample.
    cat run.sh.oXXXX
  3. Remove the stdout and stderr files.
    rm run.sh.*
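
Because the file suffix is the numeric part of the identifier that qsub prints, the inspection step can be scripted as well. This is a minimal sketch, assuming the run job is submitted from the sample directory as shown above.

    # Capture the full job ID (for example 123456.some-server) and keep the numeric part.
    JOBID=$(qsub -l nodes=1:gpu:ppn=2 -d . run.sh)
    NUM=${JOBID%%.*}
    # Wait for the output files to appear, then inspect and remove them.
    while [ ! -f run.sh.o"$NUM" ]; do sleep 5; done
    cat run.sh.o"$NUM" run.sh.e"$NUM"
    rm run.sh.o"$NUM" run.sh.e"$NUM"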

Additional information

More information about the architecture of the Intel® DevCloud and about the Portable Batch System can be found at the following locations: