Fundamentals of Accelerated Computing with CUDA Python

  • Start date: 29 October 2020 - 09:00
  • Duration: 8 hours
  • Location: Digital training
  • Language: English
  • Price (excl. VAT): 600.00

Training context

In the context of Deep Learning, the ICT Cluster organises workshops in collaboration with the NVIDIA Deep Learning Institute. A first one was organised at the end of 2019: Fundamentals of Deep Learning for Computer Vision.

This year, a second workshop takes you a step further:

GPUs have revolutionised computational performance in recent years. However, using a GPU to accelerate computations has long required an advanced, C-like programming model. Numba brings the benefits of accelerated computing to standard Python.

From number crunching to Deep Learning pre- and post-processing, GPUs can cut hours of processing down to mere minutes. This course teaches you how to accelerate your software using one of the most user-friendly programming models available.

Reach new competencies @CompetenceCentre · NVIDIA Deep Learning Institute

Objectives

This course is a one-day workshop that explores how to use Numba, the just-in-time, type-specialising Python function compiler, to accelerate Python programmes to run on massively parallel NVIDIA GPUs.

You will learn how to:

  • Use Numba to compile CUDA kernels from NumPy universal functions (ufuncs)
  • Use Numba to create and launch custom CUDA kernels
  • Apply key GPU memory management techniques

Upon completion, you will be able to use Numba to compile and launch CUDA kernels to accelerate your Python applications on NVIDIA GPUs.

Training programme

This is a digital training with the following agenda:

  • 09:00-09:30
    • Handle prerequisites with participants (software, accounts, etc.), explain the platform, agenda, goals, and schedule
  • 09:30-12:00
    • Module 1: Introduction to CUDA Python with Numba
  • 12:00-13:00 Lunch
  • 13:00-14:30
    • Module 2: Custom Kernels and Memory Management for CUDA Python with Numba
  • 14:30-15:00 Break
  • 15:00-17:00
    • Module 3: Multidimensional Grids and Shared Memory for CUDA Python with Numba
  • 17:00-17:15 Closing remarks
  • 17:15-18:00 Final assessment (optional, can be completed at your own pace)

All participants will receive access to the NVIDIA DLI cloud infrastructure, as well as the official DLI training material, for at least three months, so you can continue or revise the course at your own pace after the workshop ends. This also means that anyone can attend and complete the workshop, even with minimal prior experience. NVIDIA provides completion certificates after successful assessment of certain deliverables.

Accelerate with CUDA Python!

Speakers

Dr. Georgios Varisteas

  • Research Associate at the Interdisciplinary Centre for
    Security, Reliability, and Trust (SnT), at the University of Luxembourg
  • Lead engineer of the Self-Driving Car project

In 2018 he became an NVIDIA DLI Certified Instructor. His current research interests focus on Deep Learning methods for predictive and analytical tasks.
