In the context of Deep Learning, the ICT Cluster organizes workshops in collaboration with the NVIDIA Deep Learning Institute. The first, Fundamentals of Deep Learning for Computer Vision, was held at the end of 2019.
This year, a second workshop takes you a step further:
GPUs have revolutionized computational performance in recent years. However, using a GPU to accelerate computations has traditionally required learning a specialized, C-like programming model. Numba brings the benefits of accelerated computing to standard Python.
From number crunching to Deep Learning pre- and post-processing, GPUs can complete hours-long workloads in mere minutes. This course teaches you how to accelerate your own software with one of the most user-friendly programming models available.
This one-day workshop explores how to use Numba, the just-in-time, type-specializing Python function compiler, to accelerate Python programs so that they run on massively parallel NVIDIA GPUs.
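As a brief illustration (not taken from the course material), "just-in-time, type-specializing" means that Numba compiles a native-code version of an ordinary Python function the first time it is called, once per combination of argument types. The function name and arrays below are purely illustrative:

```python
import numpy as np
from numba import njit

@njit
def total(values):
    # A plain Python loop, compiled to machine code by Numba on first call.
    s = 0.0
    for v in values:
        s += v
    return s

total(np.arange(10, dtype=np.float64))  # first call: compiles a float64 specialization
total(np.arange(10, dtype=np.int64))    # new argument type: compiles a second specialization
print(total.signatures)                 # lists the type signatures compiled so far
```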
You will learn how to:
- Use Numba to compile CUDA kernels from NumPy universal functions (ufuncs)
- Use Numba to create and launch custom CUDA kernels
- Apply key GPU memory management techniques (see the short code sketch after this list)
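To make the three points above concrete, here is a minimal sketch using Numba's public CUDA API (numba.vectorize, numba.cuda.jit, and explicit device arrays). The function names, array sizes, and launch configuration are illustrative assumptions, not excerpts from the DLI exercises, and the code needs a CUDA-capable GPU to run:

```python
import numpy as np
from numba import vectorize, cuda

# 1. Compile a CUDA kernel from a NumPy universal function (ufunc):
#    the scalar expression below is applied element-wise on the GPU.
@vectorize(['float32(float32, float32)'], target='cuda')
def add_gpu(a, b):
    return a + b

# 2. Create and launch a custom CUDA kernel: each GPU thread handles one element.
@cuda.jit
def scale_kernel(arr, factor, out):
    i = cuda.grid(1)          # absolute index of this thread within the grid
    if i < arr.size:          # guard threads that fall beyond the array bounds
        out[i] = arr[i] * factor

# 3. Manage GPU memory explicitly to avoid repeated host<->device transfers.
n = 1_000_000
x = np.arange(n, dtype=np.float32)
y = np.ones(n, dtype=np.float32)

d_x = cuda.to_device(x)                 # copy inputs to the GPU once
d_y = cuda.to_device(y)
d_sum = add_gpu(d_x, d_y)               # the ufunc runs directly on device arrays
d_out = cuda.device_array_like(x)       # allocate the output on the device

threads_per_block = 128
blocks = (n + threads_per_block - 1) // threads_per_block
scale_kernel[blocks, threads_per_block](d_sum, 2.0, d_out)

result = d_out.copy_to_host()           # a single transfer back to the host
print(result[:5])                       # [ 2.  4.  6.  8. 10.]
```

Keeping the intermediate result (d_sum) on the device between the ufunc call and the kernel launch, and copying back only the final output, is the kind of memory management pattern the third objective refers to.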
Upon completion, you will be able to use Numba to compile and launch CUDA kernels to accelerate your Python applications on NVIDIA GPUs.
This is a digital training with the following agenda:
All participants will receive access to the NVIDIA DLI cloud infrastructure and the official DLI training material for at least three months, so you can continue or revise the course at your own pace after the workshop ends. This also means that anyone can attend and complete the workshop, even with minimal prior experience. NVIDIA issues completion certificates after successful assessment of certain deliverables.
In 2018 he became an NVIDIA DLI Certified Instructor. His current research interests focus on Deep Learning methods for predictive and analytical tasks.