NVIDIA DLI Workshop: Fundamentals of Accelerated Computing with CUDA C/C++, English
Friday, October 18, 2024
9:00 am - 5:00 pm
Open to current UH students, faculty and staff. CougarNet login required to register.
This workshop teaches the fundamental tools for accelerating C/C++ applications to run on massively parallel GPUs with CUDA®. You’ll learn how to write code, configure code parallelization with CUDA, and optimize memory migration between the CPU and the GPU accelerator. You’ll then apply the workflow you’ve learned to a new task: accelerating a fully functional, but CPU-only, particle simulator for observable massive performance gains. At the end of the workshop, you’ll have access to additional resources to create new GPU-accelerated applications on your own.
Learning Objectives:
At the conclusion of the workshop, you’ll have an understanding of the fundamental tools and techniques for GPU-accelerating C/C++ applications with CUDA and be able to:
- Write code to be executed by a GPU accelerator.
- Expose and express data and instruction-level parallelism in C/C++ applications using CUDA.
- Utilize CUDA-managed memory and optimize memory migration using asynchronous prefetching.
- Leverage command line and visual profilers to guide your work.
- Utilize concurrent streams for instruction-level parallelism.
- Write GPU-accelerated CUDA C/C++ applications, or refactor existing CPU-only applications, using a profile-driven approach.
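As an illustration only (not workshop material), the first three objectives can be sketched in a single minimal CUDA C++ program: a kernel executed by the GPU, CUDA-managed memory, and asynchronous prefetching between host and device. The kernel name `doubleElements` and the problem size are invented for this sketch.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Kernel: each GPU thread doubles one or more elements
// (a grid-stride loop handles any array size safely).
__global__ void doubleElements(float *a, int n) {
    for (int i = blockIdx.x * blockDim.x + threadIdx.x;
         i < n;
         i += gridDim.x * blockDim.x)
        a[i] *= 2.0f;
}

int main() {
    const int N = 1 << 20;
    float *a;

    // CUDA-managed (unified) memory: accessible from both CPU and GPU.
    cudaMallocManaged(&a, N * sizeof(float));
    for (int i = 0; i < N; ++i) a[i] = 1.0f;

    int device;
    cudaGetDevice(&device);
    // Asynchronous prefetch to the GPU to avoid on-demand page faults.
    cudaMemPrefetchAsync(a, N * sizeof(float), device);

    // Launch configuration: enough blocks of 256 threads to cover N.
    int threads = 256;
    int blocks = (N + threads - 1) / threads;
    doubleElements<<<blocks, threads>>>(a, N);
    cudaDeviceSynchronize();

    // Prefetch back to the host before the CPU reads the results.
    cudaMemPrefetchAsync(a, N * sizeof(float), cudaCpuDeviceId);
    cudaDeviceSynchronize();

    printf("a[0] = %f\n", a[0]);
    cudaFree(a);
    return 0;
}
```

Compiled with `nvcc`, this is the shape of program the workshop builds up to, before profiling it and adding concurrent streams.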
Price:
Free for UH affiliates (otherwise $500)
Languages:
English, Chinese, Japanese
Prerequisites:
Basic C/C++ competency, including familiarity with variable types, loops, conditional statements, functions, and array manipulations. No previous knowledge of CUDA programming is assumed.
Tools, Libraries and Frameworks:
nvprof, nvvp
Assessment Type:
Code-based
Certificate:
Upon successful completion of the assessment, participants will receive an NVIDIA DLI certificate to recognize their subject matter competency and support professional career growth.

Location:
Online