NVIDIA GPU Programming - Extended Training Course
This instructor-led, live training course provides comprehensive guidance on programming GPUs for parallel computing. Participants will learn how to utilize various platforms, work with the CUDA platform and its features, and apply diverse optimization techniques using CUDA. Key application areas include deep learning, analytics, image processing, and engineering solutions.
This course is available as onsite live training in Sweden or online live training.
Course Outline
Introduction
Understanding the Fundamentals of Heterogeneous Computing Methodology
Why Parallel Computing? Understanding the Need for Parallelism
Multi-Core Processors - Architecture and Design
Introduction to Threads and Basic Concepts of Parallel Programming
Understanding the Fundamentals of GPU Software Optimization Processes
OpenMP - A Standard for Directive-Based Parallel Programming
Hands on / Demonstration of Various Programs on Multicore Machines
Introduction to GPU Computing
GPUs for Parallel Computing
The GPU Programming Model
Hands on / Demonstration of Various Programs on GPU
SDK, Toolkit and Installation of Environment for GPU
Working with Various Libraries
Demonstration of GPU and Tools with Sample Programs and OpenACC
Understanding the CUDA Programming Model
Learning the CUDA Architecture
Exploring and Setting Up the CUDA Development Environments
Working with the CUDA Runtime API
Understanding the CUDA Memory Model
Exploring Additional CUDA API Features
Accessing Global Memory Efficiently in CUDA (Global Memory Optimization)
Optimizing Data Transfers in CUDA Using CUDA Streams
Using Shared Memory in CUDA
Understanding and Using Atomic Operations and Instructions in CUDA
Case Study: Basic Digital Image Processing with CUDA
Working with Multi-GPU Programming
Advanced Hardware Profiling and Sampling on NVIDIA / CUDA
Using CUDA Dynamic Parallelism API for Dynamic Kernel Launch
Summary and Conclusion
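As a flavor of the hands-on modules above (kernel launch, the CUDA memory model, and host-to-device transfers), here is a minimal vector-addition sketch; it is illustrative only and not taken from the course materials:

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Each thread adds one pair of elements.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Host buffers.
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device buffers and host-to-device copies.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256, blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(da, db, dc, n);

    // Copy the result back to the host and spot-check it.
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);  // expect 3.0

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```

The course builds on this pattern with streams, shared memory, and profiling-driven optimization.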
Requirements
- C Programming
- Linux GCC
Open Training Courses require 5+ participants.
Testimonials (1)
Trainer's energy and humor.
Tadeusz Kaluba - Nokia Solutions and Networks Sp. z o.o.
Course - NVIDIA GPU Programming - Extended
Related Courses
Developing AI Applications with Huawei Ascend and CANN
21 Hours
Huawei Ascend comprises a range of AI processors engineered for high-performance inference and training.
This instructor-led training (available online or onsite) targets intermediate-level AI engineers and data scientists looking to develop and optimize neural network models using Huawei’s Ascend platform alongside the CANN toolkit.
Upon completion of this training, participants will be capable of:
- Establishing and configuring the CANN development environment.
- Creating AI applications through MindSpore and CloudMatrix workflows.
- Enhancing performance on Ascend NPUs via tiling and custom operators.
- Deploying models into edge or cloud environments.
Course Format
- Interactive lectures and discussions.
- Practical application of Huawei Ascend and the CANN toolkit in sample scenarios.
- Guided exercises centered on model building, training, and deployment.
Customization Options
- For customized training tailored to your specific infrastructure or datasets, please contact us to make arrangements.
Deploying AI Models with CANN and Ascend AI Processors
14 Hours
CANN (Compute Architecture for Neural Networks) serves as Huawei’s AI compute stack, designed for deploying and optimizing AI models on Ascend AI processors.
This instructor-led live training (available online or onsite) targets intermediate-level AI developers and engineers seeking to efficiently deploy trained AI models to Huawei Ascend hardware. The course utilizes the CANN toolkit alongside tools such as MindSpore, TensorFlow, or PyTorch.
Upon completion of this training, participants will be able to:
- Comprehend the CANN architecture and its significance within the AI deployment pipeline.
- Convert and adapt models from popular frameworks into Ascend-compatible formats.
- Utilize tools such as ATC, OM model conversion, and MindSpore for both edge and cloud inference.
- Diagnose deployment issues and optimize performance on Ascend hardware.
Course Format
- Interactive lectures combined with live demonstrations.
- Hands-on lab exercises using CANN tools and Ascend simulators or devices.
- Practical deployment scenarios grounded in real-world AI models.
Customization Options
- To request customized training for this course, please contact us to make arrangements.
AI Inference and Deployment with CloudMatrix
21 Hours
CloudMatrix is Huawei’s unified platform for AI development and deployment, engineered to facilitate scalable, production-ready inference pipelines.
This instructor-led training, available either online or on-site, targets AI professionals at beginner to intermediate levels who aim to deploy and monitor AI models leveraging the CloudMatrix platform alongside CANN and MindSpore integration.
Upon completion of this training, participants will be capable of:
- Utilizing CloudMatrix for model packaging, deployment, and serving.
- Converting and optimizing models specifically for Ascend chipsets.
- Establishing pipelines for both real-time and batch inference tasks.
- Monitoring deployments and optimizing performance within production environments.
Course Format
- Interactive lectures accompanied by discussions.
- Practical application of CloudMatrix through real-world deployment scenarios.
- Guided exercises emphasizing conversion, optimization, and scaling.
Course Customization Options
- For customized training tailored to your specific AI infrastructure or cloud environment, please contact us to arrange.
GPU Programming on Biren AI Accelerators
21 Hours
Biren AI Accelerators are high-performance GPUs engineered for AI and HPC workloads, supporting large-scale training and inference.
This instructor-led, live training (online or onsite) targets intermediate to advanced developers who aim to program and optimize applications using Biren’s proprietary GPU stack, with practical comparisons to CUDA-based environments.
By the end of this training, participants will be able to:
- Comprehend Biren GPU architecture and memory hierarchy.
- Configure the development environment and utilize Biren’s programming model.
- Translate and optimize CUDA-style code for Biren platforms.
- Implement performance tuning and debugging techniques.
Format of the Course
- Interactive lecture and discussion.
- Hands-on use of Biren SDK in sample GPU workloads.
- Guided exercises focused on porting and performance tuning.
Course Customization Options
- To request a customized training for this course based on your application stack or integration needs, please contact us to arrange.
Cambricon MLU Development with BANGPy and Neuware
21 Hours
Cambricon MLUs (Machine Learning Units) are specialized AI processors optimized for both inference and training tasks in edge computing and data center environments.
This instructor-led live training, available online or onsite, is designed for intermediate-level developers who want to build and deploy AI models utilizing the BANGPy framework and Neuware SDK on Cambricon MLU hardware.
Upon completion of this training, participants will be able to:
- Set up and configure development environments for BANGPy and Neuware.
- Develop and optimize models written in Python and C++ for Cambricon MLUs.
- Deploy models to edge and data center devices operating on the Neuware runtime.
- Integrate machine learning workflows with MLU-specific acceleration capabilities.
Course Format
- Interactive lectures and discussions.
- Practical application of BANGPy and Neuware for development and deployment.
- Guided exercises focusing on optimization, integration, and testing.
Customization Options
- For customized training tailored to your specific Cambricon device model or use case, please contact us to arrange.
Introduction to CANN for AI Framework Developers
7 Hours
CANN (Compute Architecture for Neural Networks) serves as Huawei’s specialized toolkit for AI computing, enabling the compilation, optimization, and deployment of AI models on Ascend AI processors.
This instructor-led live training session, available in both online and onsite formats, is designed for beginner-level AI developers. The course helps participants grasp how CANN integrates into the model lifecycle—from training through to deployment—and demonstrates its interoperability with popular frameworks such as MindSpore, TensorFlow, and PyTorch.
Upon completing this training, participants will be able to:
- Comprehend the purpose and underlying architecture of the CANN toolkit.
- Configure a development environment utilizing CANN alongside MindSpore.
- Convert and deploy a basic AI model onto Ascend hardware.
- Acquire foundational knowledge to support future CANN optimization or integration initiatives.
Course Format
- Engaging lectures combined with interactive discussions.
- Practical hands-on labs focused on simple model deployment.
- Step-by-step guidance through the CANN toolchain and its integration points.
Customization Options
- For inquiries regarding customized training for this course, please reach out to us to arrange details.
CANN for Edge AI Deployment
14 Hours
Huawei's Ascend CANN toolkit facilitates powerful AI inference on edge devices, including the Ascend 310. It provides crucial tools for compiling, optimizing, and deploying models in environments with limited compute and memory resources.
This instructor-led live training (available online or onsite) targets intermediate-level AI developers and integrators who want to deploy and optimize models on Ascend edge devices using the CANN toolchain.
Upon completion of this training, participants will be able to:
- Prepare and convert AI models for the Ascend 310 using CANN tools.
- Construct lightweight inference pipelines utilizing MindSpore Lite and AscendCL.
- Enhance model performance in resource-constrained compute and memory settings.
- Deploy and monitor AI applications in real-world edge scenarios.
Course Format
- Interactive lectures and demonstrations.
- Hands-on laboratory exercises focused on edge-specific models and scenarios.
- Live deployment examples on virtual or physical edge hardware.
Customization Options
- To request a customized training version of this course, please contact us to arrange details.
Understanding Huawei’s AI Compute Stack: From CANN to MindSpore
14 Hours
Huawei’s AI stack, spanning from the low-level CANN SDK to the high-level MindSpore framework, delivers a tightly integrated AI development and deployment environment optimized for Ascend hardware.
This instructor-led, live training (available online or on-site) is designed for beginner-to-intermediate level technical professionals seeking to understand how the CANN and MindSpore components collaborate to support AI lifecycle management and infrastructure decisions.
Upon completing this training, participants will be able to:
- Comprehend the layered architecture of Huawei’s AI computing stack.
- Recognize how CANN facilitates model optimization and hardware-level deployment.
- Assess the MindSpore framework and toolchain in comparison to industry alternatives.
- Position Huawei's AI stack within enterprise or cloud/on-premises environments.
Course Format
- Interactive lectures and discussions.
- Live system demonstrations and case-based walkthroughs.
- Optional guided labs on the model flow from MindSpore to CANN.
Course Customization Options
- To request customized training for this course, please contact us to arrange.
Optimizing Neural Network Performance with CANN SDK
14 Hours
CANN SDK (Compute Architecture for Neural Networks) serves as Huawei’s foundational AI compute platform, empowering developers to fine-tune and maximize the performance of neural networks deployed on Ascend AI processors.
This instructor-led live training session, available both online and onsite, is designed for advanced-level AI developers and system engineers eager to enhance inference performance utilizing CANN’s sophisticated toolset. Key areas include the Graph Engine, TIK, and custom operator development.
Upon completion of this training, participants will be capable of:
- Gaining a comprehensive understanding of CANN's runtime architecture and performance lifecycle.
- Employing profiling tools and the Graph Engine to conduct performance analysis and optimization.
- Developing and optimizing custom operators using TIK and TVM.
- Identifying memory bottlenecks and enhancing model throughput.
Course Format
- Interactive lectures accompanied by discussions.
- Practical labs featuring real-time profiling and operator tuning.
- Optimization exercises utilizing deployment examples for edge cases.
Course Customization Options
- To request tailored training for this course, please contact us to arrange a session.
CANN SDK for Computer Vision and NLP Pipelines
14 Hours
The CANN SDK (Compute Architecture for Neural Networks) delivers robust deployment and optimization capabilities for real-time AI applications in computer vision and natural language processing, particularly when leveraging Huawei Ascend hardware.
This instructor-led training, available online or onsite, is designed for intermediate-level AI professionals seeking to build, deploy, and optimize vision and language models using the CANN SDK for production environments.
Upon completion of this training, participants will be able to:
- Deploy and optimize CV and NLP models using CANN and AscendCL.
- Utilize CANN tools to convert models and integrate them into live pipelines.
- Enhance inference performance for tasks such as detection, classification, and sentiment analysis.
- Construct real-time CV/NLP pipelines suitable for both edge and cloud deployment scenarios.
Course Format
- Interactive lectures combined with practical demonstrations.
- Hands-on labs focusing on model deployment and performance profiling.
- Live pipeline design exercises using real-world CV and NLP use cases.
Course Customization Options
- For customized training arrangements, please contact us to discuss your specific needs.
Building Custom AI Operators with CANN TIK and TVM
14 Hours
CANN TIK (Tensor Instruction Kernel) and Apache TVM facilitate advanced optimization and customization of AI model operators for Huawei Ascend hardware.
This instructor-led live training (available online or onsite) is designed for advanced system developers who aim to build, deploy, and tune custom operators for AI models utilizing CANN’s TIK programming model and TVM compiler integration.
Upon completion of this training, participants will be equipped to:
- Write and test custom AI operators using the TIK DSL for Ascend processors.
- Integrate custom operators into the CANN runtime and execution graph.
- Leverage TVM for operator scheduling, auto-tuning, and benchmarking.
- Debug and optimize instruction-level performance for specific custom computation patterns.
Course Format
- Interactive lectures and demonstrations.
- Practical coding exercises for operators using TIK and TVM pipelines.
- Testing and tuning on Ascend hardware or simulators.
Customization Options
- To request customized training for this course, please contact us to arrange.
Migrating CUDA Applications to Chinese GPU Architectures
21 Hours
Chinese GPU architectures, including Huawei Ascend, Biren, and Cambricon MLUs, provide CUDA alternatives specifically designed for the local AI and HPC markets.
This instructor-led live training (available online or onsite) targets advanced GPU programmers and infrastructure specialists seeking to migrate and optimize existing CUDA applications for deployment on Chinese hardware platforms.
Upon completion of this training, participants will be able to:
- Assess the compatibility of current CUDA workloads with Chinese chip alternatives.
- Port CUDA codebases to Huawei CANN, Biren SDK, and Cambricon BANGPy environments.
- Compare performance metrics and identify optimization opportunities across different platforms.
- Address practical challenges related to cross-architecture support and deployment.
Course Format
- Interactive lectures and discussions.
- Hands-on labs focused on code translation and performance comparison.
- Guided exercises emphasizing multi-GPU adaptation strategies.
Customization Options
- For customized training tailored to your specific platform or CUDA project, please contact us to arrange a session.
Performance Optimization on Ascend, Biren, and Cambricon
21 Hours
Ascend, Biren, and Cambricon stand out as premier AI hardware platforms in China, providing distinct acceleration and profiling capabilities tailored for large-scale AI operations.
This instructor-led live training, available both online and onsite, is designed for advanced AI infrastructure and performance engineers seeking to enhance model inference and training workflows across these prominent Chinese AI chip ecosystems.
Upon completion of this training, participants will be equipped to:
- Evaluate model performance on Ascend, Biren, and Cambricon platforms.
- Pinpoint system bottlenecks and identify inefficiencies in memory and compute resources.
- Implement optimizations at the graph, kernel, and operator levels.
- Refine deployment pipelines to enhance throughput and reduce latency.
Course Format
- Interactive lectures and discussions.
- Practical application of profiling and optimization tools on each respective platform.
- Guided exercises targeting real-world tuning scenarios.
Customization Options
- For customized training tailored to your specific performance environment or model type, please contact us to arrange.