AMD GPU Programming Training Course
ROCm is an open-source platform for programming AMD GPUs that also provides compatibility layers for CUDA and OpenCL. ROCm exposes hardware details to the programmer and gives full control over the parallelization process; however, this requires a good understanding of the device architecture, memory model, execution model, and optimization techniques.
HIP is a C++ runtime API and kernel language for writing portable code that runs on both AMD and NVIDIA GPUs. HIP provides a thin abstraction layer over the native GPU APIs (ROCm and CUDA) and lets you leverage existing GPU libraries and tools.
This instructor-led, live training (online or onsite) is aimed at beginner-level to intermediate-level developers who wish to use ROCm and HIP to program AMD GPUs and exploit their parallelism.
By the end of this training, participants will be able to:
- Set up a development environment that includes the ROCm platform, an AMD GPU, and Visual Studio Code.
- Create a basic ROCm program that performs vector addition on the GPU and retrieves the results from GPU memory (see the sketch after this list).
- Use ROCm API to query device information, allocate and deallocate device memory, copy data between host and device, launch kernels, and synchronize threads.
- Use HIP language to write kernels that execute on the GPU and manipulate data.
- Use HIP built-in functions, variables, and libraries to perform common tasks and operations.
- Use ROCm and HIP memory spaces, such as global, shared, constant, and local, to optimize data transfers and memory accesses.
- Use ROCm and HIP execution models to control the threads, blocks, and grids that define the parallelism.
- Debug and test ROCm and HIP programs using tools such as ROCm Debugger and ROCm Profiler.
- Optimize ROCm and HIP programs using techniques such as coalescing, caching, prefetching, and profiling.
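For a taste of what the vector-addition objective looks like in practice, below is a minimal sketch of a complete HIP program. The kernel name, array size, and launch configuration are illustrative choices, not prescribed by the course, and error checking is omitted here for brevity (it is covered in the ROCm API module).

```cpp
#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

// Kernel: each thread adds one pair of elements.
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                          // 1M elements (illustrative)
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n);

    // Allocate device memory and copy the inputs host-to-device.
    float *dA, *dB, *dC;
    hipMalloc(&dA, n * sizeof(float));
    hipMalloc(&dB, n * sizeof(float));
    hipMalloc(&dC, n * sizeof(float));
    hipMemcpy(dA, a.data(), n * sizeof(float), hipMemcpyHostToDevice);
    hipMemcpy(dB, b.data(), n * sizeof(float), hipMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    const int block = 256;
    const int grid  = (n + block - 1) / block;
    hipLaunchKernelGGL(vecAdd, dim3(grid), dim3(block), 0, 0, dA, dB, dC, n);

    // Copy the result back; hipMemcpy synchronizes with the default stream.
    hipMemcpy(c.data(), dC, n * sizeof(float), hipMemcpyDeviceToHost);
    printf("c[0] = %.1f (expected 3.0)\n", c[0]);

    hipFree(dA); hipFree(dB); hipFree(dC);
    return 0;
}
```

Compiled with hipcc, the same source can also be built for NVIDIA GPUs when HIP is configured for the CUDA backend.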
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Course Outline
Introduction
- What is ROCm?
- What is HIP?
- ROCm vs CUDA vs OpenCL
- Overview of ROCm and HIP features and architecture
- Setting up the Development Environment
Getting Started
- Creating a new ROCm project using Visual Studio Code
- Exploring the project structure and files
- Compiling and running the program
- Displaying the output using printf and fprintf (illustrated in the sketch after this list)
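As a sanity check for this stage, a first program might do no more than query the runtime and print the result. A minimal sketch follows; the file name and the invocation hipcc main.cpp -o main are typical, not mandated:

```cpp
#include <hip/hip_runtime.h>
#include <cstdio>

int main() {
    int count = 0;
    // Query how many HIP-capable devices the runtime can see.
    hipError_t err = hipGetDeviceCount(&count);
    if (err != hipSuccess) {
        // fprintf sends diagnostics to stderr, as covered in this module.
        fprintf(stderr, "hipGetDeviceCount failed: %s\n", hipGetErrorString(err));
        return 1;
    }
    printf("Found %d HIP device(s)\n", count);
    return 0;
}
```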
ROCm API
- Understanding the role of ROCm API in the host program
- Using ROCm API to query device information and capabilities
- Using ROCm API to allocate and deallocate device memory
- Using ROCm API to copy data between host and device
- Using ROCm API to launch kernels and synchronize threads
- Using ROCm API to handle errors and exceptions (see the sketch after this list)
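A sketch of these host-side calls is shown below; the HIP_CHECK macro is a common error-handling convention, not part of the ROCm API itself:

```cpp
#include <hip/hip_runtime.h>
#include <cstdio>
#include <cstdlib>

// Convenience macro: abort with a readable message if a HIP call fails.
#define HIP_CHECK(cmd) do {                                         \
    hipError_t e = (cmd);                                           \
    if (e != hipSuccess) {                                          \
        fprintf(stderr, "HIP error '%s' at %s:%d\n",                \
                hipGetErrorString(e), __FILE__, __LINE__);          \
        exit(EXIT_FAILURE);                                         \
    }                                                               \
} while (0)

int main() {
    // Query device information and capabilities.
    hipDeviceProp_t prop;
    HIP_CHECK(hipGetDeviceProperties(&prop, 0));
    printf("Device 0: %s, %zu MiB global memory, wavefront size %d\n",
           prop.name, prop.totalGlobalMem >> 20, prop.warpSize);

    // Allocate device memory, copy data host-to-device, and clean up.
    float host[4] = {0.f, 1.f, 2.f, 3.f};
    float* dev = nullptr;
    HIP_CHECK(hipMalloc(&dev, sizeof(host)));
    HIP_CHECK(hipMemcpy(dev, host, sizeof(host), hipMemcpyHostToDevice));
    HIP_CHECK(hipDeviceSynchronize());   // wait for outstanding device work
    HIP_CHECK(hipFree(dev));
    return 0;
}
```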
HIP Language
- Understanding the role of HIP language in the device program
- Using HIP language to write kernels that execute on the GPU and manipulate data
- Using HIP data types, qualifiers, operators, and expressions
- Using HIP built-in functions, variables, and libraries to perform common tasks and operations (see the sketch after this list)
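As an illustration of the kernel language, the sketch below combines a __device__ helper, a __global__ kernel, and the built-in index variables; the SAXPY operation is an arbitrary example:

```cpp
#include <hip/hip_runtime.h>

// __device__ functions run on the GPU and are callable only from device code.
__device__ float axpy(float a, float x, float y) {
    return a * x + y;
}

// __global__ marks a kernel: launched from the host, executed by many threads.
__global__ void saxpy(float a, const float* x, float* y, int n) {
    // Built-in variables give each thread its coordinates within the launch;
    // HIP also accepts the hipThreadIdx_x / hipBlockIdx_x spellings.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        y[i] = axpy(a, x[i], y[i]);
    }
}
```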
ROCm and HIP Memory Model
- Understanding the difference between host and device memory models
- Using ROCm and HIP memory spaces, such as global, shared, constant, and local (see the shared-memory sketch after this list)
- Using ROCm and HIP memory objects, such as pointers, arrays, textures, and surfaces
- Using ROCm and HIP memory access modes, such as read-only, write-only, read-write, etc.
- Using ROCm and HIP memory consistency model and synchronization mechanisms
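To make the shared-memory space concrete, here is a minimal sketch of a block-level sum that stages data in __shared__ memory and synchronizes with __syncthreads; it assumes a launch with exactly 256 threads per block:

```cpp
#include <hip/hip_runtime.h>

// Each block reduces its 256 inputs to one partial sum.
// Assumes blockDim.x == 256 (a power of two).
__global__ void blockSum(const float* in, float* partial, int n) {
    __shared__ float cache[256];          // shared: visible to the whole block
    int tid = threadIdx.x;
    int i   = blockIdx.x * blockDim.x + tid;

    cache[tid] = (i < n) ? in[i] : 0.0f;  // stage global data in shared memory
    __syncthreads();                      // all stores visible before reducing

    // Tree reduction within the block, halving the active threads each pass.
    for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
        if (tid < stride) cache[tid] += cache[tid + stride];
        __syncthreads();
    }
    if (tid == 0) partial[blockIdx.x] = cache[0];   // one value per block
}
```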
ROCm and HIP Execution Model
- Understanding the difference between host and device execution models
- Using ROCm and HIP threads, blocks, and grids to define the parallelism (see the grid-stride sketch after this list)
- Using ROCm and HIP thread functions, such as hipThreadIdx_x, hipBlockIdx_x, hipBlockDim_x, etc.
- Using ROCm and HIP block functions, such as __syncthreads, __threadfence_block, etc.
- Using ROCm and HIP grid functions, such as hipGridDim_x, and grid-wide synchronization via cooperative groups
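One idiom these variables enable is the grid-stride loop, which decouples the data size from the launch configuration so the same kernel works for any grid; a sketch:

```cpp
#include <hip/hip_runtime.h>

// Grid-stride loop: even if the grid has fewer threads than n,
// each thread strides forward so every element is processed.
__global__ void scale(float* data, float factor, int n) {
    int stride = gridDim.x * blockDim.x;   // total threads in the grid
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n; i += stride) {
        data[i] *= factor;
    }
}
```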
Debugging
- Understanding the common errors and bugs in ROCm and HIP programs (a device-side printf sketch follows this list)
- Using Visual Studio Code debugger to inspect variables, breakpoints, call stack, etc.
- Using ROCm Debugger to debug ROCm and HIP programs on AMD devices
- Using ROCm Profiler to analyze ROCm and HIP programs on AMD devices
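Alongside the dedicated tools, HIP supports printf from device code, which is often the quickest first diagnostic; a sketch (printing from one thread per block is just a convention to keep the output readable):

```cpp
#include <hip/hip_runtime.h>
#include <cstdio>

// Device-side printf: handy for spot-checking values inside a kernel.
__global__ void inspect(const float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (threadIdx.x == 0 && i < n) {      // limit output to one thread per block
        printf("block %d sees data[%d] = %f\n", blockIdx.x, i, data[i]);
    }
}
```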
Optimization
- Understanding the factors that affect the performance of ROCm and HIP programs
- Using ROCm and HIP coalescing techniques to improve memory throughput (contrasted in the sketch after this list)
- Using ROCm and HIP caching and prefetching techniques to reduce memory latency
- Using ROCm and HIP shared memory and local memory techniques to optimize memory accesses and bandwidth
- Using ROCm and HIP profiling techniques and tools to measure and improve execution time and resource utilization
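To make the coalescing idea concrete, the two kernels below copy the same 2D array but differ only in their access pattern; on most GPUs the coalesced version is markedly faster because consecutive threads touch consecutive addresses (the layouts are illustrative):

```cpp
#include <hip/hip_runtime.h>

// Coalesced: threads with consecutive x indices touch consecutive addresses.
__global__ void copyCoalesced(const float* in, float* out, int width, int height) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < width && y < height) {
        out[y * width + x] = in[y * width + x];
    }
}

// Strided: consecutive threads are 'height' elements apart, wasting bandwidth.
__global__ void copyStrided(const float* in, float* out, int width, int height) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < width && y < height) {
        out[x * height + y] = in[x * height + y];
    }
}
```

Timing both variants with ROCm Profiler is a typical exercise for observing the bandwidth difference directly.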
Summary and Next Steps
Requirements
- An understanding of the C/C++ language and parallel programming concepts
- Basic knowledge of computer architecture and memory hierarchy
- Experience with command-line tools and code editors
Audience
- Developers who wish to learn how to use ROCm and HIP to program AMD GPUs and exploit their parallelism
- Developers who wish to write high-performance and scalable code that can run on different AMD devices
- Programmers who wish to explore the low-level aspects of GPU programming and optimize their code performance
Open Training Courses require 5+ participants.
Related Courses
Developing AI Applications with Huawei Ascend and CANN
21 Hours
Huawei Ascend comprises a range of AI processors engineered for high-performance inference and training.
This instructor-led training (available online or onsite) targets intermediate-level AI engineers and data scientists looking to develop and optimize neural network models using Huawei’s Ascend platform alongside the CANN toolkit.
Upon completion of this training, participants will be capable of:
- Establishing and configuring the CANN development environment.
- Creating AI applications through MindSpore and CloudMatrix workflows.
- Enhancing performance on Ascend NPUs via tiling and custom operators.
- Deploying models into edge or cloud environments.
Course Format
- Interactive lectures and discussions.
- Practical application of Huawei Ascend and the CANN toolkit in sample scenarios.
- Guided exercises centered on model building, training, and deployment.
Customization Options
- For customized training tailored to your specific infrastructure or datasets, please contact us to make arrangements.
Deploying AI Models with CANN and Ascend AI Processors
14 Hours
CANN (Compute Architecture for Neural Networks) serves as Huawei’s AI compute stack, designed for deploying and optimizing AI models on Ascend AI processors.
This instructor-led live training (available online or onsite) targets intermediate-level AI developers and engineers seeking to efficiently deploy trained AI models to Huawei Ascend hardware. The course utilizes the CANN toolkit alongside tools such as MindSpore, TensorFlow, or PyTorch.
Upon completion of this training, participants will be able to:
- Comprehend the CANN architecture and its significance within the AI deployment pipeline.
- Convert and adapt models from popular frameworks into Ascend-compatible formats.
- Utilize tools such as ATC, OM model conversion, and MindSpore for both edge and cloud inference.
- Diagnose deployment issues and optimize performance on Ascend hardware.
Course Format
- Interactive lectures combined with live demonstrations.
- Hands-on lab exercises using CANN tools and Ascend simulators or devices.
- Practical deployment scenarios grounded in real-world AI models.
Customization Options
- To request customized training for this course, please contact us to make arrangements.
AI Inference and Deployment with CloudMatrix
21 Hours
CloudMatrix is Huawei’s unified platform for AI development and deployment, engineered to facilitate scalable, production-ready inference pipelines.
This instructor-led training, available either online or on-site, targets AI professionals at beginner to intermediate levels who aim to deploy and monitor AI models leveraging the CloudMatrix platform alongside CANN and MindSpore integration.
Upon completion of this training, participants will be capable of:
- Utilizing CloudMatrix for model packaging, deployment, and serving.
- Converting and optimizing models specifically for Ascend chipsets.
- Establishing pipelines for both real-time and batch inference tasks.
- Monitoring deployments and optimizing performance within production environments.
Course Format
- Interactive lectures accompanied by discussions.
- Practical application of CloudMatrix through real-world deployment scenarios.
- Guided exercises emphasizing conversion, optimization, and scaling.
Course Customization Options
- For customized training tailored to your specific AI infrastructure or cloud environment, please contact us to arrange.
GPU Programming on Biren AI Accelerators
21 Hours
Biren AI Accelerators are high-performance GPUs engineered for AI and HPC workloads, supporting large-scale training and inference.
This instructor-led, live training (online or onsite) targets intermediate to advanced developers who aim to program and optimize applications using Biren’s proprietary GPU stack, with practical comparisons to CUDA-based environments.
By the end of this training, participants will be able to:
- Comprehend Biren GPU architecture and memory hierarchy.
- Configure the development environment and utilize Biren’s programming model.
- Translate and optimize CUDA-style code for Biren platforms.
- Implement performance tuning and debugging techniques.
Format of the Course
- Interactive lecture and discussion.
- Hands-on use of Biren SDK in sample GPU workloads.
- Guided exercises focused on porting and performance tuning.
Course Customization Options
- To request a customized training for this course based on your application stack or integration needs, please contact us to arrange.
Cambricon MLU Development with BANGPy and Neuware
21 Hours
Cambricon MLUs (Machine Learning Units) are specialized AI processors optimized for both inference and training tasks in edge computing and data center environments.
This instructor-led live training, available online or onsite, is designed for intermediate-level developers who want to build and deploy AI models utilizing the BANGPy framework and Neuware SDK on Cambricon MLU hardware.
Upon completion of this training, participants will be able to:
- Set up and configure development environments for BANGPy and Neuware.
- Develop and optimize models written in Python and C++ for Cambricon MLUs.
- Deploy models to edge and data center devices operating on the Neuware runtime.
- Integrate machine learning workflows with MLU-specific acceleration capabilities.
Course Format
- Interactive lectures and discussions.
- Practical application of BANGPy and Neuware for development and deployment.
- Guided exercises focusing on optimization, integration, and testing.
Customization Options
- For customized training tailored to your specific Cambricon device model or use case, please contact us to arrange.
Introduction to CANN for AI Framework Developers
7 Hours
CANN (Compute Architecture for Neural Networks) serves as Huawei’s specialized toolkit for AI computing, enabling the compilation, optimization, and deployment of AI models on Ascend AI processors.
This instructor-led live training session, available in both online and onsite formats, is designed for beginner-level AI developers. The course helps participants grasp how CANN integrates into the model lifecycle—from training through to deployment—and demonstrates its interoperability with popular frameworks such as MindSpore, TensorFlow, and PyTorch.
Upon completing this training, participants will be able to:
- Comprehend the purpose and underlying architecture of the CANN toolkit.
- Configure a development environment utilizing CANN alongside MindSpore.
- Convert and deploy a basic AI model onto Ascend hardware.
- Acquire foundational knowledge to support future CANN optimization or integration initiatives.
Course Format
- Engaging lectures combined with interactive discussions.
- Practical hands-on labs focused on simple model deployment.
- Step-by-step guidance through the CANN toolchain and its integration points.
Customization Options
- For inquiries regarding customized training for this course, please reach out to us to arrange details.
CANN for Edge AI Deployment
14 Hours
Huawei's Ascend CANN toolkit facilitates powerful AI inference on edge devices, including the Ascend 310. It provides crucial tools for compiling, optimizing, and deploying models in environments with limited compute and memory resources.
This instructor-led live training (available online or onsite) targets intermediate-level AI developers and integrators who want to deploy and optimize models on Ascend edge devices using the CANN toolchain.
Upon completion of this training, participants will be able to:
- Prepare and convert AI models for the Ascend 310 using CANN tools.
- Construct lightweight inference pipelines utilizing MindSpore Lite and AscendCL.
- Enhance model performance in resource-constrained compute and memory settings.
- Deploy and monitor AI applications in real-world edge scenarios.
Course Format
- Interactive lectures and demonstrations.
- Hands-on laboratory exercises focused on edge-specific models and scenarios.
- Live deployment examples on virtual or physical edge hardware.
Customization Options
- To request a customized training version of this course, please contact us to arrange details.
Understanding Huawei’s AI Compute Stack: From CANN to MindSpore
14 Hours
Huawei’s AI stack, spanning from the low-level CANN SDK to the high-level MindSpore framework, delivers a tightly integrated AI development and deployment environment optimized for Ascend hardware.
This instructor-led, live training (available online or on-site) is designed for beginner-to-intermediate level technical professionals seeking to understand how the CANN and MindSpore components collaborate to support AI lifecycle management and infrastructure decisions.
Upon completing this training, participants will be able to:
- Comprehend the layered architecture of Huawei’s AI computing stack.
- Recognize how CANN facilitates model optimization and hardware-level deployment.
- Assess the MindSpore framework and toolchain in comparison to industry alternatives.
- Position Huawei's AI stack within enterprise or cloud/on-premises environments.
Course Format
- Interactive lectures and discussions.
- Live system demonstrations and case-based walkthroughs.
- Optional guided labs on the model flow from MindSpore to CANN.
Course Customization Options
- To request customized training for this course, please contact us to arrange.
Optimizing Neural Network Performance with CANN SDK
14 Hours
CANN SDK (Compute Architecture for Neural Networks) serves as Huawei’s foundational AI compute platform, empowering developers to fine-tune and maximize the performance of neural networks deployed on Ascend AI processors.
This instructor-led live training session, available both online and onsite, is designed for advanced-level AI developers and system engineers eager to enhance inference performance utilizing CANN’s sophisticated toolset. Key areas include the Graph Engine, TIK, and custom operator development.
Upon completion of this training, participants will be capable of:
- Gaining a comprehensive understanding of CANN's runtime architecture and performance lifecycle.
- Employing profiling tools and the Graph Engine to conduct performance analysis and optimization.
- Developing and optimizing custom operators using TIK and TVM.
- Identifying memory bottlenecks and enhancing model throughput.
Course Format
- Interactive lectures accompanied by discussions.
- Practical labs featuring real-time profiling and operator tuning.
- Optimization exercises based on edge deployment examples.
Course Customization Options
- To request tailored training for this course, please contact us to arrange a session.
CANN SDK for Computer Vision and NLP Pipelines
14 Hours
The CANN SDK (Compute Architecture for Neural Networks) delivers robust deployment and optimization capabilities for real-time AI applications in computer vision and natural language processing, particularly when leveraging Huawei Ascend hardware.
This instructor-led training, available online or onsite, is designed for intermediate-level AI professionals seeking to build, deploy, and optimize vision and language models using the CANN SDK for production environments.
Upon completion of this training, participants will be able to:
- Deploy and optimize CV and NLP models using CANN and AscendCL.
- Utilize CANN tools to convert models and integrate them into live pipelines.
- Enhance inference performance for tasks such as detection, classification, and sentiment analysis.
- Construct real-time CV/NLP pipelines suitable for both edge and cloud deployment scenarios.
Course Format
- Interactive lectures combined with practical demonstrations.
- Hands-on labs focusing on model deployment and performance profiling.
- Live pipeline design exercises using real-world CV and NLP use cases.
Course Customization Options
- For customized training arrangements, please contact us to discuss your specific needs.
Building Custom AI Operators with CANN TIK and TVM
14 Hours
CANN TIK (Tensor Instruction Kernel) and Apache TVM facilitate advanced optimization and customization of AI model operators for Huawei Ascend hardware.
This instructor-led live training (available online or onsite) is designed for advanced system developers who aim to build, deploy, and tune custom operators for AI models utilizing CANN’s TIK programming model and TVM compiler integration.
Upon completion of this training, participants will be equipped to:
- Write and test custom AI operators using the TIK DSL for Ascend processors.
- Integrate custom operators into the CANN runtime and execution graph.
- Leverage TVM for operator scheduling, auto-tuning, and benchmarking.
- Debug and optimize instruction-level performance for specific custom computation patterns.
Course Format
- Interactive lectures and demonstrations.
- Practical coding exercises for operators using TIK and TVM pipelines.
- Testing and tuning on Ascend hardware or simulators.
Customization Options
- To request customized training for this course, please contact us to arrange.
Migrating CUDA Applications to Chinese GPU Architectures
21 Hours
Chinese GPU architectures, including Huawei Ascend, Biren, and Cambricon MLUs, provide CUDA alternatives specifically designed for the local AI and HPC markets.
This instructor-led live training (available online or onsite) targets advanced GPU programmers and infrastructure specialists seeking to migrate and optimize existing CUDA applications for deployment on Chinese hardware platforms.
Upon completion of this training, participants will be able to:
- Assess the compatibility of current CUDA workloads with Chinese chip alternatives.
- Port CUDA codebases to Huawei CANN, Biren SDK, and Cambricon BANGPy environments.
- Compare performance metrics and identify optimization opportunities across different platforms.
- Address practical challenges related to cross-architecture support and deployment.
Course Format
- Interactive lectures and discussions.
- Hands-on labs focused on code translation and performance comparison.
- Guided exercises emphasizing multi-GPU adaptation strategies.
Customization Options
- For customized training tailored to your specific platform or CUDA project, please contact us to arrange a session.
Performance Optimization on Ascend, Biren, and Cambricon
21 Hours
Ascend, Biren, and Cambricon stand out as premier AI hardware platforms in China, providing distinct acceleration and profiling capabilities tailored for large-scale AI operations.
This instructor-led live training, available both online and onsite, is designed for advanced AI infrastructure and performance engineers seeking to enhance model inference and training workflows across these prominent Chinese AI chip ecosystems.
Upon completion of this training, participants will be equipped to:
- Evaluate model performance on Ascend, Biren, and Cambricon platforms.
- Pinpoint system bottlenecks and identify inefficiencies in memory and compute resources.
- Implement optimizations at the graph, kernel, and operator levels.
- Refine deployment pipelines to enhance throughput and reduce latency.
Course Format
- Interactive lectures and discussions.
- Practical application of profiling and optimization tools on each respective platform.
- Guided exercises targeting real-world tuning scenarios.
Customization Options
- For customized training tailored to your specific performance environment or model type, please contact us to arrange.