Solving Engineering Problems Using Multi-GPU Computing
In today's landscape of high-performance computing, enterprise datacenters, and even home labs, GPUs have become indispensable for solving compute-intensive problems. However, achieving scalable performance across multiple GPUs is far from automatic. It requires careful analysis of the problem, thoughtful decomposition strategies, effective use of collective operations (e.g., AllReduce, Broadcast, AllGather, and Scatter), and targeted code optimization to avoid bottlenecks that can negate the benefits of additional hardware.
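To make the role of collectives concrete, here is a minimal sketch of what AllReduce computes, using a toy single-process model in which plain Python lists stand in for per-GPU buffers (the helper name `all_reduce_sum` is illustrative, not a real library API). In a gradient-averaging workload, each device holds a partial result; after AllReduce, every device holds the same combined result.

```python
def all_reduce_sum(buffers):
    """Simulate AllReduce with a sum reduction: each rank's buffer is
    replaced by the elementwise sum across all ranks' buffers."""
    total = [sum(vals) for vals in zip(*buffers)]
    # Every rank receives its own copy of the reduced result.
    return [list(total) for _ in buffers]

# Four simulated "GPUs", each holding a partial gradient.
local = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]]
reduced = all_reduce_sum(local)
# After AllReduce, all four ranks hold the identical summed buffer.
assert all(buf == [16.0, 20.0] for buf in reduced)
```

In a real multi-GPU setting this single call would be backed by a communication library (for example, an NCCL- or MPI-style collective), which is exactly where topology-aware implementations earn their keep.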
In this talk, I’ll explore how workflows in enterprise environments, HPC clusters, and home labs alike can unlock the true potential of multi-GPU computing. Through examples drawn from computational fluid dynamics (CFD) and AI/ML applications, I will demonstrate how multiple GPUs can dramatically reduce time to solution and enable scalability for problems involving large, multidimensional datasets. I will also examine the scientific community’s growing interest in GPU computing, focusing on how problem decomposition strategies can be tailored to multiple GPUs and how to identify and mitigate the communication and synchronization overheads that limit scaling.
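As a flavor of the decomposition strategies discussed, the sketch below models a 1-D domain decomposition with halo exchange, a pattern common in CFD stencil codes. This is a hedged, single-process toy (the names `decompose` and `halo_exchange` are illustrative, not a library API): lists stand in for per-GPU subdomains, and the point is that each device only communicates one boundary cell per neighbor per step, so compute per device shrinks as devices are added while the communicated halo stays small.

```python
def decompose(data, n_parts):
    """Split a global 1-D array into contiguous per-device chunks
    (chunk sizes differ by at most one element)."""
    k, r = divmod(len(data), n_parts)
    chunks, start = [], 0
    for i in range(n_parts):
        size = k + (1 if i < r else 0)
        chunks.append(data[start:start + size])
        start += size
    return chunks

def halo_exchange(chunks):
    """Give each chunk one ghost cell from each neighbor, the data a
    3-point stencil needs at its subdomain boundaries (0.0 at the ends)."""
    padded = []
    for i, c in enumerate(chunks):
        left = chunks[i - 1][-1] if i > 0 else 0.0
        right = chunks[i + 1][0] if i < len(chunks) - 1 else 0.0
        padded.append([left] + c + [right])
    return padded

data = [float(i) for i in range(8)]
chunks = decompose(data, 4)          # [[0,1],[2,3],[4,5],[6,7]] as floats
padded = halo_exchange(chunks)
# Device 1 owns [2, 3]; with halos it sees [1, 2, 3, 4].
assert padded[1] == [1.0, 2.0, 3.0, 4.0]
```

The synchronization cost discussed in the talk hides in the `halo_exchange` step: on real hardware it is a neighbor-to-neighbor transfer every iteration, and overlapping it with interior computation is one of the key optimizations for scaling.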
Whether you're managing an enterprise datacenter, leading an HPC team, or optimizing workflows for your personal projects, this session will provide actionable insights and tools to help you navigate the complexities of multi-GPU computing and leverage it to accelerate your applications.