EuroPython 2025

GIL-free Python and the GPU: hands-on experience

  • 2025-07-14, Club A

All times in Europe/Prague

Because of the Global Interpreter Lock (GIL), Python has never been truly parallel. Even on multi-core systems, CPU-bound Python threads are forced to take turns rather than running simultaneously, limiting performance in compute-heavy applications. The recent arrival of free-threaded (GIL-free) builds of CPython is unlocking new levels of concurrency and efficiency, redefining what's possible with Python in high-performance computing.
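
As a taste of what this means in practice, here is a minimal sketch of a CPU-bound workload run sequentially and then across threads. The fib() function is just an illustrative placeholder; on a standard (GIL) build the threaded version gains little, while on a free-threaded build the threads can actually use separate cores.

```python
# Minimal sketch: CPU-bound work, sequential vs. threaded.
# fib() is a deliberately slow, pure-Python placeholder workload.
import time
from concurrent.futures import ThreadPoolExecutor

def fib(n: int) -> int:
    return n if n < 2 else fib(n - 1) + fib(n - 2)

def timed(label, fn):
    start = time.perf_counter()
    fn()
    print(f"{label}: {time.perf_counter() - start:.2f}s")

if __name__ == "__main__":
    work = [30] * 8

    # Sequential baseline.
    timed("sequential", lambda: [fib(n) for n in work])

    # Threaded: only faster when the GIL is not serializing the threads,
    # i.e. on a free-threaded CPython build.
    with ThreadPoolExecutor(max_workers=8) as pool:
        timed("threads   ", lambda: list(pool.map(fib, work)))
```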

In this hands-on tutorial, we will demystify parallel programming in Python by showcasing how to tackle common concurrency challenges. Starting from the ground up, we will introduce the two common parallel-programming approaches in Python—multithreading and multiprocessing—ensuring that attendees of all experience levels can successfully participate in the tutorial.
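
To make the contrast concrete, the sketch below runs the same CPU-bound task with both approaches; count_primes() is a hypothetical stand-in for real work. Threads share one interpreter and memory space, while processes each get their own interpreter (and, on standard builds, their own GIL) at the cost of startup and argument pickling.

```python
# Sketch: the same CPU-bound task via threads and via processes.
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def count_primes(limit: int) -> int:
    # Naive prime count -- pure-Python CPU work used only for illustration.
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    chunks = [50_000] * 4

    # Multithreading: lightweight and shared-memory; scales on free-threaded Python.
    with ThreadPoolExecutor(max_workers=4) as pool:
        print("threads:  ", sum(pool.map(count_primes, chunks)))

    # Multiprocessing: sidesteps the GIL even on standard builds,
    # but pays for process startup and pickling of arguments/results.
    with ProcessPoolExecutor(max_workers=4) as pool:
        print("processes:", sum(pool.map(count_primes, chunks)))
```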

From there, we will dive into real-life use cases and demonstrate how to leverage free-threaded Python to tap into the power of GPUs. By pairing Python’s parallel libraries with CUDA, you will learn how to accelerate both typical computing tasks and more advanced work, such as deep learning. We will also explore the best tools available for debugging, monitoring, and optimizing multi-threaded and GPU-accelerated applications, all while highlighting proven best practices.
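
One possible shape of such thread-plus-GPU code is sketched below using CuPy, where each thread drives its own CUDA stream; this is an assumption for illustration only, and the tutorial may use different libraries and patterns.

```python
# Hedged sketch: Python threads each launching independent GPU work
# through CuPy on their own CUDA stream (assumes CuPy and a CUDA GPU).
from concurrent.futures import ThreadPoolExecutor

import cupy as cp

def gpu_task(seed: int) -> float:
    # CuPy's "current stream" is thread-local, so each thread can use
    # its own stream and kernel launches may overlap on the device.
    with cp.cuda.Stream():
        rng = cp.random.default_rng(seed)
        x = rng.standard_normal((2048, 2048), dtype=cp.float32)
        y = x @ x.T               # matrix multiply runs on the GPU
        return float(y.trace())   # copies a single scalar back to the host

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(gpu_task, range(4)))
    print(results)
```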

Throughout the tutorial, you will have the chance to work through exercises—from simple parallel calls to complex GPU integrations—so make sure to bring your laptop. Ideally, your laptop should have a GPU; if not, we will show you how to use one available online.

By the end, you will walk away with not only a solid understanding of GIL-free Python but also the confidence to implement, debug, and optimize parallel solutions in your own projects.


Expected audience expertise:

Intermediate

During his work at NVIDIA, Michał has gained extensive experience in deep learning software development. He has tackled training and inference challenges ranging from small-scale to large-scale applications, as well as user-facing tools and highly optimized benchmarks such as MLPerf. Michał also has a deep understanding of data-loading problems, having worked as a developer on NVIDIA DALI, the Data Loading Library.

Rostan Tabet is a Software Engineer at NVIDIA. He works on applying free-threaded Python within NVIDIA's suite of deep learning libraries.