Jakub Urban
Jakub currently leads the data science platform team that enables the Flyr Hospitality science organisation to develop, operate and maintain data science products in a user-friendly and sustainable way. He started tinkering with Python for computer simulations and data analysis during his computational physics PhD studies, when NumPy and Matplotlib were brand-new projects and Pandas did not yet exist. Since then, Python and its ecosystem have become Jakub’s de facto work and hobby toolset for anything related to programming and data modelling. After leading the theory group at the tokamak department of the Institute of Plasma Physics in Prague, Jakub has held various roles across the data science and engineering landscape. He also co-founded the PyData Prague meetup, occasionally speaks and tutors at meetups and conferences, and teaches scientific computing with Python at the Czech Technical University.
Sessions
Learn how to write blazingly fast, modern numerical code in Python by leveraging just-in-time (JIT) compilation for CPUs and GPUs, and how to scale computations across multiple machines. Tailored for data scientists, researchers, and Python enthusiasts, this session focuses on practical applications of JAX, Numba, and Ray to optimise and parallelise your numerical code. These techniques apply across many fields, including machine learning, numerical simulation, and engineering. You can leave low-level languages such as C/C++ or Fortran behind.
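To give a flavour of the approach, here is a minimal Numba sketch of JIT-compiling a plain Python loop; the pairwise-distance function and the toy data are illustrative assumptions, not part of the actual tutorial material.

```python
import numpy as np
from numba import njit


@njit  # compile to native machine code on first call; later calls run at near-C speed
def pairwise_l2(points):
    """Sum of pairwise Euclidean distances, written as plain Python loops."""
    n, dim = points.shape
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d = 0.0
            for k in range(dim):
                diff = points[i, k] - points[j, k]
                d += diff * diff
            total += np.sqrt(d)
    return total


points = np.random.rand(500, 3)
print(pairwise_l2(points))  # first call includes compilation time; subsequent calls are fast
```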
What You’ll Learn
- Optimising and Accelerating Numerical Code with JAX and Numba (a minimal JAX sketch follows after this list):
  - Converting NumPy code to JAX/Numba.
  - JIT compilation for speeding up Python functions.
  - Automatic differentiation (JAX).
  - GPU/CPU acceleration and vectorisation.
  - Main limitations of JAX and Numba.
  - Hands-on exercises combining JAX and Numba to optimise loops, linear algebra, and scientific calculations.
- Scaling Workflows with Ray (see the Ray sketch after this list):
  - Introduction to Ray: parallel, out-of-core and distributed computing made simple.
  - Managing distributed tasks, scaling workloads across CPUs/GPUs, and handling large datasets.
  - Integrating JAX and Numba in Ray-powered workflows for end-to-end acceleration.
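To illustrate the JAX topics in the first block above, here is a minimal sketch combining jax.jit, jax.grad, and jax.vmap on a toy linear-model loss; the function and data are illustrative assumptions, not the actual tutorial exercises.

```python
import jax
import jax.numpy as jnp


def loss(w, x, y):
    """Mean squared error of a linear model; jnp mirrors the NumPy API."""
    pred = x @ w
    return jnp.mean((pred - y) ** 2)


grad_loss = jax.jit(jax.grad(loss))                     # autodiff + XLA JIT compilation (CPU or GPU)
batched_loss = jax.vmap(loss, in_axes=(0, None, None))  # vectorise over a batch of weight vectors

key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (128, 3))
y = x @ jnp.array([1.0, -2.0, 0.5])
w0 = jnp.zeros(3)

print(grad_loss(w0, x, y))                              # gradient with respect to w
print(batched_loss(jnp.stack([w0, w0 + 1.0]), x, y))    # losses for two weight vectors at once
```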
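And a minimal sketch of the Ray side: running an embarrassingly parallel workload as remote tasks on a local Ray runtime. The chunked square-root sum is a toy stand-in for the JAX/Numba workloads the tutorial integrates.

```python
import numpy as np
import ray

ray.init()  # start a local Ray runtime; on a cluster you would connect with ray.init(address="auto")


@ray.remote
def partial_sum(chunk):
    """Worker task; any JIT-compiled (Numba/JAX) function could be called here instead."""
    return float(np.sum(np.sqrt(chunk)))


data = np.random.rand(1_000_000)
chunks = np.array_split(data, 8)                   # split the work into 8 independent tasks
futures = [partial_sum.remote(c) for c in chunks]  # schedule tasks across available CPUs
print(sum(ray.get(futures)))                       # gather and combine the results
ray.shutdown()
```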
Who Should Attend?
Participants should have intermediate Python programming skills and basic familiarity with NumPy and linear algebra. No prior experience with JAX, Numba, or Ray is necessary.
Structure
- Hour 1: Foundations of JAX and Numba – JIT compilation, GPU acceleration, and their respective strengths.
- Hour 2: Hands-on optimisation – writing high-performance numerical code with JAX and Numba.
- Hour 3: Scaling computations with Ray – parallelising JAX/Numba workflows across machines.
Takeaways
- Understand when and how to use JAX and Numba for accelerating numerical Python code.
- Write scalable, high-performance workflows using Ray.
- Leave with ready-to-use examples and insights to apply in your own projects.