BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//pretalx//programme.europython.eu//europython-2023//speaker//RMHTB
 J
BEGIN:VTIMEZONE
TZID:CET
BEGIN:STANDARD
DTSTART:20001029T030000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10
TZNAME:CET
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20000326T020000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=3
TZNAME:CEST
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:pretalx-europython-2023-ZCXMQK@programme.europython.eu
DTSTART;TZID=CET:20230720T103000
DTEND;TZID=CET:20230720T110000
DESCRIPTION:Neural networks have revolutionized AI\, enabling machines to l
 earn from data and make intelligent decisions. In this talk\, we'll explor
 e two popular architectures: Attention models and Diffusion models.\n\nFir
 st up\, we'll discuss Attention models and how they've contributed to the 
 success of large language models like ChatGPT. We'll explore how the Atten
 tion mechanism helps GPT focus on specific parts of a text sequence and ho
 w this mechanism has been applied to different tasks in natural language p
 rocessing.\n\nNext\, we'll dive into Diffusion models\, a class of generat
 ive models that have shown remarkable performance in image synthesis. We'l
 l explain how they work and their potential applications in the creative i
 ndustry.\n\nThis is a good talk for visual learners. I prepared schematic 
 diagrams\, which present the main features of the neural network architec
 tures. By necessity\, the diagrams are oversimplified\, but I believe they
  will allow you to gain some insight into Transformers and Latent Diffusi
 on models.
DTSTAMP:20260310T184416Z
LOCATION:South Hall 2B
SUMMARY:Understanding Neural Network Architectures with Attention and Diffu
 sion - Michał Karzyński
URL:https://programme.europython.eu/europython-2023/talk/ZCXMQK/
END:VEVENT
END:VCALENDAR
