Laura Summers
Laura is a very technical designer™️, working at Pydantic as Lead Design Engineer. Her side projects include Sweet Summer Child Score (summerchild.dev) and Ethics Litmus Tests (ethical-litmus.site). Laura is passionate about feminism, digital rights and designing for privacy. She speaks, writes and runs workshops at the intersection of design and technology.
Sessions
In this one-hour panel session, AI experts will explore the rapidly evolving impact of AI technologies on our society. The discussion will address pressing questions across key domains being transformed by AI: privacy and regulation, environmental implications, economic and labor shifts, and the effects on media and art.
Beyond hearing insightful perspectives from our panelists, you'll be able to submit your own questions throughout the session, and you'll have the chance to win prizes by answering quiz questions on the topics discussed. Whether you're an AI enthusiast, an industry professional, or simply curious about how these technologies will shape our world, join us to explore this complex area and have your burning questions answered.
AI development is still software development, just with some uniquely frustrating twists. You don't need to throw out everything you know about engineering, but you do need better patterns for building, debugging, and actually seeing what your AI is doing.
In this talk, we'll walk through Pydantic's opinionated approach to building reliable AI applications in Python, combining practical engineering patterns with what we call "human-seeded evals": a way to create meaningful evaluation systems that start small and scale up, no PhD in data annotation required.
We'll show you how to build AI apps with Pydantic AI that don't fall apart in production, create evaluation systems that start with just 5-10 examples and grow from there, and use Pydantic Logfire to actually understand what's happening under the hood. Live demos, real code, and patterns you can use tomorrow.
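To give a feel for the shape of this stack, here's a minimal sketch (not the talk's actual demo code). It assumes recent pydantic-ai and logfire releases; exact names like output_type, instructions, and logfire.instrument_pydantic_ai() have shifted across versions (older releases used result_type and system_prompt). The Triage model and the seed cases are invented for illustration.

```python
"""Sketch: a typed Pydantic AI agent, traced with Logfire, checked
against a handful of human-seeded eval cases."""
from pydantic import BaseModel
from pydantic_ai import Agent
import logfire

logfire.configure()               # send traces to Logfire (needs a project token)
logfire.instrument_pydantic_ai()  # auto-instrument agent runs

class Triage(BaseModel):
    """Structured output the agent must produce."""
    category: str   # e.g. "billing", "bug", "feature-request"
    urgent: bool

agent = Agent(
    "openai:gpt-4o",             # any supported model string; needs OPENAI_API_KEY
    output_type=Triage,          # responses are validated into Triage
    instructions="Triage the user's support message.",
)

# Human-seeded evals: start with a handful of hand-labelled examples and grow.
SEED_CASES = [
    ("My invoice is wrong, I'm being double charged!",
     Triage(category="billing", urgent=True)),
    ("It would be nice if the dashboard had dark mode.",
     Triage(category="feature-request", urgent=False)),
]

def run_evals() -> float:
    """Fraction of seed cases where the agent picks the expected category."""
    hits = 0
    for message, expected in SEED_CASES:
        result = agent.run_sync(message)
        hits += result.output.category == expected.category
    return hits / len(SEED_CASES)

if __name__ == "__main__":
    print(f"seed-eval accuracy: {run_evals():.0%}")
```

The point of starting with just a few hand-labelled cases is that the eval harness exists from day one and grows alongside the app, rather than arriving as a big annotation project later.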
Community Notes: if you haven't heard of this algorithm yet, you'll come across it in 2025. Originated by the team at then-Twitter and developed as open source, it's being adopted by Meta across all of its platforms this year (while Meta spins down its other content moderation, but that's a separate topic). Let's focus on what's interesting about Community Notes: a fascinating blend of algorithm and design choices that puts platform users in the driver's seat. In their own words, "Community Notes aims to create a better informed world, by empowering people on X to add helpful notes to posts that might be misleading." Importantly, also: "X doesn't write, rate or moderate notes (unless they break X's Rules)." This hands-off approach is particularly noteworthy in an era when model guardrails and constraints are increasingly part of the public consciousness, and not in a good way. People are bumping their heads as they sail into the wall at the edge of their AI-managed world, like Truman discovering the limits of his manufactured reality.
For anyone building an agent or thinking about dialogue-as-interface, this is the talk for you. Let's drift through this data science success story, examining how careful algorithm design combined with thoughtful UX can create something remarkable: a system that finds truth not through central authority, but through bridging the multiplicities of human perspective. We'll dive deep into both the mathematical engine ranking these bridges of agreement and the subtle design choices that make users want to cross them.
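To make the "bridging" idea concrete ahead of the talk, here's a toy sketch of the model Community Notes describes in its public docs: each rating is predicted as a global mean plus a user intercept, a note intercept, and a dot product of latent viewpoint factors, and the note intercept is what serves as its helpfulness score. Everything below (the data, dimensions, learning rate, and regularization constants) is illustrative; the production algorithm adds many refinements on top of this core.

```python
"""Toy matrix-factorization sketch of Community Notes' bridging model:
    rating(u, n) ~ mu + b_user[u] + b_note[n] + f_user[u] . f_note[n]
A note is scored by its intercept b_note, so agreement explained by a
shared viewpoint (the factor term) earns no credit."""
import numpy as np

rng = np.random.default_rng(0)

# (user_id, note_id, rating): 1 = helpful, 0 = not helpful
ratings = [
    (0, 0, 1), (1, 0, 1), (2, 0, 1), (3, 0, 1),  # note 0: helpful across viewpoints
    (0, 1, 1), (1, 1, 1), (2, 1, 0), (3, 1, 0),  # note 1: helpful to one "side" only
]
n_users, n_notes, dim = 4, 2, 1

mu = 0.0
b_user = np.zeros(n_users)
b_note = np.zeros(n_notes)
f_user = rng.normal(0.0, 0.1, (n_users, dim))
f_note = rng.normal(0.0, 0.1, (n_notes, dim))

# Intercepts are regularized more heavily than factors, so the model
# prefers to explain one-sided agreement via viewpoint factors.
# (Constants here are illustrative, not the production values.)
lr, reg_intercept, reg_factor = 0.05, 0.15, 0.03

for _ in range(3000):  # plain SGD on the squared error, for simplicity
    for u, n, r in ratings:
        pred = mu + b_user[u] + b_note[n] + f_user[u] @ f_note[n]
        err = pred - r
        mu -= lr * err
        b_user[u] -= lr * (err + reg_intercept * b_user[u])
        b_note[n] -= lr * (err + reg_intercept * b_note[n])
        g_u = err * f_note[n] + reg_factor * f_user[u]
        g_n = err * f_user[u] + reg_factor * f_note[n]
        f_user[u] -= lr * g_u
        f_note[n] -= lr * g_n

# Note 0 (cross-viewpoint agreement) should earn a higher intercept than
# note 1, whose split ratings get absorbed by the factor term instead.
for n in range(n_notes):
    print(f"note {n}: helpfulness intercept = {b_note[n]:+.2f}")
```

The design choice worth noticing: because intercepts are penalized harder than factors, the cheapest way for the model to explain a note that only one side likes is through the viewpoint factors, leaving the intercept, the number that actually gets a note shown, reserved for agreement that bridges the divide.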