AI Security Speakers

Written by: Henry Dalziel

Last updated on April 18, 2026

Leading AI Security Speakers of 2026

In this resource we present who we consider to be the leading AI Security speakers of 2026. We also list a wealth of information on cyber security conferences and regularly post cutting-edge AI webinars.

* If you’d like to add your name or company to this list – get in contact!

Core AI Safety & Alignment

This group includes researchers and thinkers focused on the fundamental question of how advanced AI systems should behave as they scale. Their talks tend to cover alignment, interpretability, model objectives, and long-term safety risks—topics that influence how “secure-by-design” AI is framed across industry and policy.

For AI Security audiences, these speakers help connect technical failure modes to real-world impact, shaping how teams think about safety requirements, evaluation, and assurance long before deployment decisions are made.

| Speaker | Affiliation | Notable Appearances |
| --- | --- | --- |
| Paul Christiano | Alignment Research Center / Former OpenAI | Dwarkesh Podcast Live Events; AI Alignment Public Talks |
| Sheila McIlraith | University of Toronto / Vector Institute | AAAI 2023; IJCAI 2016 |
| Jeff Clune | University of British Columbia / Former OpenAI | NeurIPS Workshops; AI Safety & Interpretability Workshops |
| David Duvenaud | University of Toronto / Vector Institute | NeurIPS Workshops; ICLR Workshops |
| Gillian Hadfield | University of Toronto / Vector Institute | Stanford AI100 Meetings; AI Governance Conferences |
| Roger Grosse | University of Toronto | NeurIPS Workshops; ICLR Workshops |
| Nicolas Papernot | Google Brain / Vector Institute | NeurIPS 2022 Workshops; DPFM@ICLR 2024 |

AI Security, Robustness, and Privacy

These speakers sit closer to the engineering edge of AI risk: robustness under attack, privacy leakage, adversarial manipulation, and how real systems fail in production. Their sessions often translate abstract safety concerns into concrete controls—testing, hardening, monitoring, and governance—across models and pipelines.

If you’re building or buying AI security tools, this is the category that most directly informs practical defenses, from threat modeling and red teaming to privacy-preserving design and resilient deployment patterns.

| Speaker | Affiliation | Notable Appearances |
| --- | --- | --- |
| Luka Ivezic | Information Security Forum (ISF) | ISF Secure & Trusted AI Events |
| Marin Ivezic | Applied Quantum AI & Critical Infrastructure | Security Conferences |
| Sheila McIlraith | University of Toronto / Vector Institute | ICAPS 2023; AAAI 2023 |
| Jeff Clune | University of British Columbia | NeurIPS; AI Safety Workshops |
| David Duvenaud | University of Toronto / Vector Institute | UK–Canada Frontiers of Science: AI |
| Nicolas Papernot | Google Brain / Vector Institute | IEEE SaTML; UK–Canada Frontiers of Science: AI |

AI Governance and Systemic Safety

AI governance speakers focus on how organisations and governments can manage AI risk at scale—through policy, standards, accountability, and oversight. Expect themes like compliance, auditability, model assurance, procurement controls, and systemic safety frameworks that influence how AI is approved and operated.

For security leaders, these talks are useful because they bridge the gap between technical risk and organisational decision-making, helping teams define responsibilities, control objectives, and measurable requirements for safe AI adoption.

| Speaker | Affiliation | Notable Appearances |
| --- | --- | --- |
| Gillian Hadfield | University of Toronto / Vector Institute | AI100 Governance Events; AI Policy Workshops |
| David Duvenaud | University of Toronto | AI Frontiers of Science; AGI Governance Meetings |
| Sheila McIlraith | University of Toronto | KR 2014; AAAI 2023 |
| Jeff Clune | University of British Columbia | Frontier AI Policy Panels; Safety Workshops |
| Nicolas Papernot | Google Brain / Vector Institute | IEEE Secure & Trustworthy ML (SaTML) |

AI-Powered Cybersecurity Leaders

This table highlights executives and operators driving the commercial side of AI-driven defense—how AI is being used to improve detection, response, exposure management, and security operations. Their talks are typically grounded in outcomes: faster triage, better signal-to-noise, improved resilience, and practical deployment lessons in large environments.

For event audiences, this category offers a view into where budgets are moving, what buyers are prioritising, and how vendors are positioning AI capabilities across cloud, endpoint, and SOC workflows.

| Speaker | Affiliation | Notable Appearances |
| --- | --- | --- |
| Beenu Arora | Cyble | NASSCOM US CEO Forum 2025 |
| John D. Loveland | StrikeReady | AI-Driven SOC & Cyber Operations Conferences |
| Yevgeny Dibrov | Armis | OT & AI-Powered Cyber Exposure Conferences |
| Matthew Prince | Cloudflare | Cloudflare Connect; Global Security Conferences |
| Jay Chaudhry | Zscaler | Zenith Live 2021; Zenith Live 2025 |
| Allie Mellen | Forrester Research | RSA Conference; Black Hat; HOPE |

Cybersecurity & AI Safety Ecosystem Shapers

These ecosystem shapers sit at the intersection of security practice, community education, and forward-looking risk narratives. They often speak on how architectures are evolving, how automation changes operations, and how organisations should adapt their programs for modern threats, including AI-enabled abuse.

Their value is breadth: they connect trends across vendors, practitioner communities, and emerging research, making them ideal for conferences that want practical guidance with strategic perspective. Expect insights on culture, governance, and scalable security design.

| Speaker | Affiliation | Notable Appearances |
| --- | --- | --- |
| Andy Ellis | YL Ventures / Former Akamai | RSA Conference; Black Hat |
| Allison Miller | Independent Security Researcher | Fraud & Abuse Prevention Conferences |
| Bob Rudis | Rapid7 | Data-Driven Security & SOC Analytics Events |
| Chris Wysopal | Veracode | RSA Conference; OWASP Events |
| Jim Tiller | Cynomi | CISO & Cybersecurity Strategy Conferences |
| Toryn Klassen | Vector Institute | AI Safety Reading Group Events |
| Michael Zhang | Vector Institute | AI Trust & Safety Workshops |

Wrapping Up

The landscape of AI safety and security spans a remarkable breadth of expertise—from foundational alignment research to hands-on defensive engineering, governance frameworks, and commercial innovation. What connects these speakers is a shared recognition that securing AI systems requires collaboration across disciplines.

Alignment researchers help us understand why models might fail; robustness experts show us how to harden them; governance leaders define what controls organisations need; and industry practitioners demonstrate where these concepts translate into operational reality.

For security professionals planning events or building programs, this diversity isn’t a complication—it’s an asset. The most effective AI security strategies draw from all these perspectives, bridging theoretical risk with practical defense and organisational accountability.