Mission briefing: F5 AI AppSec hero challenge
About the Security Event
The integration of large language models into modern applications introduces new security risks that traditional defenses are not designed to address. This interactive session explores how AI driven systems can be exploited through techniques such as prompt injection, data extraction, and model abuse. Participants will gain hands on experience in a controlled lab environment, simulating real world attacks against AI applications.
During the live lab, attendees will test and defend against common AI threats while observing how advanced security controls detect and mitigate malicious inputs in real time. The session demonstrates how vulnerabilities can be exploited and how modern application security approaches can protect AI systems without disrupting user experience. This event is designed for security engineers, developers, and DevOps practitioners seeking practical exposure to AI security challenges.