Frustrations Shared By The Cyber Security Community
The FIVE Major Concerns Are:
1. AI Introduces New and Poorly Understood Vulnerabilities
2. Lack of Mature Security Tools and Standards for AI Systems
3. The AI Gold Rush Is Diverting Resources From Foundational Security
4. Explosion of Complexity in Identity and Access Management
5. Exploitation Windows Are Shrinking to Alarming Speed
Over the past year, conversations about AI security have shifted from curiosity to concern.
To understand where frustration is building, we scraped and reviewed discussions from the Reddit cybersecurity community—threads where practitioners speak candidly, without marketing filters or vendor framing.
What emerges is a consistent pattern: most frustrations are not about using AI to improve security operations, but about securing AI itself.
This distinction matters.
“AI for security” is largely seen as a productivity boost.
“Security for AI” is where confidence drops, risk increases, and frustration grows.
Below are the five themes that surfaced most clearly, explained through the lens of practitioners who are already dealing with the consequences.
1. New AI Vulnerabilities Are Poorly Understood
A dominant concern is that AI systems introduce entirely new classes of vulnerabilities that security teams don’t yet understand well enough to defend.
Prompt injection, indirect prompt manipulation, insecure output handling, and model behavior abuse are frequently cited examples.
What’s striking is not just the novelty of these issues, but how shallow many current mitigations appear. Community members often describe defenses as little more than “regex filters,” offering surface-level protection that feels brittle and easily bypassed.
There’s a strong sense that AI systems are being deployed faster than threat models can keep up, and that security teams are expected to protect systems whose failure modes are still being discovered in real time.
This creates discomfort for experienced professionals who are used to well-defined attack surfaces.
With AI, the boundaries feel fluid, language-driven, and socially engineerable in ways that traditional application security was never designed to handle.
2. AI Security Tools and Standards Are Immature
Closely related is frustration with the immaturity of the AI security ecosystem itself. While frameworks like the OWASP LLM Top 10 and NIST’s AI Risk Management Framework are acknowledged as useful starting points, practitioners don’t see tooling that reliably operationalizes these ideas.
Many tools are perceived as reactive, experimental, or simply rebranded analytics products marketed as “AI security.” Detecting prompt-based attacks, enforcing agent permissions, monitoring inference risk, or validating model behavior often feels manual and ad hoc.
Security leaders are being asked to take responsibility for AI risk without the kind of hardened, battle-tested platforms they rely on elsewhere.
This gap between frameworks and execution creates fatigue.
Teams understand what needs to be done in theory, but lack confidence in how to do it at scale.
3. AI Hype Is Pulling Focus From Core Security
Another recurring frustration is organizational rather than technical. Many practitioners describe AI as a “budget black hole” that pulls attention, funding, and executive focus away from foundational security work.
In the rush to deploy AI-driven features, organizations often underinvest in governance, inventory, access control, and monitoring for the AI systems themselves.
Security teams are left reacting after the fact, expected to secure systems that were never designed with protection in mind.
This creates resentment, particularly among experienced professionals who recognize the pattern. Innovation is rewarded.
Risk management is deferred. When something goes wrong, security is blamed for not moving fast enough—despite having been sidelined during design and deployment.
4. Identity and Access Complexity Is Exploding
AI agents, copilots, and autonomous workflows are triggering what many describe as an “insane explosion” in identity complexity.
Non-human identities are multiplying rapidly, and with them, permissions that are often broad, persistent, and poorly audited.
From a security perspective, this is a classic IAM problem—but at a scale and speed most organizations are unprepared for.
Agents frequently inherit human-level access without equivalent controls, reviews, or lifecycle management. Over-permissioning becomes the default, not the exception.
Practitioners are frustrated because the risks are obvious, yet difficult to control.
The same IAM disciplines that apply to humans—least privilege, separation of duties, continuous review—are rarely enforced rigorously for agents. This creates silent exposure that grows over time.
5. Attacks Are Happening Faster Than Response
Perhaps the most unsettling theme is how fast AI-related vulnerabilities are exploited once disclosed. Community members increasingly describe timelines measured in hours or days, not weeks.
Seeing exploits appear in under 48 hours is now considered “reasonable.”
This pace outstrips traditional patch cycles, approval processes, and change management workflows. It forces security teams into a reactive posture they were never designed for. Being “fast enough” now means constant external monitoring, rapid mitigation, and proactive hardening—capabilities many organizations simply don’t have.
The result is burnout and anxiety.
Teams know the clock is ticking, but lack the authority, tooling, or staffing to respond at the required speed.
A Question Back to the Community
Taken together, these frustrations point to a deeper issue. Traditional security principles still apply, but they are struggling to adapt to the speed, architecture, and attack surfaces introduced by AI.
The gap between AI innovation and AI security is widening, and practitioners feel it daily.
So the real question is this: do these frustrations resonate with your experience?
Are these the right five—or are there others the community should be talking about more openly?
As AI becomes embedded in core business systems, these conversations are no longer theoretical. They’re shaping how secure—or exposed—the next generation of technology will be.
Further Reading
If you want to go deeper into how the industry is defining and responding to these challenges, the AI Security hub provides a broad view of how this category is forming.
For a grounding in fundamentals, start with what is AI Security, which clearly separates AI as a security tool from the security of AI systems themselves.
To understand who is shaping the market, the AI vendor landscape offers a practical overview of how vendors are positioning their solutions.
On the technical side, one of the most cited pain points in the community is AI prompt injection, which highlights why traditional application security controls often fall short when applied to LLMs.
As teams look to standardize defenses, emerging LLM security platforms and early discussions around AI Security trends of 2025 help illustrate where tooling and practices are heading.
Finally, several of the frustrations discussed above intersect with organizational behavior rather than pure technology.
Resources on Shadow AI explore how unsanctioned usage amplifies risk, while the List of AI thought leaders speakers highlights the researchers, practitioners, and operators shaping how AI security is evolving across industry, academia, and policy.