
What RSA 2025 Revealed About Agentic AI in Cybersecurity
By Yuda Saydun, Founder & CEO, CyVent
At RSA 2025, Agentic AI wasn’t the loudest topic—but it was certainly one of the most defining ones for the future.
Walking the halls of RSA 2025, I felt the tectonic plates of cybersecurity shifting subtly beneath the Moscone Center. The buzz wasn’t about whether Agentic AI would reshape defense strategies... It was about how.
What stood out weren’t the marketing slogans or booth displays. It was the hallway conversations. The off-the-record briefings. The clear-eyed questioning by CISOs. The realization that we’re past the “is Agentic AI coming?” phase. We’re in “how fast is it already changing everything?”
And how to best prepare for it.
From Hype to Reality in 12 Months

Last year, Agentic AI was still in the buzzword phase. This year, it shows up as working code—and working consequences.
I saw agentic AI systems red-teaming other AI agents. I saw dashboards surfacing autonomous decisions with confidence scores and built-in dispute logic. In one deployment, the organization reported a 40% drop in dwell time. In another, they claimed to have reclaimed 70% of analyst time for higher-value work.
But here’s a statement that stuck with me:
“We’re not using AI to replace our team. We’re using it to finally let them do their job.”
That’s where Agentic AI is heading. Not toward full autonomy yet. Toward smarter, explainable augmentation.
3 Uncomfortable Truths for Security Leaders

1. Legacy Tools Are Insufficient
Siloed, non-explainable systems might still pass a compliance audit, but they won’t survive an Agentic AI-augmented threat actor. The future belongs to AI-native architectures with explainability baked into every decision. If our stack isn’t integrated, transparent, and adaptable, we’re already behind.
2. Trust Is a Process, Not a Feature
Everyone talks about “trust in AI.” But real trust comes from auditability, human oversight, and clear escalation paths. If we don’t know how our AI makes decisions, neither will our boards, and that’s a liability.
3. Humans Aren’t Optional
The best SOCs aren’t about reducing headcount. They’re about optimizing it. Let the AI write the summary—but have an analyst confirm what it means. Let the model flag anomalies—but leave judgment to people who understand business context. At least for now.
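In code terms, that “AI flags, human judges” posture amounts to a simple routing gate. The sketch below is purely illustrative; the action names, threshold, and function are hypothetical, not any vendor’s API:

```python
# Hypothetical human-in-the-loop guardrail: the agent may auto-execute
# only low-blast-radius, high-confidence actions; everything else is
# escalated to an analyst queue. All names and thresholds are illustrative.
from dataclasses import dataclass

AUTO_APPROVE_ACTIONS = {"quarantine_file", "block_ip"}  # reversible, low-risk actions
CONFIDENCE_THRESHOLD = 0.90

@dataclass
class AgentFinding:
    action: str        # remediation the agent proposes
    confidence: float  # agent's self-reported confidence score
    rationale: str     # explanation surfaced for audit and dispute

def route(finding: AgentFinding) -> str:
    """Decide whether an agent's proposed action runs autonomously."""
    if finding.action in AUTO_APPROVE_ACTIONS and finding.confidence >= CONFIDENCE_THRESHOLD:
        return "auto_execute"         # logged, reversible, within guardrails
    return "escalate_to_analyst"      # human judgment applies business context

# A confident, low-risk finding runs on its own; a novel or risky one does not.
print(route(AgentFinding("block_ip", 0.97, "known C2 infrastructure")))      # auto_execute
print(route(AgentFinding("isolate_subnet", 0.99, "possible lateral movement")))  # escalate_to_analyst
```

The design choice is the point: autonomy is granted per action type, not per model, so expanding the allowlist becomes a governance decision rather than a code change.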
The Ethical Minefield on Prudent Execs’ Minds

Offstage, the real focus wasn’t on features—it was on accountability.
What happens when a compromised AI agent triggers a breach?
Who’s liable when an autonomous system misfires and takes down a hospital network?
How do you detect prompt injection or shadow AI inside your own environment?
Gartner predicts that by 2028, 33% of cybersecurity incidents will involve an AI agent being misled or compromised. That feels low.
In one closed-door session, someone asked:
“What happens when the attacker understands your AI better than you do?”
No one had a satisfying answer. Just uncomfortable silence.
A Refreshing, Clear-Eyed Focus on Deployment
Seasoned board members and CISOs are asking:
Do we understand how our AI makes decisions?
What guardrails do we have around autonomous decisions?
How do we test and validate AI outputs before they trigger actions?
Are we red-teaming our own AI tools?
Who is responsible when something goes wrong—and how quickly can we find out why?
These aren’t technical questions. They’re governance questions. And now is the time to have the right answers.
The Smart CISOs Are Pragmatic, Not Confused

The leaders I trust most weren’t chasing hype. They were:
Running structured GenAI pilots
Testing before deploying
Stress-testing agents in red-team exercises
Asking vendors hard questions about data lineage and model transparency
One phrase that really stuck with me:
“Make small bets—but build big guardrails.”
That’s the posture to lead with. Clarity over chaos. Curiosity over fear.
Final Takeaways: Shape or Be Shaped

Agentic AI isn’t science fiction. It’s happening right now—inside SOCs, inside dashboards, and increasingly, inside boardrooms.
For those of us in leadership roles, here’s what matters most:
Agentic AI is here—and accelerating. It won’t replace our teams, but it will redefine how they work.
CISOs who prioritize governance, not just adoption, will win. The best leaders don’t just greenlight tools—they build accountability into every layer.
Boards need to get smarter, faster. Knowing what to ask might soon matter more than what we already know.
In three years, no one will ask if we used AI.
They’ll ask whether we governed it well enough.
Now is the time to get that right.
—
Need a second opinion on your AI strategy?
At CyVent, we help executive teams evaluate AI tools, build governance frameworks, and navigate the shift to agentic security systems with clarity.
Contact us today for a confidential consultation.