Agentic AI – Ally or Adversary? Navigating the New Player in Cybersecurity

Agentic AI – Ally or Adversary? Navigating the New Player in Cybersecurity

Agentic artificial intelligence – AI systems capable of setting goals, devising strategies and executing actions without constant human oversight – is no longer science fiction. As generative AI exploded into the mainstream and cybersecurity platforms began to sprinkle “AI‑powered” on every dashboard, a more advanced class of agents quietly emerged. These agentic AI systems are designed to think and act autonomously. In cybersecurity they promise to supercharge threat detection, triage endless alerts and even respond to attacks in real time, but they also raise uncomfortable questions about safety, trust and control.

Agentic AI

We researched the latest news and expert commentary to understand where agentic AI sits in the cyber arms race. Our goal is to help security leaders cut through the hype and decide whether these self‑directed systems are a friend or foe. Below you’ll find data from early pilots, viewpoints from practitioners, evidence of proof‑of‑concept attacks and guidance on how to use agentic AI responsibly.

What Makes Agentic AI Different?

Agentic AI differs from both traditional AI and generative AI because it autonomously pursues goals rather than just producing outputs or running single tasks. According to CrowdStrike, agentic AI learns from past interactions and decides the best course of action without human intervention. These agents use reinforcement learning, long‑term memory and multi‑agent orchestration so they can reason across complex workflows and even trigger automated responses. This autonomy leads to both opportunities and risks: on one hand, the agent can continuously monitor systems, correlate signals and act faster than human analysts; on the other, that same autonomy can be abused or misdirected.

Adoption Is Rising but Still Nascent

There is a gap between hype and adoption. A survey of security professionals in March 2025 found that 59 % were working on agentic AI projects but none had fully deployed them, while 24 % said they had no plans and 17 % were unsure. Experts interviewed by CyberSecurity Tribe expressed both optimism and caution. Herman Brown, CIO for the City and County of San Francisco, noted that cost and the need for “canned” solutions were obstacles. Jason Elrod, CISO at PlainsCapital Bank, believes agentic AI will become essential for threat detection and predictive analytics. Dr. Vivian Lyon emphasised that while these systems can deliver efficiency gains, they also create new vulnerabilities requiring resilient frameworks and strong safeguards.

The limited adoption does not mean a lack of value. Case studies show promising results when these systems are implemented thoughtfully. A digital insurance company that integrated AI agents to triage alerts reduced false positives and improved investigation reports. The University of Kansas Health System saw a 98 % improvement in visibility and more than doubled its detection coverage within six months; most alerts were resolved automatically so analysts could focus on critical tasks. Similarly, APi Group used an agentic AI platform to cut response times by 52 % and increase visibility by 47%. These successes underscore the potential of agentic AI to augment human teams and tame the flood of security telemetry.

Friend: Efficiency, Visibility and Proactive Defence

Agentic AI can act as a potent ally when deployed responsibly. Early adopters report significant gains in productivity and risk reduction. Agents can automatically sift through millions of alerts, correlate disparate signals and enrich them with context from threat intelligence. In the University of Kansas case, AI not only triaged alerts but also ran queries across logs and EDR systems, closing simple incidents and surfacing only the most complex ones for human analysts. This level of automation translates into lower mean‑time‑to‑detect and mean‑time‑to‑respond, which is critical in defending against fast‑moving threats.

Autonomous agents also excel at context‑aware decision making. Cyber Defense Magazine notes that agentic AI combines situational awareness with dynamic decision‑making to detect attacks, enact containment measures and adapt based on feedback. This means the system can isolate infected hosts, push firewall rules or update IAM policies in response to real‑time events without waiting for manual approval. Such capabilities can be invaluable in containing ransomware or stopping the lateral movement of an intruder.

Beyond immediate incident response, agentic AI offers continuous exposure management. Offensive security platforms like Hadrian use agents trained on thousands of hacking challenges to mimic human attackers, continuously probing and prioritising vulnerabilities. This proactive approach reduces alert fatigue by verifying that a vulnerability is exploitable before sending it to remediation and producing reproduction steps. When integrated into a continuous threat exposure management (CTEM) program, agentic AI can help organisations stay ahead of adversaries and reduce the window of exposure.

Foe: Proof‑of‑Concept Attacks and Emerging Risks

Where there is innovation, attackers follow. Security researchers have already demonstrated proof‑of‑concept (PoC) attacks that weaponise agentic AI. Trend Micro’s Pandora project shows how adversaries can embed hidden instructions into web pages and documents, causing an agent to leak confidential information or execute unsafe database queries despite guardrails. Attackers exploited weaknesses in vector databases and manipulated memory stores to gain unauthorised access or exfiltrate sensitive data. These experiments reveal that agentic AI is vulnerable to prompt injections, memory poisoning and tool misuse – threats that traditional security controls may not anticipate.

Industry predictions point to an AI‑vs‑AI arms race. WatchGuard’s field CTO outlines a three‑stage evolution: today’s systems keep humans in the loop; within one to two years attackers may run autonomous campaigns at machine speed; within five years, adversarial agents will adapt and pivot in response to defences. The Prey Project warns that criminals are already weaponising agentic AI with 80 % success in proof‑of‑concept data exfiltration attacks using invisible prompt injection. As agentic AI gains capabilities, defenders must anticipate that attackers will use similar technology to conduct reconnaissance, craft exploit chains and avoid detection.

Best Practices for Responsible Adoption

We cannot afford to dismiss agentic AI outright. Instead, security leaders should adopt it with caution and robust governance. Based on the combined guidance from Balbix, SecureFlag and the Prey Project, we recommend the following measures:

  1. Threat modelling and risk gating: Incorporate threat modelling early and use frameworks like NIST’s AI Risk Management Framework to assess an agent’s potential impact. Start with low‑risk pilots and gradually expand to sensitive systems.
  2. Sandboxing and privilege separation: Run agents in isolated environments, restrict their permissions and enforce least‑privilege access. Treat agents as privileged identities and apply the same controls you would for high‑level service accounts.
  3. Prompt hygiene and input validation: Protect against prompt injection by sanitising all inputs and filtering responses. Agents should validate external data before acting and include human review for high‑impact decisions.
  4. Auditability and logging: Maintain detailed logs of agent actions and enable version control for prompts and policies. Human operators should be able to trace every decision an agent made and roll back changes if necessary.
  5. Human‑in‑the‑loop oversight: Avoid fully autonomous operation. Best practices emphasise keeping humans involved, especially in critical or irreversible actions. Over‑automation without explainability can lead to unexpected behaviour and compliance violations.
  6. Red‑teaming and continuous testing: Regularly conduct adversarial testing to identify vulnerabilities in an agent’s reasoning. Trend Micro’s research shows that prompt injection and memory poisoning can bypass guardrails; red teaming uncovers these issues before attackers do.

Our Perspective – A Balanced Ally

At Karacena, we see agentic AI as both a powerful ally and a technology that demands respect. Our experience deploying AI in high‑assurance environments has taught us that security is about balance. We embrace innovative tools that enhance visibility and speed but never at the expense of control or ethics. By combining strong governance, rigorous testing and continuous human oversight, we believe organisations can harness agentic AI to gain an advantage over adversaries.

Our commitment to thought leadership extends beyond research. We actively participate in industry working groups, contribute to open‑source projects and help shape standards for secure AI adoption. We are ready to guide your organisation through the complexities of agentic AI – from pilot design and threat modelling to implementation and ongoing monitoring.

Future Outlook and Call to Action

Agentic AI is evolving fast. In the near term, expect to see broader adoption of autonomous triage and response, as well as criminals employing their own agents for reconnaissance and exploitation. Over the next five years the cyber battlefield will become a contest of intelligent agents; the organisations that thrive will be those that anticipate adversarial AI and invest in resilient, adaptive defenses. The decisions you make today about agentic AI will set the foundation for your security posture in this new era.

Now is the time to act. Contact our team to discuss how we can help you evaluate, deploy and govern agentic AI safely. Subscribe to our newsletter for ongoing research updates. Together we can harness this new technology as an ally rather than fear it as a dangerous adversary.

References​​

Paweł

Cybersecurity professional with many years of experience in Incident Response, Threat Hunting, and Threat Intelligence. Started his career as a SOC Analyst in the banking sector, building a strong foundation in security monitoring and incident detection. Later, he worked for large organizations as an Incident Responder, handling complex security incidents and leading advanced threat-hunting operations across hybrid environments. He specializes in analyzing adversary tactics, techniques, and procedures (TTPs), correlating diverse telemetry sources, and leveraging Threat Intelligence to enhance organizational resilience. Outside of work, he experiments with OSINT, secret discovery in open sources, and the use of artificial intelligence for threat analysis. Holds industry certifications including GPEN, CompTIA CySA+, and specialized credentials in honeypot development and analysis.

Our knowledge, your security – a shield in the digital reality.

karacena.eu
Privacy Overview

This website uses cookies so that we can provide you with the best user experience possible. Cookie information is stored in your browser and performs functions such as recognising you when you return to our website and helping our team to understand which sections of the website you find most interesting and useful.