top of page
Search

AI Agentic Security

  • Writer: Katarzyna  CeliÅ„ska
    Katarzyna Celińska
  • 6 days ago
  • 1 min read

Agentic AI introduces new security surfaces beyond traditional LLMs.

According to the Securing Agentic Applications Guide 1.0, created by the OWASP Gen AI Security Project, assurance strategies must go beyond static testing to address threats such as prompt injection, privilege escalation, memory poisoning, and plan manipulation.

 

Key Risks

PromptInjection Attacks → Manipulating inputs to trigger unauthorized actions or data leakage

Privilege Escalation → Abusing inter-agent relationships or external integrations to gain higher-level access

Memory Poisoning → Planting malicious content in short- or long-term memory that alters agent decision-making

 

ree

Red Teaming & Emerging Frameworks

The OWASP guide emphasizes red teaming as critical for identifying these risks:

- Red teams must simulate complex agent interactions and autonomy, not just single-model exploits.

- Effective programs combine model review, infrastructure testing, and runtime behavior analysis.

Several specialized frameworks are highlighted:

- AgentDojo → Evaluates prompt injection defenses

- Agentic Radar → Maps vulnerabilities and system functions

- AgentSafetyBench → Benchmarks safety alignment of LLM agents

- AgentFence → Simulates attack scenarios like role confusion and system instruction leakage

 

The more agentic AI is deployed, the greater the risks. These agents are still immature from a security perspective. What stands out in the OWASP guide is how red teaming, memory poisoning defense, and privilege escalation testing are new frontiers for cybersecurity teams. Security professionals must learn not only how these agents work, but also how they interact, plan, and remember.



 
 
 

Stay in touch

ITGRC ADVISORY LTD. 

590 Kingston Road, London, 

United Kingdom, SW20 8DN

​company  number: 12435469

​

Privacy policy

  • Facebook
  • Twitter
  • LinkedIn
  • Instagram
bottom of page