The AI Attack Surface

What Every Security Team Should Be Testing

Introduction

I recently had the pleasure of presenting at IT Summit 2025 | SynerComm, where I discussed the AI attack surface and what every security team should be testing (slides). The talk initially focused on the title topic, but as I delved deeper, it naturally transitioned into what I am using AI for and what excites me as a penetration tester. This blog post is a summary of my talk, and in a follow-up blog, I'll share more about my AI usage and excitement.

The AI Attack Surface: What Every Security Team Should Be Testing

In today's rapidly evolving technological landscape, AI has become an integral part of enterprise software. However, with its widespread adoption comes a new set of security challenges that every security team must address. In this blog post, we'll explore the AI attack surface and discuss what every security team should be testing to ensure robust AI security.

Understanding the AI Attack Surface

The AI attack surface encompasses various components, including chatbots, AI wrappers, and enterprise software integrations. Continuous testing is crucial to keep pace with the evolving threats. Traditional point-in-time reviews are no longer sufficient; instead, testing should be built into your CI/CD pipeline to ensure ongoing security.

Important Memes

Claude —dangerously-skip-permissions (yolo mode)

A sticker from DEF CON 33 that I wish I had

Common AI Vulnerabilities and Testing Methodologies

AI systems are susceptible to several vulnerabilities, including direct and indirect prompt injections, over-permissive agents, and attack chaining. Traditional scanners often miss these logic-layer vulnerabilities, making it essential to employ both in-house testing and third-party assessments.

Practical AI Security Testing

To effectively test AI systems, security teams should focus on the following areas:

  • Prompt Injection: Hidden instructions in user input or documents can lead to unintended actions.

  • Over-Permissive Functions: AI agents with excessive permissions can execute destructive commands.

  • Insecure Output Handling: Unsafe use of AI output in other systems can result in security breaches.

  • Supply Chain Vulnerabilities: Poisoned data sources or dependencies can compromise AI models.

AI-Specific Threat Models and Frameworks

Frameworks like MITRE ATLAS and the OWASP GenAI Red Team Guide provide structured approaches to evaluating AI vulnerabilities. These frameworks help defenders build AI-specific threat models and align testing with known adversarial behaviors.

Continuous AI Security Testing

Continuous testing is essential to keep up with the dynamic nature of AI threats. Tools like Burp AI, nVidia’s Garak, and Meta’s PurpleLlama offer automated scanning and adversarial testing capabilities. However, manual and continuous red teaming remains irreplaceable for comprehensive security.

Conclusion

AI is transforming the way we interact with technology, but it also introduces new security risks. By integrating continuous AI security testing into your CI/CD pipeline and leveraging both in-house and third-party assessments, you can safeguard your AI systems against emerging threats. Start small by testing your chatbots, RAG apps, and agents now, and build a robust AI-specific security program to stay ahead of the curve.

Hack the planet chatbots!