Inside Black Hat Asia 2026
Black Hat Asia 2026 has redefined the security frontline.
TL;DR
The Rise of the Agentic Attacker: Former OpenAI red team lead Ari Herbert-Voss warns that “point-in-time” security testing is obsolete.
Autonomous Offensive AI: New research shows systems that operate continuously at scale, moving from simple chat exploits to full-chain automated hacking.
The “vet” Guardrail: An open-source tool debuted at Arsenal that acts as a “conversational security guard” for AI-generated code.
Deterministic Defense: Tines CEO Eoin Hinchy presented a framework for scaling AI workflows without losing auditability or human oversight.
Regional Surge: The Asia-Pacific region is seeing a massive spike in security investment as threat actors weaponize GenAI for cloud and supply chain attacks.
The Death of the Annual Audit
At Black Hat Asia 2026, the message from the keynote stage was clear: the era of the “yearly penetration test” is over. Ari Herbert-Voss traced the three-year evolution of Agentic Offensive Security: autonomous systems that don’t sleep and don’t need human prompts to move laterally through a network. These agents are designed to probe, exploit, and persist at a scale that human security teams simply cannot match.
This shift means that security is moving from a “check the box” activity to a continuous “System Layer” battle. If attackers are using agents that can think and adapt in real time, our defenses must be equally dynamic. The research presented at the Marina Bay Sands shows that offensive AI is now being used to enhance attacks across cloud infrastructure and supply chains, making the perimeter more porous than ever.
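The gap between the two models is easy to see in code. Here is a minimal sketch (every probe name is a placeholder of mine, not from the keynote) contrasting a one-off audit with the always-on loop an autonomous agent effectively runs; an attacker’s agent is this loop with exploitation and persistence bolted on:

```python
import time
from datetime import datetime, timezone

# Hypothetical probe registry: in practice each entry would be a real
# check (exposed admin panels, stale tokens, public storage buckets).
PROBES = {
    "exposed_admin_panel": lambda: False,
    "stale_oauth_token": lambda: True,
    "public_storage_bucket": lambda: False,
}

def run_probes() -> list[str]:
    """A point-in-time audit is a single call to this function."""
    return [name for name, probe in PROBES.items() if probe()]

def continuous_assessment(interval_seconds: int = 300) -> None:
    """The agentic model: the same probes, run forever, not once a year."""
    while True:
        findings = run_probes()
        stamp = datetime.now(timezone.utc).isoformat()
        if findings:
            print(f"[{stamp}] ALERT: {findings}")
        time.sleep(interval_seconds)
```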
Frameworks Over Features
One of the most critical sessions, led by Tines CEO Eoin Hinchy, addressed the “Implementation Gap.” Many companies are rushing to add AI to their workflows but are introducing massive vulnerabilities in the process. The proposed solution isn’t to remove AI, but to surround it with “Deterministic Automation.”
This approach layers human expertise, rigid and predictable automation, and flexible AI. By building a secure framework for these “Intelligent Workflows,” organizations can scale their operations without sacrificing the auditability that regulators require. It is a move toward “Control Layers,” where the AI is a component of a larger, safer system rather than a standalone black box with high-level permissions.
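Hinchy didn’t present this as code, but the shape of a Control Layer is straightforward to sketch: the AI proposes, a deterministic layer validates, logs, and only then executes. A minimal illustration (the action names and policy are my assumptions, not from the talk):

```python
import json
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("control_layer")

# Deterministic policy: the AI never gets permissions beyond this list,
# regardless of what it "decides" to do.
ALLOWED_ACTIONS = {"create_ticket", "enrich_alert", "notify_oncall"}

@dataclass
class ProposedAction:
    name: str
    params: dict

def ai_propose_action(alert: dict) -> ProposedAction:
    """Stand-in for the flexible AI component (an LLM call in practice)."""
    return ProposedAction(name="notify_oncall", params={"severity": alert["severity"]})

def execute(proposal: ProposedAction) -> None:
    """Rigid, predictable automation: the only path to real side effects."""
    audit_log.info("Executing %s", proposal.name)

def control_layer(alert: dict) -> None:
    """Deterministic wrapper: validate, log, then execute or refuse."""
    proposal = ai_propose_action(alert)
    audit_log.info("AI proposed %s %s", proposal.name, json.dumps(proposal.params))
    if proposal.name not in ALLOWED_ACTIONS:
        audit_log.warning("Refused out-of-policy action: %s", proposal.name)
        return  # the AI is a component, not an authority
    execute(proposal)

control_layer({"severity": "high"})
```

Every proposal is written to the audit log before anything runs, which is exactly the property the regulators in this story care about.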
Developer-First Defense
The event also saw the debut of “vet,” an open-source tool that represents the future of the AI-assisted Software Development Life Cycle (SDLC). Instead of being a separate security scan that runs after the code is written, “vet” acts as a conversational guardrail that integrates directly with AI coding tools.
It provides real-time analysis as code is generated, helping developers identify supply chain risks before the code is even committed to a repository. This is “Security at the Source.” As we move into a world of “vibe coding,” tools like “vet” are becoming the essential interaction layer that prevents AI-generated errors from becoming enterprise-level breaches.
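I haven’t reproduced vet’s actual interface here, but a “security at the source” guardrail can be as simple as a pre-commit hook that refuses to let a risky dependency through. A hypothetical sketch (the blocklist, lockfile shape, and findings are all invented for illustration):

```python
import json
import sys
from pathlib import Path

# Hypothetical advisory data; a real guardrail would query live
# vulnerability and supply-chain intelligence instead.
KNOWN_BAD = {
    ("left-pad-clone", "1.0.0"): "typosquat of a popular package",
    ("evil-logger", "2.3.1"): "known malicious postinstall script",
}

def check_lockfile(lockfile: Path) -> list[str]:
    """Scan an npm-style lockfile before the code is committed."""
    data = json.loads(lockfile.read_text())
    findings = []
    for name, meta in data.get("dependencies", {}).items():
        key = (name, meta.get("version", ""))
        if key in KNOWN_BAD:
            findings.append(f"{name}@{key[1]}: {KNOWN_BAD[key]}")
    return findings

if __name__ == "__main__":
    lockfile = Path("package-lock.json")
    if not lockfile.exists():
        sys.exit(0)  # nothing to check
    findings = check_lockfile(lockfile)
    if findings:
        print("Supply chain risks found, refusing commit:")
        print("\n".join(findings))
        sys.exit(1)  # non-zero exit blocks the commit in a pre-commit hook
```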
My Perspective
I’ve spent a lot of time talking about the “Interaction Layer,” and the research at Black Hat Asia 2026 confirms my biggest fear: the “Trust Gap” is widening. We are giving AI agents the keys to our systems before we have the locks to contain them. The fact that the Asia-Pacific region is seeing a surge in AI-driven supply chain attacks should be a wake-up call for every CISO.
I agree with the shift toward “Real-World Agentic Workflows.” You cannot defend against a multi-agent attack using a single-point solution. You need a system that can monitor the intent of an AI agent in real time. If an agent starts performing “impossible” tasks or accessing data it doesn’t need for its immediate workflow, the system needs to kill that session instantly.
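That kill-switch logic is less exotic than it sounds. A minimal sketch, assuming each agent session declares its required resources up front (every name here is hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class AgentSession:
    session_id: str
    # Scope declared when the workflow starts: the only resources
    # this agent needs for its immediate task.
    allowed_resources: set[str]
    alive: bool = True
    accessed: list[str] = field(default_factory=list)

    def request_access(self, resource: str) -> bool:
        """Deny-and-kill on the first out-of-scope access attempt."""
        if not self.alive:
            return False
        if resource not in self.allowed_resources:
            self.kill(reason=f"out-of-scope access: {resource}")
            return False
        self.accessed.append(resource)
        return True

    def kill(self, reason: str) -> None:
        self.alive = False
        print(f"[{self.session_id}] session terminated: {reason}")

# Usage: a migration agent only needs the two buckets in its workflow.
session = AgentSession("migration-042", {"bucket:src", "bucket:dst"})
session.request_access("bucket:src")        # allowed
session.request_access("hr:salary-table")   # killed instantly
```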
We are moving away from “AI for Security” and toward “Security FOR AI.” The trainings at Black Hat on AI Red Teaming show that the industry is finally waking up to the fact that LLMs and multimodal systems are a completely new attack surface. If you aren’t red-teaming your own agents today, someone else’s agent will do it for you tomorrow.
The keynote by Herbert-Voss isn’t just a prediction; it is the new baseline. In a world of autonomous hackers, your only defense is an autonomous, deterministic control layer. We have to stop treating AI security as a “plugin” and start treating it as the foundational architecture of the 2026 enterprise.
AI Toolkit
Kadoa: An AI-native web scraper that autonomously navigates complex sites to extract clean, structured data.
Dola: A conversational AI calendar assistant that syncs with WhatsApp, Telegram, and Apple Calendar via natural language.
Relume: Uses AI to build site maps and wireframes in minutes, drastically speeding up the design-to-development flow.
ChatPRD: A specialized AI for product managers that turns vague ideas into detailed, professional PRDs.
Wisecut: An AI video editor that automatically removes silences and adds subtitles for rapid content creation.
Prompt of the Day
Role: You are an AI Red Team Lead preparing for a simulation inspired by Black Hat Asia 2026.
Context: We are testing our internal “Agentic Workflow” which uses an AI agent to handle customer data migration between two cloud environments.
Task: Design an “Autonomous Offensive” test scenario.
Requirements:
Identify 3 “Lateral Movement” steps an offensive agent could take if it compromised the migration agent’s OAuth token.
Focus on the “Interaction Layer”: how would the offensive agent attempt to “deceive” the monitoring system while exfiltrating data?
Propose 2 “Deterministic Guardrails” (e.g., IP whitelisting or volume limits) that would break the attack chain even if the AI logic was bypassed.
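For that last requirement, it is worth seeing how small a deterministic guardrail can be. A sketch of the two examples named above, IP allowlisting and a volume limit, with illustrative thresholds of my own:

```python
import ipaddress

# Deterministic limits: these hold even if the AI's reasoning is bypassed.
ALLOWED_NETWORKS = [ipaddress.ip_network("10.20.0.0/16")]  # migration subnet only
MAX_BYTES_PER_HOUR = 50 * 1024**3  # 50 GiB ceiling for this workflow

class TransferGuard:
    def __init__(self) -> None:
        self.bytes_this_hour = 0

    def permit(self, source_ip: str, nbytes: int) -> bool:
        """Both checks are pure arithmetic: no model, no prompt, no bypass."""
        ip = ipaddress.ip_address(source_ip)
        if not any(ip in net for net in ALLOWED_NETWORKS):
            return False  # breaks the chain if the token is replayed elsewhere
        if self.bytes_this_hour + nbytes > MAX_BYTES_PER_HOUR:
            return False  # caps the blast radius of a compromised agent
        self.bytes_this_hour += nbytes
        return True

guard = TransferGuard()
print(guard.permit("10.20.4.7", 1024**3))    # True: in-subnet, under limit
print(guard.permit("203.0.113.9", 1024**2))  # False: outside the allowlist
```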


