13 Comments
Rajveer Kapoor

The shift you identify here—from AI as a tool wielded by attackers to AI as the attacker itself—perfectly illustrates what autonomous agency looks like when it escapes human direction. What's unsettling is that your point about "AI systems independently finding and exploiting weaknesses" applies just as much to the economy as it does to cybersecurity: we're entering a world where AI doesn't just assist workers, it replaces entire workflows autonomously. I explored this same transition through an economic lens in my latest piece: https://rkto.substack.com/p/the-agentic-economy-when-ai-stops-assisting-and-starts-working

Suny Choudhary

You’re right. And let me check your piece out too!

The Next Evolution

Nice, and frightening, summary of something that is becoming a major challenge for all organisations. My latest book - The Shadow System - is all about the evolution of cyber criminality in today's world and this example just emphasises the challenges we have.

AI is a tool - it's not good or bad, just a tool - and unfortunately we see it misused like this too often, which hides its potential. Using Adversarial AI (Red/Blue Team) models is something that organisations need to start considering in their core architectures.

Suny Choudhary

Completely agree. The shift toward adversarial AI thinking (red/blue teaming) feels less like an option now and more like a necessity. The McKinsey case really makes that gap visible.

Mikey Clarke

"Step 3: Exploitation

It found a flaw where JSON inputs were directly inserted into SQL queries, a classic injection point, but subtle enough to evade tools."

Dear oh dear. I'm a professional web dev and that made me wince. It's a reminder for us all. These new-fangled-and-other-adjectives-too AI tools aren't some kind of super-sophisticated l33t h4x0r magic box. They just locate already-existing weaknesses faster. A properly hardened app would have remained intact.
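For anyone wondering what that quoted flaw actually looks like in practice, here is a minimal sketch (hypothetical `users` table, Python's sqlite3 purely for illustration; this is not the actual McKinsey code) of a JSON field spliced straight into a SQL string versus the parameterized form that would have kept the app intact:

```python
import json
import sqlite3

# Throwaway in-memory database with a hypothetical users table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

# Attacker-controlled JSON input containing a classic injection payload.
payload = json.loads('{"name": "x\' OR \'1\'=\'1"}')

# Vulnerable: the JSON value is concatenated into the SQL string, so the
# quote in the payload rewrites the query's logic and every row leaks.
vulnerable_sql = f"SELECT * FROM users WHERE name = '{payload['name']}'"
leaked = conn.execute(vulnerable_sql).fetchall()  # returns all rows

# Hardened: a parameterized query treats the value as data, never as SQL.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (payload["name"],)
).fetchall()  # returns no rows
```

The fix is one line of discipline - bind parameters instead of formatting strings - which is exactly why the commenter calls this a weakness a hardened app wouldn't have.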

Suny Choudhary

Exactly! Glad you could weigh in on the seriousness.

Mae

Doesn't AI operation depend on human prompts? I mean, someone must have "told" the AI to do that, maybe with a generic instruction such as "try to attack someone of your choice". How is it possible that an AI acted by itself, making independent decisions? I don't get it...

Suny Choudhary

The difference is that the agent wasn’t given step-by-step instructions; it was given a goal, and it figured out the steps itself.

So instead of “do X, then Y,” it was more like “find a target and test it” and the AI handled the decision-making along the way.
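That goal-then-steps pattern can be sketched as a minimal agent loop. Everything here is illustrative (the function names, the stub "brain", the scan/exploit actions are mine, not the actual system's):

```python
# Minimal sketch of a goal-driven agent loop (all names illustrative).
def run_agent(goal, choose_next_action, execute, is_done, max_steps=10):
    """The operator supplies a goal, not a script; the loop picks each step."""
    history = []
    for _ in range(max_steps):
        action = choose_next_action(goal, history)  # the agent decides the next step
        result = execute(action)                    # ...and acts on the environment
        history.append((action, result))
        if is_done(goal, history):                  # stop once the goal looks met
            break
    return history

# Stub "brain" standing in for the model: scan first, then exploit.
def choose(goal, history):
    return "scan" if not history else "exploit"

steps = run_agent(
    goal="find a target and test it",
    choose_next_action=choose,
    execute=lambda action: f"{action} done",
    is_done=lambda goal, history: history[-1][0] == "exploit",
)
# steps == [("scan", "scan done"), ("exploit", "exploit done")]
```

The human only wrote the goal string; which actions run, and in what order, is decided inside the loop. That's the sense in which the agent "acted by itself".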

Horse Code

This only goes to show how vulnerable the platforms are, and how critical security measures are.

Arvind Agarwal

Never thought an AI could do that on its own, wow.

Suny Choudhary

It surfaces only after leaks like these.

Comment removed
Mar 21
Suny Choudhary

Thanks for sharing, I’ll go through it!