Your Team is the Glitch in the System
The tools are easy to deploy. Changing how people actually use them? That’s where the wheels fall off.
TL;DR
The Literacy Gap: Deploying an agent is a “One-Click” event; training a human to oversee it is a six-month journey.
Shadow AI 2.0: Employees aren’t just using unapproved chatbots anymore; they are connecting unapproved agents to company data to “fix” their own workflows.
The Trust Paradox: Teams either trust the AI too little (resistance) or too much (rubber-stamping), both of which break traditional governance.
Behavioral Drift: Just like models drift, human habits drift. If a policy is too friction-heavy, people will silently bypass it until it becomes a cultural norm.
The Friction vs. Compliance War
Most AI governance is designed by people who love spreadsheets and hate risk. They build “perfect” frameworks with ten layers of approval. But in the real world, your marketing manager has a deadline in two hours. If your “governed” AI tool requires a three-day audit for every prompt, that manager is going to use a personal, ungoverned tool in a private browser tab.
Governance fails when the cost of compliance is higher than the perceived risk of a shortcut. In 2026, we are seeing a massive spike in “Shadow Agentic Workflows”, where employees use personal automation platforms to link corporate emails to external LLMs because the official company tool is “too clunky.” You can’t govern what people are doing in the dark.
The “Rubber-Stamp” Reflex
We keep talking about “Human-in-the-Loop,” but we forget about human nature. When a person is asked to review 100 AI-generated reports a day, and the first 99 are perfect, they stop reading. By the time the 100th report contains a catastrophic error, the human is on autopilot.
This is “Automation Bias.” Our brains are wired to find the path of least resistance. If the AI is usually right, we stop checking. This means your “human oversight” layer is often just a ritual rather than a real safety check. Modern governance needs to account for the fact that humans are bored, tired, and easily impressed.
Culture Over Code
You can write the most sophisticated “Kill Switch” in the world, but if your corporate culture rewards “speed at all costs,” someone will eventually disable that switch. The real challenge of 2026 is moving from Enforcement (making people follow rules) to Literacy (making people understand the why behind the rules). If your team doesn’t understand why a prompt injection is dangerous, they will keep trying to “trick” the AI into giving them better results, unwittingly opening the door to a breach.
My Perspective
We focus on securing the technical loop, but we’re the first to admit: technology is a secondary problem. The primary problem is that we are giving “God-like” tools to people with “Intern-level” AI literacy.
Stop trying to fix human behavior with 50-page PDFs. Start building “Invisible Governance.” The best policy is one where the safe path is also the easiest path. If you make it harder for an employee to do the wrong thing than the right thing, you don’t need to “enforce” anything; the system does it for you.
AI Toolkit
Glean: An enterprise search and AI assistant that keeps governance “invisible” by strictly respecting existing user permissions across all company apps.
Arthur: Provides a “Model Monitoring” layer that helps teams visualize where human-AI collaboration is breaking down in real-time.
Tines: A workflow automation platform that lets you build “Human-in-the-Loop” steps that are actually fast enough for people to use.
Credo AI: Focuses on “Responsible AI” by aligning technical guardrails with actual human organizational goals.
Vanta: Automates the boring parts of compliance so employees don’t feel the need to take shortcuts around security rules.
Prompt of the Day
Role: You are a “Behavioral Security Auditor.”
Context: You’ve discovered that the Finance team has been using an “unapproved” AI agent to summarize invoices because the official tool was “too slow.” The unapproved tool is sending sensitive data to a third-party server.
Task: Design a “Nudge” instead of a “Hammer.”
Requirements:
Propose one “Friction Reduction” change to the official tool that would make the Finance team want to switch back.
Draft a 2-sentence Slack message to the team that explains the risk without sounding like a “compliance lecture.”
Create a “Safety Reward” system: how do you incentivize teams to report “Shadow AI” use instead of hiding it?


