Why blocking official AI tools pushes clinicians toward shadow workflows, and why browser-level governance is becoming healthcare’s real AI security layer.
Good points. Policies and on-prem models definitely help, especially for highly controlled environments. The tricky part is behavior. Even with those in place, clinicians often still reach for external tools when they’re faster. That’s why I think governance at the workflow level matters too, so data stays protected regardless of which model someone ends up using.
It’s a training and risk issue at the organizational level. If employees are going to use external tools with patients’ sensitive medical data, they need to follow a framework governing what sensitive data may be entered. That would usually be initiated after a risk analysis was done and gaps were discovered, such as the loss of sensitive data you’re describing. If you look back over the past year or two, you’ll find a few security breaches of commercial LLMs and information leaked by various malicious actors. The only other option would be to ban mobile devices at the work site and block internet access to those commercial LLMs, but that’s still not effective once staff have left the site for the day. It may come down to what is the least acceptable risk to the organization.
You’re absolutely right that training, risk frameworks, and least-acceptable-risk thinking are key parts of it. The challenge I keep seeing in healthcare is that behavior often outruns policy: when clinicians are under time pressure, they default to whatever tool is fastest. That’s why I think combining training with technical guardrails (like workflow-level controls) is where things start to actually work in practice.
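To make "workflow-level controls" concrete, here is a minimal sketch of one such guardrail: scrubbing obvious PHI-like patterns from a prompt before it is allowed to leave the network toward an external LLM. The patterns, labels, and function name below are assumptions for illustration only; they are nowhere near a complete PHI definition, and a real deployment would pair this with audit logging and policy enforcement at the browser or proxy layer.

```python
import re

# Hypothetical guardrail sketch: redact PHI-like substrings before a prompt
# can be sent to an external LLM. Patterns below are illustrative, not a
# complete or HIPAA-grade PHI definition.
PHI_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "MRN": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def redact_phi(prompt: str) -> tuple[str, list[str]]:
    """Replace PHI-like substrings with placeholders.

    Returns the scrubbed text plus the labels that fired, so the
    organization can audit *that* something was caught without
    logging the sensitive values themselves.
    """
    hits = []
    for label, pattern in PHI_PATTERNS.items():
        if pattern.search(prompt):
            hits.append(label)
            prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt, hits

scrubbed, hits = redact_phi(
    "Pt MRN: 00123456, DOB 4/12/1957, call 555-867-5309."
)
```

The design point is that the control sits in the workflow itself, so the data stays protected regardless of which model the clinician reaches for, which is exactly where policy alone tends to fail.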
Banning AI doesn’t remove the behavior, it just makes it harder to see.
Exactly! The phrase ‘axing your own feet’ goes well in this case.
Regarding what you wrote:
1) Policy change / Acceptable Use Policy (AUP) against using AI / LLM off the enterprise network.
2) Running an LLM on the enterprise network securely, with storage encryption (an on-prem solution).
AI Toolkit: could probably add CloudCruise to the list as well
Thank you for mentioning it, I’ll look into it!