8 Comments
D. Reinheardt:

Banning AI doesn’t remove the behavior, it just makes it harder to see.

Suny Choudhary:

Exactly! The phrase 'axing your own feet' fits well here.

Tom Chaydun Music:

Regarding what you wrote, two options:

1) A policy change / Acceptable Use Policy (AUP) that prohibits using AI/LLM tools off the enterprise network.

2) Running an LLM securely on the enterprise network with storage encryption; an on-prem solution.

Suny Choudhary:

Good points. Policies and on-prem models definitely help, especially for highly controlled environments. The tricky part is behavior. Even with those in place, clinicians often still reach for external tools when they’re faster. That’s why I think governance at the workflow level matters too, so data stays protected regardless of which model someone ends up using.

Tom Chaydun Music:

It’s a training and risk issue at the organizational level. If employees are going to use external tools with patients’ sensitive medical data, they need a framework that governs which sensitive data may be entered. That would usually be initiated after a risk analysis assessment, once gaps such as the loss of sensitive data you’re describing are discovered. If you look back over this year and last, you’ll find several security breaches of commercial LLMs and information leaked by various malicious actors. The only other option would be to ban mobile devices at the work site and block internet access to those commercial LLMs, but even that isn’t effective once staff have left the site for the day. It may come down to what is the least acceptable risk to the organization.

Suny Choudhary:

You’re absolutely right that training, risk frameworks, and least-acceptable-risk thinking are key parts of it. The challenge I keep seeing in healthcare is that behavior often outruns policy: when clinicians are under time pressure, they default to whatever tool is fastest. That’s why I think combining training with technical guardrails (like workflow-level controls) is where things start to actually work in practice.
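To make the "workflow-level control" idea concrete, here is a minimal sketch of a redaction layer that could sit between clinicians and any external LLM, stripping obvious PHI before text leaves the network. The pattern names and regexes are illustrative assumptions, not a complete or production-grade PHI filter:

```python
import re

# Hypothetical guardrail: redact common PHI patterns before any text
# is sent to an external LLM. These patterns are illustrative only;
# a real deployment would need a far more complete PHI taxonomy.
PHI_PATTERNS = {
    "MRN": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact_phi(text: str) -> str:
    """Replace matched PHI patterns with bracketed placeholders."""
    for label, pattern in PHI_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Patient MRN: 12345678, DOB 04/12/1985, callback 555-867-5309."
print(redact_phi(note))  # → Patient [MRN], DOB [DATE], callback [PHONE].
```

The point of putting this at the workflow layer is that it applies regardless of which model the clinician reaches for, external or on-prem, which matches the behavior problem described above.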

Basil Wong:

AI Toolkit: you could probably add CloudCruise to the list as well.

Suny Choudhary:

Thank you for mentioning, I’ll look into it!