6 Comments
D. Reinheardt

Banning AI doesn’t remove the behavior, it just makes it harder to see.

Suny Choudhary

Exactly! The phrase ‘axing your own feet’ fits well here.

Basil Wong

AI Toolkit: you could probably add CloudCruise to the list as well.

Suny Choudhary

Thank you for mentioning it, I’ll look into it!

Comment deleted (Mar 15, edited)
Suny Choudhary

Good points. Policies and on-prem models definitely help, especially for highly controlled environments. The tricky part is behavior. Even with those in place, clinicians often still reach for external tools when they’re faster. That’s why I think governance at the workflow level matters too, so data stays protected regardless of which model someone ends up using.

Comment deleted (Mar 16, edited)
Suny Choudhary

You’re absolutely right that training, risk frameworks, and least-acceptable-risk thinking are key parts of it. The challenge I keep seeing in healthcare is that behavior often outruns policy: when clinicians are under time pressure, they default to whatever tool is fastest. That’s why I think combining training with technical guardrails (like workflow-level controls) is where things start to actually work in practice.