LLMs in Healthcare: Powerful Tools, Not Clinician Replacements
- Ozzie Paez
If you use large language models (LLMs) within their proper limits, they can be remarkably empowering tools. But to use them effectively and safely, you must understand what they are not and cannot do, and what they can convincingly fake.

I start my training presentations with Gemini’s critically important caution about itself and LLMs in general:
“It's important to remember that while LLMs can generate text that appears human-like and even creative, they are fundamentally statistical models that predict the most probable sequence of tokens [words, portions of words, punctuation] based on their training data. They do not possess genuine comprehension or understanding in the way humans do.”
Let that sink in. LLMs are powerful tools that can help doctors improve their practices and deliver more compelling patient value, but they are not sentient, have no understanding of their own outputs, and certainly do not feel empathy, compassion, or other human emotions.
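What does "predicting the most probable sequence of tokens" actually look like? The minimal sketch below illustrates the idea, assuming the open-source Hugging Face transformers library and the publicly available gpt2 model (not the production systems behind Gemini, whose internals are not public). At every step, the model scores every token in its vocabulary and appends the most likely one:

```python
# A minimal sketch of greedy next-token prediction, assuming the
# Hugging Face transformers library and the public "gpt2" model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The patient presented with chest pain and"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

for _ in range(10):                           # extend the text by ten tokens
    with torch.no_grad():
        logits = model(input_ids).logits      # a score for every vocabulary token
    next_id = logits[0, -1].argmax()          # pick the statistically most likely one
    input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Notice what the loop never does: it never consults medical knowledge or weighs a patient's interests. It only extends a statistically likely word sequence from its training data, which is exactly the limitation Gemini's caution describes.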
Now that you know these limitations, share your thinking by answering three questions and leaving your insights in the comments:
❓ Three Questions:
1. In what ways, if any, are large language models meaningfully equivalent to physicians, or to any other human being?
2. Is it reasonable and safe for clinicians and providers to treat LLM outputs as equivalent to clinician decisions and advice?
3. Is it ethical for clinicians and providers to exploit LLMs’ ability to project simulated emotions, including affection, empathy, sympathy, joy, and concern, to influence their patients’ feelings and decision-making?
My take on LLMs in healthcare?
LLMs are here to stay because, despite known problems, their ability to engage in everyday language and respond to almost any query is uniquely valuable. Doctors have the education, expertise, and real intelligence to use these remarkable technologies effectively and safely, provided they are properly trained.
So, if you are a doctor or provider who wants to use LLMs safely and responsibly to deliver more compelling patient value, then I can help you. You will learn to integrate these tools into your business and care delivery models using proven strategies for safe and effective use. We will not, however, promote capabilities they do not have or anthropomorphize them into something they are not. You and your patients deserve better.
Still curious? Then let’s have a conversation to discuss your goals and concerns, and explore how these remarkable technologies can innovate and transform your practice. Reach out directly: ozzie@oprhealth.com