Is U.S. Healthcare Ready for Its AI Revolution? The Truth About AI in Medicine with Glenn Loomis
- Juan Allan
- Sep 15
- 5 min read
Glenn Loomis, MD, explores AI's real-world impact on US healthcare, from administrative gains to regulatory hurdles shaping the future of patient care

The greatest barrier to AI revolutionizing American healthcare isn't technological limitation, but a regulatory and cultural framework that treats artificial intelligence as a flawless device rather than a fallible, yet superior, partner to human clinicians.
To explore this critical tension between innovation and implementation, we are speaking with Glenn Loomis, Founder & CEO of Query Health, and a physician on the front lines of healthcare technology. With deep expertise in both clinical practice and AI integration, Loomis provides a candid and crucial assessment of where AI is truly making an impact today, the formidable challenges holding it back, and the exponential growth he predicts is just on the horizon for the healthcare sector.
Interview with Glenn Loomis
How is AI currently being used in the U.S. healthcare system to improve patient diagnosis and treatment?
AI is currently used only in limited ways to improve diagnosis and treatment. There are numerous applications that incorporate machine learning to sift through data and alert clinicians to potential diagnoses or worsening conditions. These are typically purpose-built for a single condition or a few related conditions. LLMs are being used in limited ways as well.
The biggest application is OpenEvidence, which has a curated dataset that ensures only medically appropriate sources are used. However, it misses several important datasets, such as national cancer guidelines. It is being used by a large proportion of US physicians. Other tools built to help with diagnosis and treatment include Pathway, Kahun, etc. Few of these are well integrated into physician workflows.
What are the main benefits of using AI in hospitals and clinics across the United States?
Currently, the main use for AI is resolving administrative issues such as coding, billing, and insurance verification. Another large and growing application is ambient scribing, in which AI listens to patient-provider interactions and creates medical record entries by transcribing and summarizing them.
Fewer applications are focused directly on clinical care. Most of these have achieved limited market penetration and are still looking for traction.
What challenges do U.S. healthcare providers face when implementing AI technologies, such as privacy or bias issues?
The largest challenges to implementing AI in healthcare are as follows:
i. Regulation – The FDA seeks to regulate AI as a medical device. This may be appropriate for machine learning applications, but the regulatory framework does not really work for LLM-based applications. LLMs do not follow a prescribed path in finding answers. Rather, they use prompts that tell them how to evaluate a problem and then apply probabilistic modeling to arrive at the best answer. This is very much like the way humans use successive approximation to reach an answer. From a regulatory standpoint, we need to treat LLMs more like humans than like software: grading them on their propensity to get the right answer, rather than on a requirement that they never get a wrong answer. The current misguided approach to LLM regulation is keeping life-saving technology out of the hands of doctors and patients.
ii. Fear – Providers, nurses, medical assistants, clerks, etc. are all very worried that AI will take their jobs. Whether expressed as reluctance to use an application, bravado about how AI can never measure up, or anger directed toward the “suits” who are “pushing” AI on them, the result is the same: delay and distraction to forestall the inevitable dissemination of AI technology in healthcare.
iii. Privacy Issues – The most cutting-edge LLMs (so-called frontier models) are not PHI-compliant. There are LLMs that can be run on premises in a PHI-compliant manner, but they tend to be about one generation behind in capability. This is delaying the best technology from being used in clinical applications.
iv. LLM hallucinations – Everyone knows that LLMs “make up facts.” This is a feature, not a bug: it is what gives LLMs some level of creativity. Unfortunately, healthcare cannot afford that level of inaccuracy, so there is hesitation to use LLMs in clinical practice.
What is not clearly communicated is that people make things up every day, too. Doctors, nurses, and other clinicians make errors of commission and omission daily as their fallible memories grasp for the facts they need to do their jobs. LLMs are much more accurate by comparison. This is another reason LLMs should be judged on whether they are better than humans, rather than against perfection.
Hallucinations can be minimized, or nearly eliminated, with some straightforward engineering. While widely cited as a barrier, they should not be a reason to avoid AI.
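One common engineering pattern behind that claim is grounding: requiring every generated statement to be supported by a retrieved source passage and discarding anything that is not. The sketch below is illustrative only, not from the interview; the corpus, claims, and overlap threshold are hypothetical placeholders standing in for a real retrieval pipeline and curated medical sources.

```python
def _tokens(text):
    """Lowercase word set for a crude textual-overlap check."""
    return set(text.lower().split())

def is_supported(claim, sources, threshold=0.6):
    """Return True if enough of the claim's words appear in any one source."""
    claim_words = _tokens(claim)
    for src in sources:
        overlap = len(claim_words & _tokens(src)) / len(claim_words)
        if overlap >= threshold:
            return True
    return False

def filter_unsupported(claims, sources):
    """Keep only claims grounded in the source passages; drop the rest."""
    return [c for c in claims if is_supported(c, sources)]

# Hypothetical curated source and model outputs, for illustration only.
sources = [
    "metformin is a first line treatment for type 2 diabetes",
]
claims = [
    "metformin is a first line treatment for type 2 diabetes",
    "metformin cures type 1 diabetes in all patients",
]
print(filter_unsupported(claims, sources))  # only the supported claim survives
```

Production systems replace the word-overlap check with semantic retrieval and an entailment model, but the shape is the same: generate, verify against trusted sources, suppress what cannot be verified.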
How is AI helping to reduce costs and improve efficiency in the U.S. medical industry?
AI is automating tasks that are not value-added. This is mostly true in the administrative space at the moment, but it can happen in the clinical arena as well. AI will soon be used to help clinicians and staff scale. Currently, healthcare’s most valuable asset is its people, but people do not scale.
Everything is done as a one-to-one interaction that has changed little over hundreds, even thousands, of years. Now AI can be the partner we need, handling parts of those interactions and enabling more of a one-to-many approach between clinicians and patients.
What regulations exist in the U.S. to ensure the ethical and safe use of AI in healthcare?
There are many laws and regulations related to AI in medicine, but here are the main ones:
i. FDA Regulation of Medical Devices & Software as a Medical Device (SaMD). If an AI tool qualifies as a medical device (or falls under Software as a Medical Device), the U.S. Food and Drug Administration (FDA) has authority under the Federal Food, Drug, and Cosmetic Act. Such tools must satisfy requirements for safety, effectiveness, possibly clinical trials, ongoing monitoring, risk mitigation, etc.
ii. 21st Century Cures Act (2016). Impacts software regulation among other things: defines how medical software is regulated and provides a framework for what counts as a medical device. Also addresses electronic health record (EHR) data, interoperability, and “information blocking” which is relevant when AI systems rely on aggregated health data.
iii. Health Insurance Portability and Accountability Act (HIPAA). Protects privacy and security of patient health data. Any AI system that processes protected health information (PHI) must comply with HIPAA’s rules regarding consent, use, and disclosure. Also, issues of de-identification of data and re-identification risk are relevant.
iv. Executive Orders & Guidance. The Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI (2023) directed HHS (Health and Human Services) to develop a strategic plan for AI in health and human services, and to issue an “AI assurance policy” for how AI tools should be evaluated in that context. There are also agency-level guidance documents (from HHS, FDA, etc.) about best practices for AI development: transparency, bias mitigation, validation, monitoring.
What is the projected growth of AI in the American healthcare sector over the next 5–10 years?
The growth of AI in American healthcare is exponential. In 2023, the US healthcare AI segment was estimated at about $12 billion. By 2024, this had grown to an estimated $26 billion, and it is projected to exceed $110 billion by 2030. That is the most conservative estimate, with some estimates topping $300 billion. Nearly every company in healthcare is rushing to implement AI in some manner.
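The dollar figures above come from the interview; a quick compound-annual-growth-rate calculation (the function itself is just illustrative arithmetic) shows what "exponential" means in practice here:

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate between two values over a span of years."""
    return (end_value / start_value) ** (1 / years) - 1

# Market-size figures cited in the interview (USD billions).
print(f"2023 to 2024 growth: {cagr(12, 26, 1):.0%}")            # roughly 117%
print(f"2024 to 2030 implied CAGR to $110B: {cagr(26, 110, 6):.0%}")  # roughly 27%
```

Even the conservative $110 billion projection implies the market more than quadrupling from 2024, sustaining roughly 27% annual growth for six years.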


