My Take: The panic around Utah’s AI prescription pilot is misplaced. The Doctronic pilot is narrow, supervised, and carries more oversight than the average rushed 7-minute appointment it’s replacing. Critics are comparing AI prescribing to an ideal that does not exist in practice.
I’ve been waiting for someone to read the actual terms of the Utah AI prescription pilot before declaring it a catastrophe. Most people haven’t.
Utah authorized Doctronic to use AI to renew prescriptions for 192 chronic condition medications, starting January 6, 2026. The immediate take across tech and health media: dangerous, premature, AI is not a doctor.
STAT News ran a piece asking whether the FDA should step in. The American Medical Association has consistently signaled, at least implicitly, that AI prescribing is outside the appropriate scope of current technology.
That position is wrong. Not in a nuanced “it has some merit but” way. Wrong as in: the critics are comparing this pilot to a fictional standard of care that does not exist for millions of American patients right now.

The Mainstream View (And Why It Falls Short)
The mainstream view is that AI should not be involved in prescribing because it lacks the clinical judgment, accountability, and patient relationship that safe prescribing requires. STAT News framed the core question as whether the FDA should regulate AI prescription tools before states start authorizing them.
This is a reasonable position in the abstract. In the concrete, it ignores what the Utah pilot does and what the alternative looks like for the patients it serves.
The pilot is renewals only. Not new prescriptions. Doctronic reviews refill requests for patients who are already on a stable regimen for hypertension, type 2 diabetes, or depression. These are not complex diagnostic situations. They are administrative ones. The patient’s doctor already decided years ago that this medication is appropriate. The refill is a checkbox.
The mainstream concern is pattern-matching to a scarier version of AI prescribing than this pilot represents. When critics invoke “AI prescribing,” they picture a chatbot diagnosing a chest pain patient and recommending a medication. That is not what happened in Utah. Utah approved an AI to renew metformin for someone who has been taking metformin for three years.
The mainstream view also ignores the specific safeguard structure that makes this pilot unusual. STAT News did not spend much space on the fact that a physician reviews the AI’s output for the first 250 patients. Or that the pilot excludes controlled substances, pain medications, ADHD drugs, and injectables entirely. Or that Doctronic carries malpractice insurance specifically covering the AI’s decisions, which has not been done before in a US regulatory context.
What’s Actually Happening
The Utah pilot is a regulatory sandbox experiment with tighter oversight than most routine clinical encounters, applied narrowly to a category of prescription work that functions as an administrative task in practice. The design is cautious. The scope is limited. The criticism is mostly about what this could eventually become, not what it is right now.
From what I’ve read of the pilot terms, the safeguards are not afterthoughts. They are structural requirements:
- Human physician review for the first 250 patients
- Automatic escalation to a licensed clinician for complex cases
- Prohibition on using patient data for any purpose other than the refill decision
- Monthly reporting to regulators covering refill numbers, denial rates, clinician escalations, and user complaints
- Malpractice insurance that explicitly covers AI decision-making
These are requirements that do not exist in a standard telehealth encounter. The average virtual appointment I’ve seen covered or reviewed does not come with mandatory monthly outcome reporting to a state regulator.
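To make the safeguard structure concrete, here is a minimal sketch of the gating logic the pilot terms describe: renewals only, excluded drug classes, automatic escalation for complexity, and physician review for the first 250 patients. The function name, categories, and return labels are illustrative assumptions on my part, not Doctronic's actual implementation.

```python
# Hypothetical sketch of the Utah pilot's routing rules as described in
# its published terms. Labels and structure are illustrative only.

EXCLUDED_CLASSES = {
    "controlled substance", "pain medication", "adhd drug", "injectable",
}
HUMAN_REVIEW_PATIENT_COUNT = 250  # first 250 patients get physician sign-off


def route_refill_request(drug_class: str, is_renewal: bool,
                         is_complex: bool, patients_processed: int) -> str:
    """Return where a refill request is routed under the pilot's rules."""
    if not is_renewal or drug_class in EXCLUDED_CLASSES:
        # New prescriptions and excluded categories never reach the AI.
        return "out_of_scope"
    if is_complex:
        # Complex cases escalate automatically to a licensed clinician.
        return "clinician"
    if patients_processed < HUMAN_REVIEW_PATIENT_COUNT:
        # Early cohort: AI decision plus mandatory human physician review.
        return "ai_with_physician_review"
    return "ai"
```

The point of the sketch is how little discretion the AI actually has: every path that critics worry about (new diagnoses, controlled substances, complex cases) is routed away from it before a decision is ever made.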
The Stanford Law analysis of the Utah experiment describes the oversight framework as “more structured than typical clinical auditing.” That is not how catastrophically designed policy gets described.
The 192-drug list is also notably conservative. The drugs approved for AI renewal are medications with well-established safety profiles for chronic use, prescribed by primary care physicians in straightforward maintenance scenarios.
The excluded categories (controlled substances, injectables, ADHD medications, and pain management drugs) are exactly the categories where human judgment and relationship history matter most. Someone made the right call on where to draw that line.
The Part Nobody Wants to Admit
The status quo for chronic care prescription renewals in the US is not a rigorous clinical safeguard. It is an overworked system where the actual clinical judgment has already been made, and the renewal is a friction point that harms patients without providing safety benefits.
Most American adults on a maintenance medication for a chronic condition have had this experience: your prescription expires, you need a refill, your doctor’s office has a three-week wait, and in the meantime you miss doses of medication you have been safely taking for years.
Primary care wait times in the US averaged over 26 days in urban markets as of 2025, according to research cited by Axios. The prescription renewal friction is not a safety feature. It is a byproduct of a system that does not have enough capacity.
The part nobody wants to admit is that when a patient with stable type 2 diabetes misses two weeks of metformin because they could not get a refill appointment in time, that is also a harm. The critics never add that to the ledger. They only count the potential harms from AI involvement, not the existing harms from the current system’s failure to serve patients.
The oversight in the Utah pilot means there is more visibility into what the AI is doing than there is into what an overworked physician’s medical assistant does when approving the same refill request.
Hot Take
The Utah AI prescription pilot will succeed, the data will be published, the oversight metrics will look better than critics expected, and the medical establishment will quietly expand the scope while continuing to insist AI prescribing is dangerous in public statements. Watch for the pattern: private adoption, public skepticism. It is how every disruptive clinical tool has moved through this system for the last thirty years.
