The Safe and Meaningful Integration of AI with Medicine is Possible

December 2 | Posted by mrossol | AI, Math/Statistics, Medicine

Very interesting. The theory is similar to “The Toyota Way,” with the aid of an AI tool. My recollection is that nothing in this piece addresses how the AI tool is trained; i.e., does it flag an event after seeing 3 similar cases? 5? Nevertheless, this use of AI seems like it could change a few paradigms. mrossol

Source: The Safe and Meaningful Integration of AI with Medicine is Possible

In 2004, as a young Assistant Professor in the Department of Pathology in the School of Medicine at the University of Pittsburgh, I made a suggestion that, in retrospect, required neither radical technology nor prophetic genius. I proposed that a hospital system—one of the most advanced in the nation, boasting research budgets, satellite campuses, and a fleet of administrators whose titles barely fit on business cards—could do something so basic, so deeply moral, that it shook the institutional spine to its core: it could learn. Not just in theory. Not just in seminars or postmortems. But in practice, in real time, from its own decisions, its own outcomes, its own near-misses and unnoticed recoveries. The proposal was simple. Equip the system with memory. Allow it to reflect. Build a mirror that helped it learn.

The idea did not die because it lacked clarity. It died because it made too much sense. After I sent it to then-CEO Jeff Romoff, I was warned by my division chief, a creature with too many elbows, to never speak of it again, or I would lose my job. The message was unambiguous: anyone proposing a system trained to perform well and improve patient outcomes is a threat.

For years, that warning remained lodged beneath the surface of everything I saw. Each time a preventable error passed without remark, each time a clinician wondered aloud whether there might have been a better path, the silence became its own indictment. Medicine, for all its credentials and rituals, does not know what it does. It does not track. It does not measure. It guesses, defers, and moves on to keep up. It hides its outcomes in abstraction and deflects blame to avoid liability. It destroys every opportunity to improve through real-world learning.

In every other domain that touches complexity—aviation, engineering, even finance—systems learn. When a plane nearly falls from the sky, the event is dissected across continents. Pilots receive private report cards. When a patient crashes post-op, the paperwork is filed, the chart closed, and the data forgotten. The hospital moves forward as though nothing has happened. There is no reckoning, only the performance of one.

Ask any institution: how many of your readmissions last month were preventable? Which of your surgeons consistently produces the best long-term outcomes? How many medication combinations led to harm that could have been anticipated? The answers will not come in numbers. They will come in stories, evasions, or head-shaking silence. Somewhere right now, the worst-performing kidney transplant specialist in the United States is operating on a patient. That surgeon does not know their standing. Shouldn’t they know? Shouldn’t they want to know? And we tolerate this.

But we should not.

For a long while, the idea existed only in fragments. Whispered among reformers. Sketched in conference margins. But the world changed. The volume of data grew too vast to ignore. Algorithms began diagnosing tumors more accurately than generalists. Pandemic dashboards mapped outbreaks before agencies did. The myth that healthcare could remain artisanal in a computational age began to fray. And the core insight—that a hospital, if allowed to reflect, could become more than a place of procedure—began to take root again.

Total Health Outcome Awareness—THOA, a phrase I first uttered in 2004—should be designed to offer something medicine never permitted itself to hold: a conscience. Not a judgmental ledger. Not a bureaucratic surveillance tool. A conscience in the truest sense: a capacity for reflection, refinement, and humility. Paired with it are Private Report Cards (PRCs) for physicians and nurses, not the crude metrics that humiliate or penalize, but the gentler clarity of comparison. These reports will show, privately and without shame, how a physician’s decisions align or fail to align with the health outcomes of colleagues facing similar cases. Days to wellness. Survivorship. Saved lives and limbs. They do not punish. They illuminate.

Hypothetical Use Cases

Consider Mrs. Ellis. Seventy-four. Still tending to her roses, still independent, still sharp. Her prescriptions, all properly filled, sat beside the breadbox. A quick review showed no red flags. But THOA had seen it before: that exact drug combination in elderly women led, all too often, to arrhythmia and sudden collapse. Not a theory—a pattern carved into the data of thousands of real cases. No clinical trial had detected the risk. Continuous, ongoing analysis of real-world data made the connection, stated the hypothesis, and tested it. THOA learned it could predict these events with high accuracy. Since it could predict, it could prevent. Her doctor received a quiet alert. As did her pharmacist. As did she. A single adjustment was made. Mrs. Ellis went on with her day, never knowing how close she came to something irreversible.
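
How would such a flag arise? One answer to the question of thresholds (three similar cases? five?) is that a system like THOA need not count to a fixed number at all; it can flag a combination the moment the observed event rate among exposed patients departs from the background rate beyond what chance allows. A minimal sketch in Python, with every count and name invented for illustration:

```python
# Hypothetical sketch: flag a drug combination whose adverse-event rate in
# real-world records significantly exceeds the rate among unexposed patients.
# All counts below are invented for illustration.
from scipy.stats import fisher_exact

def flag_combination(events_exposed, n_exposed, events_unexposed, n_unexposed,
                     alpha=0.001):
    """Flag when exposed patients experience the event significantly more
    often than unexposed patients (one-sided Fisher's exact test)."""
    table = [
        [events_exposed, n_exposed - events_exposed],
        [events_unexposed, n_unexposed - events_unexposed],
    ]
    odds_ratio, p_value = fisher_exact(table, alternative="greater")
    return p_value < alpha, odds_ratio, p_value

# 18 arrhythmia events among 400 elderly women on the combination, versus
# 95 events among 12,000 comparable women not on it.
flagged, odds, p = flag_combination(18, 400, 95, 12000)
print(f"flag={flagged}, odds ratio={odds:.1f}, p={p:.2e}")
```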

David, recovering from surgery, looked fine. The chart was clean, the vitals steady. But the machine had no such ease. It had learned to read between the lines. Microscopic shifts in oxygen saturation, delta and momentum in blood pressure variability, and inflammatory markers began to hum a discordant chord that no human could hear unaided. Sepsis was brewing, silent and swift. THOA tapped gently. Intervention came early: IV antibiotics, ozone therapy and peroxide. David lived, not because someone had a hunch, but because the hospital, at last, had a memory.
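
What might “reading between the lines” look like computationally? One plausible ingredient is trend features: the slope of oxygen saturation, the widening of blood-pressure swings, the rise of an inflammatory marker, folded into a single early-warning score. A toy sketch, with invented data and hand-picked weights standing in for what a real system would learn from outcomes:

```python
# Hypothetical sketch of trend-based early warning. Weights and the alert
# threshold are invented; a THOA-like system would learn them from outcomes.
from statistics import pstdev

def slope(values):
    """Least-squares slope of a series sampled at unit intervals."""
    n = len(values)
    mean_x, mean_y = (n - 1) / 2, sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(values))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

def risk_score(spo2, map_bp, crp):
    spo2_trend = slope(spo2)                 # negative = drifting down
    half = len(map_bp) // 2                  # widening swings = rising spread
    bp_var_delta = pstdev(map_bp[half:]) - pstdev(map_bp[:half])
    crp_trend = slope(crp)                   # inflammation accelerating
    return -4.0 * spo2_trend + 0.6 * bp_var_delta + 0.05 * crp_trend

hourly_spo2 = [97, 97, 96, 96, 95, 95, 94, 94]   # each reading looks "normal"
hourly_map  = [82, 84, 80, 85, 78, 88, 74, 90]   # variability increasing
hourly_crp  = [12, 14, 18, 25, 33, 44, 58, 75]   # rising fast
score = risk_score(hourly_spo2, hourly_map, hourly_crp)
print(f"early-warning score: {score:.1f} (alert above 5, say)")
```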

Sofia was four years old. Her life-threatening asthma attacks came like clockwork. Emergency rooms knew her name. Her parents were advised on simple changes—a filter, a follow-up, a housing referral—but nothing helped. Each visit ended with a discharge and a hope that next time would come later rather than sooner. Then THOA saw a strange correlation: her severe attacks happened one to two weeks after receiving a vaccine containing aluminum. She skipped her next aluminum-containing vaccine and was put on a mild aluminum-chelation protocol. Her asthma ended, never to return.

When her mother brings her and her younger sister in for well-child visits, she is offered aluminum-free vaccines, or the dose is withheld when no aluminum-free option exists, because the screen has informed the nurse practitioner of the system-learned, patient-specific contraindication. Her sister has no such contraindication, so the mother decides whether or not to give informed permission. It’s up to her.

Everyone in the office understands why a mother might want to err on the side of caution for the younger sister. No one calls her “anti-vax,” and she is not removed from the practice, because the contraindication changes the math on the office’s “% vaccinated” metric.

And in Atlanta, during an otherwise unremarkable winter, the system stirred. A smattering of cases, scattered across outpatient clinics: mild fevers, unproductive coughs, fatigue. No alarms. No press releases. But travel histories, clinical notes, and lab values formed a pattern THOA had learned to read. Something was coming. Clinics were warned. Hospitals ordered supplies. Staff trained. When the official alerts arrived, they were met not with panic, but with preparation. Sequence-based nucleic acid testing confirmed it: a new flu variant, more virulent than the last. They were ready with saline nebulizers carrying low-dose, food-grade peroxide and a bit of iodine. With physician approval, the AI contacted all patients over 60 and suggested that they increase their vitamin C, vitamin D, and vitamin A, take 4-5 drops of Lugol’s iodine in water salted with Celtic sea salt, and, if medically frail, avoid large holiday gatherings or limit their stay—all tactics the AI had learned do indeed help patients recover faster.
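
The surveillance piece is old statistics rather than exotic AI. One classic way to notice “a smattering of cases” drifting above baseline is a CUSUM detector over daily counts of a syndromic signature (fever, unproductive cough, fatigue). A minimal sketch, with the baseline, slack, and threshold invented for illustration:

```python
# Hypothetical sketch: CUSUM over daily syndromic case counts. A deployed
# system would estimate the baseline and tune the threshold from history.
def cusum_alerts(daily_counts, baseline_mean, slack=1.0, threshold=5.0):
    """Accumulate daily excess over (baseline + slack); alert when the
    running sum crosses the threshold, i.e., on a sustained upward shift."""
    s, alerts = 0.0, []
    for day, count in enumerate(daily_counts):
        s = max(0.0, s + (count - baseline_mean - slack))
        if s > threshold:
            alerts.append(day)
            s = 0.0  # reset after each alert
    return alerts

winter_counts = [3, 2, 4, 3, 5, 6, 7, 9, 8, 11]  # quiet, then a slow drift up
print(cusum_alerts(winter_counts, baseline_mean=3.0))  # -> [6, 8, 9]
```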

These stories may seem like fiction. They are inevitabilities waiting for the infrastructure of sense.

But the adoption of such a system is not stalled by technical limits. It is stalled by existential ones. Because once we know, truly know, which actions save lives and which decisions cause harm—once we allow the mirror to remain in the room—we are no longer innocent. Every ignored alert, every unexamined outlier, every repeated mistake becomes a choice.

That is the line THOA dares to cross. It turns errors from mysteries into preventable events. It also makes clear, like the FAA’s pilot reporting programs, who improves and who repeats. It makes visible the distance between care and its ideal. And it will not flatter those who prefer unmeasured performance to difficult truth.

Still, some will say this is the beginning of the end for medicine as we know it. That AI will take over. That doctors will become glorified functionaries in an algorithmic state. That we are building not tools, but tyrants. They are wrong.

THOA and the Private Report Card do not decide. They do not override judgment. They do not prescribe. They observe, suggest, and recall. They TEACH. At their core, they are built on supervised machine learning models trained on historical data—not to command, but to forecast with accuracy. They present probabilities, not mandates; recommendations, not rules. The physician remains the final voice, the last hand, the human will in the room. What these systems provide is not automation, but assistance—a second set of eyes that never tires, never forgets, never guesses blindly. And always, it is the physician/patient dyad who chooses.
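
To make “probabilities, not mandates” concrete, here is a minimal sketch of that supervised pattern: a model fit to historical cases emits a risk estimate, and the output is advice, nothing more. The features, data, and library choice (scikit-learn) are my illustrative assumptions, not a description of any deployed system:

```python
# Minimal sketch of the supervised design described above: train on history,
# emit a probability, stop. All features and records are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical cases: [age, number of medications, abnormal lab flag] -> harm?
X = np.array([[74, 9, 1], [55, 2, 0], [81, 7, 1], [43, 1, 0],
              [68, 6, 1], [59, 3, 0], [77, 8, 1], [50, 2, 0]])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

new_patient = np.array([[72, 8, 1]])
risk = model.predict_proba(new_patient)[0, 1]
print(f"estimated risk: {risk:.0%}")  # shown to the physician as a forecast
# The system stops here. Acting on the estimate is a human decision.
```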

The AI here is neither mythic nor mystical. It is a tool that listens, watches, and learns. It recognizes patterns invisible to the naked eye. It keeps track of time across thousands of cases. It sends a signal when the numbers whisper instead of scream. And that is all. The decisions—to act, to pause, to change course—remain in human hands. That distinction is not a technicality. It is the foundation of trust.

There is a quiet momentum now. Big medicine is imploding. On what platform will hospitals compete for patients? Profitability? That has failed. Thus, MAHA. How about safety and efficacy, measured in the most unbiased manner possible?

Imagine that hospitals that once scoffed at reflection have reconsidered. A few have begun to listen—not to the vendors, not to the hype, but to their own data. They are discovering what it means to see clearly. In one corner of the country, a pilot hospital has implemented a rudimentary THOA prototype. Within six months, its early sepsis intervention rate improved by 89%. Medication error rates fell. And no one was fired for suggesting it.

Soon, this knowledge will be public. People will learn that such systems are possible. And when they do, they will begin to ask the questions institutions have long avoided. Why isn’t my hospital using this? Why did my mother receive the drug combination that had already harmed dozens like her? Doctors will see how their peers do better, and why. The doctor with the worst outcomes can join the best. Why not? When these questions come, explanations will no longer suffice. Accountants and administrators of profit centers will have to lose their grip on for-profit medicine and serve the system with patient outcomes in mind.

Litigators will cite institutions that chose not to see. Insurers will demand outcome-based documentation and evidence of trying. Payers will pivot from billing codes to preventable harm reduction. Hospitals that do not learn will begin to hemorrhage trust, staff, and standing.

But this need not be a reckoning. It can be a rebirth.

The path forward is not punitive. It is redemptive. Imagine a young doctor, uncertain but sincere, receiving her first Private Report Card and seeing, with measured relief, that her instincts align with those of the best performers. Tomorrow, she will change one thing. Or perhaps she learns that in one domain—complex pregnancies, perhaps—she struggles. Either way, she adjusts. She gets better. And no one ever needs to know but her.

How can that system not learn?

Imagine a chief of surgery realizing that a quiet, unassuming colleague has the best complication rate in the region. Imagine elevating people based not on ego or tenure, but on evidence of improvement in health outcomes. Imagine no longer having to guess.

The question, then, is not whether we can build such systems. It is whether we can bear to.

Medicine, at its best, is a moral art wrapped in empirical discipline. It must now become a learning organism.

The mirror is here. Machine learning is not some scary AI that will take your jobs. If doctors and nurses lose their jobs, it’s because they Made America Healthy Again. And the time has come to decide who will look, and who will turn away.

Change of this scale does not arrive as a single revelation, nor as an abrupt institutional volte-face, but rather as a gradual shift in posture: a movement from fear toward curiosity, from defensiveness toward responsibility, from rote practice toward reflective practice. And the seeds of that shift are always planted first by the individuals who sense, intuitively, that medicine’s great inertia is no longer sustainable.

One such shift begins not with a sweeping administrative decree, but with a lone clinician—a hospitalist in a midsized community hospital—who has grown tired of reading mortality review reports that repeat themselves like a tired refrain. The same missed signs. The same medication oversights. The same preventable spirals downward. He has no appetite for buzzwords, no interest in grand transformations, but he is desperate for something that could help him see patterns before they harden into tragedies. When he encounters THOA—not as a glossy sales pitch, but as a modest tool that sits on top of electronic health records, learning, constantly learning, offering small, steady glimpses of risk and noteworthy hints at improvements—he decides to try it. Quietly. Without fanfare. Within months he notices that his team intervenes earlier, communicates more precisely, and anticipates subtle deteriorations with a confidence that seems almost uncanny. His colleagues ask what changed. He answers, almost embarrassed: “We began to see what we used to overlook.”

Such adoption spreads not by mandate but by contagion—not viral, but moral. A chief quality officer takes note. A risk manager runs simulations showing that even a modest reduction in preventable harm reshapes the financial and ethical trajectory of the institution. A residency director sees how PRCs could mentor young physicians with gentleness rather than reprimand. None of them use the word “revolution,” yet all recognize the quiet transformation underfoot.

And as this transformation gathers force, another truth becomes impossible to ignore: that supervised machine learning is not the spectral intelligence of dystopian imagination, but the natural extension of clinical vigilance. It is the stethoscope of the twenty-first century—not a replacement for judgment, but a refinement of perception. The tool does not command. It does not overrule. It does not determine. It reflects. And in reflecting, it reminds clinicians of something they once believed wholeheartedly but had nearly forgotten under the crush of modern practice: that insight is not a luxury. It is a duty.

Still, the question arises: why trust this tool? Why trust the mirror, when the mirror has been absent for so long? The answer lies not in the machinery of computation, but in the architecture of the system itself. THOA is bounded. Supervised. Transparent. It is designed not to widen autonomy but to narrow blind spots. Every prediction is accompanied by its rationale—laboratory markers trending in familiar directions, vital signs fluctuating in recognizable choreography, medication histories aligning with archived patterns of risk. The physician sees not just the conclusion, but the contours of the reasoning. And so trust grows—not blindly, but proportionally.
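
What “accompanied by its rationale” can mean in practice: with a linear model, every prediction decomposes exactly into per-feature contributions, which can be displayed beside the probability. A sketch with invented feature names and coefficients standing in for trained weights:

```python
# Hypothetical sketch: a prediction that carries its reasoning with it.
# Features, weights, and inputs are invented for illustration.
import math

FEATURES = ["lactate trend", "MAP variability", "risky med combination"]
WEIGHTS = [1.8, 0.9, 1.4]  # stand-ins for learned coefficients
INTERCEPT = -4.0

def predict_with_rationale(values):
    """Return the risk probability plus each feature's contribution to the
    logit, so the clinician sees the contours of the reasoning."""
    contributions = [w * v for w, v in zip(WEIGHTS, values)]
    logit = INTERCEPT + sum(contributions)
    prob = 1.0 / (1.0 + math.exp(-logit))
    rationale = sorted(zip(FEATURES, contributions), key=lambda fc: -abs(fc[1]))
    return prob, rationale

prob, rationale = predict_with_rationale([1.6, 1.2, 1.0])
print(f"risk: {prob:.0%}")
for name, contribution in rationale:
    print(f"  {name}: {contribution:+.2f} toward the alert")
```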

This transparency matters not only for clinicians but for patients. For decades, the public has been told that medicine is science, yet has been shielded from the uncomfortable truth that the system often functions more like an unmeasured art. When the public learns—inevitably—that a hospital could know its own outcomes with clarity, that it could see what leads to harm and what leads to healing, that it could detect preventable patterns long before they coalesce into calamities, pressures will shift. The public will no longer accept vague assurances or abstractions. They will seek the security of reflection. They will demand it.

With that demand, a secondary transformation occurs: the legal landscape changes. Once a single hospital adopts THOA and can show, with clean and incontrovertible data, that preventable harm can be meaningfully diminished, hospitals that decline to adopt the same tools will face a new form of scrutiny. The standard of care evolves not because a regulator decrees it, but because reality reveals it. And reality has a way of becoming precedent.

Of course, transformation is never purely systemic. It is also personal. The Private Report Card becomes, in the hands of a conscientious clinician, something like a compass. It is not a ranking, not an indictment, but a quiet reckoning—a place where a surgeon can discover that her postoperative complication rate is slightly lower than she had believed, while her postoperative pain management lags behind national peers. Where an oncologist can realize, with both pride and humility, that his survival curves reflect decades of careful craft, but his communication scores do not. These insights are not humiliations. They are invitations. And because PRCs remain private—deeply, inviolably private—they become spaces for reflection untainted by reputational calculus.
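
The arithmetic behind such an entry can be deliberately plain: the clinician’s own rate, the peer average, and the share of peers outperformed, computed metric by metric and shown only to the clinician. A toy sketch with invented rates and, for brevity, no case-mix adjustment (which a real PRC would require):

```python
# Hypothetical sketch of one Private Report Card line. All rates invented.
from statistics import mean

def prc_entry(metric, own_rate, peer_rates, lower_is_better=True):
    """Compare one clinician's rate to peers, privately: own rate, peer
    mean, and the fraction of peers outperformed."""
    outperformed = [r for r in peer_rates if (own_rate < r) == lower_is_better]
    pct = 100 * len(outperformed) / len(peer_rates)
    return (f"{metric}: you {own_rate:.1%}, peer mean {mean(peer_rates):.1%}, "
            f"better than {pct:.0f}% of peers")

complication_peers = [0.062, 0.048, 0.071, 0.055, 0.090,
                      0.044, 0.067, 0.058, 0.080, 0.051]
print(prc_entry("post-op complications", 0.046, complication_peers))
print(prc_entry("pain control success", 0.71,
                [0.78, 0.82, 0.75, 0.80, 0.77, 0.84], lower_is_better=False))
```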

As adoption progresses, the old fear that once silenced me in 2004—the fear that knowledge itself would disrupt the hierarchy—begins to unravel. Administrators realize that learning systems do not diminish authority but clarify it. Clinicians discover that transparency is not a threat but a relief. Patients, long accustomed to navigating a fog of uncertainty, perceive that the fog is thinning. And the entire system, which once recoiled from the idea of knowing too much, begins to understand that knowing more is the only path toward harming less.

Through all of this, the original moment remains. That closed-door meeting. That veiled threat. That reminder that institutions often resist not the unproven, but the undeniable. Yet what was once dismissed as dangerous begins to look, in hindsight, like the only ethical trajectory medicine can follow. Suppression, silence, rediscovery, adoption—this is the natural arc of truth in complex systems. It bends not toward convenience, nor toward tradition, but toward coherence.

And now we stand where that arc begins to rise. The mirror that once drew threats is being lifted in cautious hands. It reflects not blame, but possibility. Not judgment, but clarity. Not replacement of the human mind, but reinforcement of its finest instincts. The time has come for medicine not merely to act, but to understand its actions. To learn from itself in the same way it asks patients to learn from their bodies. With humility. With courage. With the unshakable awareness that the refusal to know, once a tolerated flaw, can no longer be defended.

The mirror is here. And this time, we do not turn away.

This system has been proposed to HHS via white papers. Copies are available; email info@ipak-edu.org with the subject line “White Papers.”

James Lyons-Weiler, PhD is a 25-year veteran of biomedical and clinical research, the CEO and Founder of The Institute for Pure and Applied Knowledge, Founder of IPAK-EDU.org, and Editor-in-Chief of the high-impact peer-reviewed journal, Science, Public Health Policy & the Law.
