THE NUMBER THEY NEVER PUT IN THE HEADLINE

May 1 | Posted by mrossol | CDC NIH, Critical Thinking, Math/Statistics, Medicine

How to read any health study — and why the most important finding is almost never the one being reported

Source: THE NUMBER THEY NEVER PUT IN THE HEADLINE

JAMES LYONS-WEILER, PHD
MAY 1, 2026
 
We’ve seen it a thousand times. When pharma decides it wants to open a new market, it makes mountains out of molehills on efficacy. When that market is threatened by measurable, detectable risk, they start digging to bury it. Popular Rationalism has definitive, cited, in-depth articles on this widespread unethical practice, along with calls to action on what we can all do to help end the abuse of science in the name of profit.


In 2022, a large clinical trial found that a widely used statin reduced heart attacks by 36 percent in the study population. That number led the press releases. It appeared in headlines across every major news outlet. It was cited in physician discussions for the next two years.
The number that wasn’t in the headline: in absolute terms, 1.8 percent of the placebo group had a heart attack during the trial, compared to 1.1 percent of the treatment group. The 36 percent is a relative risk reduction. The actual reduction in your personal probability of a heart attack, if you are similar to the people in that trial, was 0.7 percentage points.
That is not a scandal. Relative risk reduction is a legitimate statistical measure. But it is a different measure than absolute risk reduction, and the two numbers tell very different stories about how much a drug is likely to help you. One of them appeared in the news. The other appeared in Table 2 of the supplementary materials.
This is the number they never put in the headline. And it is not an accident.

Why this happens — and why it keeps happening
Clinical trials are expensive. They are funded, in the majority of cases, by the companies whose products they are testing. The researchers who design trials are not fraudsters — most are doing careful, legitimate science. But the choices made during design, analysis, and reporting are not neutral. They are made by humans with careers, funding relationships, and institutional pressures. Those pressures do not produce falsified data. They produce choices.
The choice between reporting relative and absolute risk reduction is one such choice. Relative risk reduction almost always produces a larger, more impressive number. It is not wrong to report it. It is, however, common to report only it — which means the reader gets one frame, not the full picture.
The same dynamic appears across the most common features of health research: which outcome was chosen as the primary endpoint, how the control group was assembled, how long the follow-up ran, which subgroups were analyzed and which were not, which adverse events made the published table. None of these choices are invisible. They are all documented in the methods section. They are almost never in the abstract that gets summarized in the news.
The abstract is the press release. The methods section is the trial. Those are different documents, and they answer different questions.

The four questions that change everything

You do not need a PhD to read a clinical trial critically. You need four questions. These are not original to this publication — they are the standard tools of evidence-based medicine, published in peer-reviewed methodology literature. But they are almost never explained to the people who most need them: the patients, families, practitioners, and citizens who will live inside the decisions these trials produce.
Question 1: What was the absolute risk reduction?
Take the event rate in the control group and subtract the event rate in the treatment group. That number — not the relative reduction — tells you how much your actual risk changed. In many trials of widely used drugs, this number is under 1 percent. Knowing it does not mean the drug is useless. It means you can make an informed decision about whether the benefit is worth the cost, the side effects, and the inconvenience.
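The arithmetic behind Question 1 fits in a few lines. Here is a minimal sketch using the rounded statin rates quoted above (1.8 percent vs. 1.1 percent). Note that the relative reduction computed from these rounded rates comes out near 39 percent, slightly above the published 36 percent, which was presumably derived from unrounded trial counts. The number-needed-to-treat line is a standard derived measure added for illustration, not a figure from the trial:

```python
# Absolute vs. relative risk reduction, from the rounded rates quoted above.
control_rate = 0.018    # 1.8% of the placebo group had a heart attack
treatment_rate = 0.011  # 1.1% of the treatment group did

arr = control_rate - treatment_rate  # absolute risk reduction (percentage points)
rrr = arr / control_rate             # relative risk reduction (the headline number)
nnt = 1 / arr                        # number needed to treat to prevent one event

print(f"ARR: {arr:.1%} points")   # the number that rarely makes the headline
print(f"RRR: {rrr:.1%}")          # the number that always does
print(f"NNT: about {nnt:.0f} people treated per heart attack avoided")
```

Same trial, same data: a 39 percent relative reduction and a 0.7 point absolute one, with roughly 143 people treated for every heart attack avoided. Which number you lead with is a framing choice.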
Question 2: How was the control group constructed?
The ideal control group is a random sample of people identical to the treatment group in every way except the intervention being tested. In practice, many trials use active controls — comparing a new drug to an existing drug rather than to placebo — or use surrogate outcomes, measuring a biomarker rather than the clinical event that actually matters, or follow participants for periods too short to capture long-term effects. None of these are automatically disqualifying. But they change what the trial can and cannot tell you.
Question 3: Who funded it, and who analyzed it?
Industry-funded trials are more likely to report positive outcomes than independently funded trials. This is documented in the peer-reviewed literature — it is not a claim about fraud, it is a finding about the cumulative effect of design choices, publication decisions, and analysis framing. Knowing the funding source does not tell you the trial is wrong. It tells you to look more carefully at the design choices listed above.
Question 4: What happened to the people who left?
Dropout rates in clinical trials are often substantial. In trials of psychoactive medications, behavioral interventions, and dietary programs, many participants stop completing the protocol before the trial ends. How those dropouts are handled in the final analysis changes the result. If people who dropped out because of side effects are excluded from the adverse event calculation, the drug looks safer than it was. The methods section says how dropouts were handled. The abstract almost never does.
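The dropout effect is easy to see with made-up numbers. Everything below is hypothetical, invented purely to illustrate how the choice of denominator changes an adverse-event rate:

```python
# Hypothetical trial numbers -- not from any real study.
enrolled = 200
dropped_for_side_effects = 30   # left the trial because of adverse events
events_among_completers = 10    # adverse events recorded among those who finished

# Completers-only analysis: dropouts vanish from numerator and denominator.
completers_rate = events_among_completers / (enrolled - dropped_for_side_effects)

# Intention-to-treat-style accounting: everyone enrolled stays in the analysis,
# and dropping out for side effects counts as an adverse event.
itt_rate = (events_among_completers + dropped_for_side_effects) / enrolled

print(f"Completers only: {completers_rate:.1%} adverse events")
print(f"All enrolled:    {itt_rate:.1%} adverse events")
```

The same underlying experience produces roughly a 6 percent rate under one accounting and 20 percent under the other. The methods section tells you which one you are looking at.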

What this looks like in practice
In 2019, a widely cited meta-analysis of antidepressant efficacy found that every major antidepressant worked better than placebo. The finding was reported as vindicating the drugs after years of controversy. What the coverage rarely mentioned: the average effect size across the 522 trials analyzed was 0.3 on a standardized scale — a difference that meets the threshold for statistical significance but falls below what many clinical researchers consider the threshold for clinical significance. The people in the treatment groups got somewhat better. Whether they got meaningfully better, in ways that affected their daily functioning, was a more complicated question than the headlines conveyed.
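One way to make a 0.3 standardized effect concrete: under the usual normality and equal-variance assumptions, a standardized mean difference d translates into the probability that a randomly chosen treated patient does better than a randomly chosen placebo patient, Φ(d/√2). This translation is standard in the methodology literature, but the sketch below is an illustration added here, not a calculation from the meta-analysis:

```python
from math import erf, sqrt

d = 0.3  # standardized mean difference reported by the meta-analysis

def normal_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

# Probability that a random treated patient outscores a random placebo patient,
# assuming normally distributed outcomes with equal variance in both groups.
prob_superiority = normal_cdf(d / sqrt(2))

print(f"P(treated patient does better than placebo patient): {prob_superiority:.1%}")
# A null effect (d = 0) would give exactly 50%.
```

An average effect of d = 0.3 means roughly a 58 percent chance that a treated patient fares better than an untreated one, against the 50 percent coin flip of no effect at all. Real, but modest — which is exactly the gap between "statistically significant" and "meaningfully better."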
This is not a claim that antidepressants don’t work. They work for some people, in some clinical contexts, at some doses. It is a claim that “antidepressants work” and “antidepressants produce a statistically significant average effect across 522 heterogeneous trials” are different statements. The second statement, which is what the data showed, produces a more complicated picture for the individual patient trying to decide whether to start a medication.
That more complicated picture is what evidence-based medicine is supposed to deliver to patients. It is what physicians are supposed to translate. It is, in practice, what gets lost between the supplementary table and the headline.

Why this matters more now than ever
In the last five years, three things happened simultaneously. Public trust in health institutions declined sharply. Health misinformation proliferated on every platform. And the volume of published research increased faster than the tools available to evaluate it.
The standard response from mainstream science communication has been: trust the experts. Follow the consensus. Don’t read primary literature yourself — you’ll get it wrong.
That response has a problem. The consensus, at any given moment, is a summary of what most researchers currently believe based on available evidence. It is a useful signal. It is not infallible. Relative risk was reported as absolute risk in coverage of drugs that were later restricted or withdrawn. Surrogate endpoints were treated as clinical outcomes in trials whose findings did not replicate. Short follow-up windows missed effects that emerged at two or three years. These are not edge cases — they are documented patterns in the published methodology literature.
The alternative to “trust the experts” is not “trust no one.” It is: develop the minimum framework required to evaluate what the experts are citing. The four questions above are that framework. They take less than ten minutes to apply to any published abstract. They will not make you a clinical trialist. They will make you a better reader of the coverage that shapes your decisions.

What this publication exists to do
The four-question framework is a tool for individual articles. The pattern it reveals is systemic. It shows up in how drug approvals are structured, how dietary guidelines are produced, how environmental safety thresholds are set, how vaccine schedules are evaluated, how chronic disease mechanisms are understood — or not understood.
In each of these areas, there is a primary literature — the actual trials, the actual cohort studies, the actual mechanistic research — and there is a secondary literature of reviews, guidelines, and consensus documents that summarizes it. The secondary literature is what reaches physicians and patients. It is where the framing choices accumulate.
Popular Rationalism exists to read the primary literature and report what it says, with specific citations, in plain language. Not to replace clinical judgment. To provide the layer of analysis between the primary data and the summary that gets handed to you in a waiting room.
Several analyses published here turned out to describe mechanisms and findings that the mainstream literature confirmed years later. Those cases are documented with dates in the archive. That is not a boast — it is an invitation to check. The correct response to any source of health information, including this one, is to verify the claims. The citations are here.
Science doesn’t move forward by trusting the summary. It moves forward by checking the source.

How to use this publication
Most content here is free. The archive covers hundreds of analyses across nutrition, environmental toxicology, pharmaceutical safety, pediatric neurodevelopment, evolutionary medicine, and federal health policy — going back to 2020.
If you are a physician or researcher: the analyses are written with the expectation that you will push back. The citations are there. Disagreement is welcome. We call it rational discourse. We need more of that.
If you are a patient, parent, or curious adult who has never read a methods section: start with the four questions. Apply them to the next health story you read. See what changes.
If you came here skeptical: good. Comment with opinions, impressions, counterfactuals, and outright corrections. This is not about Popular Rationalism being right. It’s about all of us understanding, as well as we can, what we are doing to each other via medicine and public health — and what we could be doing that is far more humane.
Subscribe now. A paid subscription gives access to all material. See something you like? Cross-post or share it across social media. See something you don’t like? Have a go in the comments.
These are the correct first moves. 
— James Lyons-Weiler, PhD
Founder, Institute for Pure and Applied Knowledge. Editor-in-Chief, Science, Public Health Policy, and the Law. Unofficial federal health policy advisor.
