Did the CDC improperly block a study showing the COVID vaccines were effective?
May 7 | Posted by mrossol | CDC, NIH, Math/Statistics, Medicine, Vaccine | STEVE KIRSCH MAY 7, 2026 | Steve Kirsch’s Newsletter |

Executive summary
CDC acting director Jay Bhattacharya delayed and then blocked a COVID-19 vaccine effectiveness paper from appearing in the agency’s flagship scientific journal, Morbidity and Mortality Weekly Report (MMWR).
The NY Times had a field day with the story (here and here).
After studying the paper and reading numerous articles written by people on both sides of the issue (such as the MD Reports analysis and Jeremy Faust’s Substack), my opinion is that Jay was justified in his decision for the reasons he outlined to the Washington Post: the CDC needs to set a high bar for the quality of the research it publishes.
Test-negative design (TND) studies, like all observational methods, can have serious flaws. For example, there have been over 100 TND studies of the flu vaccine showing they work remarkably well. But they are all misleading. The more dispositive Simonsen papers (death and hospitalization) and the more recent Anderson discontinuity study (death and hospitalization) show the flu vaccines don’t reduce mortality or hospitalization.
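For readers unfamiliar with the mechanics, a TND estimates vaccine effectiveness (VE) from the odds ratio of vaccination among test-positive cases versus test-negative controls. Here is a minimal sketch; all four counts are invented for illustration (chosen so the result lands near the paper's headline 55%) and are not taken from the CDC study:

```python
# Sketch of how a test-negative design (TND) computes VE.
# All counts are hypothetical, for illustration only.

def tnd_ve(vax_pos, unvax_pos, vax_neg, unvax_neg):
    """VE = 1 - OR, where OR compares the odds of vaccination
    among test-positive cases vs. test-negative controls."""
    odds_ratio = (vax_pos / unvax_pos) / (vax_neg / unvax_neg)
    return 1 - odds_ratio

# Hypothetical 2x2 table: 60 vaccinated / 300 unvaccinated test-positives,
# 400 vaccinated / 900 unvaccinated test-negatives.
ve = tnd_ve(60, 300, 400, 900)
print(f"VE = {ve:.1%}")  # -> VE = 55.0%
```

Note that every bias discussed below enters through those four cell counts: anything that shifts who gets tested, who gets vaccinated, or who shows up at the hospital distorts the odds ratio without the vaccine doing anything.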
Was this CDC paper the exception that got the right conclusion? I don’t think so.
Here are 25 unanswered questions about this study, e.g., why was there no negative control?
25 questions for the authors of the CDC paper
Methodological transparency
1. Your reference group of “unvaccinated for the 2025-2026 dose” includes people with prior COVID vaccinations (median ~1,200 days since last dose) and likely prior infection. How should readers interpret a 50-55% VE estimate as a measure of vaccine benefit when both arms have substantial pre-existing immunity?
2. The median interval since vaccination was 47 days — within the peak antibody titer window. Did you analyze how VE changes at 90, 120, or 180 days post-dose? If not, do you agree the headline VE estimate represents an upper bound on durable protection rather than a typical seasonal estimate?
3. Why didn’t you include negative-outcome controls (e.g., VE against trauma admission, non-respiratory hospitalization) to quantify healthy vaccinee bias? Was this considered and rejected, or not considered?
4. Healthy vaccinee bias has been documented to inflate apparent influenza VE by 30-50% in cohort studies (Jackson 2006, Simonsen 2007). Why doesn’t your limitations section name this specific phenomenon or cite this literature?
5. Your adjustment model includes age, sex, race/ethnicity, calendar time, and geographic region, but not frailty, comorbidity count, prior healthcare utilization, or functional status. Hospitalized patients had a median of 4 underlying conditions versus 0 for ED patients — why no comorbidity adjustment?
Power and pooling
6. Your hospitalization VE estimate rests on 60 vaccinated cases. What’s the smallest unmeasured confounder shift that would account for the entire 55% effect? Have you done sensitivity analysis for unmeasured confounding (e.g., E-values)?
7. The VISION Network has multiple years of accumulated data. Why didn’t you pool across seasons to power a death endpoint analysis? Was this considered?
8. Link-Gelles 2025 (JAMA Network Open) and Ma 2026 (JAMA Network Open) used multi-season pooling to estimate VE against critical illness. Why didn’t you reference those analyses or apply that approach here?
9. What is the all-cause 30-day mortality rate among the 1,022 hospitalized cases in your study? Why isn’t this reported, given that registry linkage to mortality data is feasible?
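The E-value mentioned in question 6 is a standard sensitivity metric (VanderWeele and Ding): the minimum strength of association, on the risk-ratio scale, that an unmeasured confounder would need with both vaccination and outcome to fully explain away an observed effect. A sketch applied to the paper's headline 55% VE, treating the odds ratio as an approximate risk ratio (an assumption):

```python
# E-value sketch for question 6. The formula is standard; treating
# the TND odds ratio as a risk ratio is an approximation.
import math

def e_value(rr):
    """E-value for an observed risk ratio (protective RRs are inverted)."""
    if rr < 1:
        rr = 1 / rr
    return rr + math.sqrt(rr * (rr - 1))

ve = 0.55      # reported VE against hospitalization
rr = 1 - ve    # corresponding risk ratio, 0.45
print(f"E-value = {e_value(rr):.2f}")  # -> E-value = 3.87
```

An E-value near 3.9 means a confounder associated with both vaccination and hospitalization at a risk ratio of about 3.9 could, on its own, account for the entire reported effect; frailty-type confounders of that magnitude are not implausible in elderly hospitalized populations.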
What the paper measures versus what gets claimed
10. Your paper measures VE against medically attended, laboratory-confirmed COVID-19. ACIP and CDC will likely cite this paper to support broad recommendations defended on mortality and severe-disease grounds. Are you concerned about that translation gap? How should the paper be cited, and how should it not be cited?
11. Your conclusion uses the phrase “additional protection.” Did you consider more neutral framings like “reduced odds of test-positive presentation”? Why was “additional protection” chosen?
12. Your study cannot distinguish between (a) the vaccine causing biological protection, (b) vaccinated people being healthier and less likely to seek emergency care for COVID-like illness, or (c) some combination. Do you agree this is the case? If so, why isn’t this stated more prominently?
Comparison with prior CDC analyses
13. What were the equivalent VE estimates from VISION for the 2024-2025 booster, the 2023-2024 booster, and the bivalent booster? Has VE been consistent, declining, or improving across seasons? Why aren’t these comparisons included?
14. During the original Omicron BA.1 wave, when COVID hospitalizations were 10-20× higher than current rates, you would have had statistical power for death endpoints. Did CDC publish TND VE estimates against death during that period? If yes, what did they show? If no, why not?
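The power point in question 14 can be made concrete with Woolf's approximation for the confidence interval of an odds ratio, SE(log OR) = sqrt(1/a + 1/b + 1/c + 1/d). The counts below are hypothetical; the comparison simply shows how the same true effect yields an uninformative interval at low death counts and a usable one at BA.1-scale counts:

```python
# Why death endpoints need high case counts (question 14).
# Woolf's approximation; all counts are hypothetical.
import math

def ve_ci(vax_deaths, unvax_deaths, vax_ctrl, unvax_ctrl):
    or_ = (vax_deaths / unvax_deaths) / (vax_ctrl / unvax_ctrl)
    se = math.sqrt(1/vax_deaths + 1/unvax_deaths + 1/vax_ctrl + 1/unvax_ctrl)
    lo_or = math.exp(math.log(or_) - 1.96 * se)
    hi_or = math.exp(math.log(or_) + 1.96 * se)
    return 1 - hi_or, 1 - lo_or  # (VE lower bound, VE upper bound)

# Same true OR (0.45), same controls, deaths scaled 10x:
print(ve_ci(9, 20, 400, 400))    # few deaths -> CI crosses zero VE
print(ve_ci(90, 200, 400, 400))  # BA.1-scale counts -> usefully tight CI
```

With 29 total deaths, the interval includes zero effectiveness; with 290, it is tight enough to be informative. That is the arithmetic behind asking why the death endpoint wasn't powered by pooling or by analyzing the high-incidence BA.1 period.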
The Bhattacharya objection specifically
15. Reports indicate Dr. Bhattacharya raised methodological concerns about this study before publication. What specifically were those concerns? Were any of them addressed in revisions? Were any methodologically valid concerns declined?
16. Did the author team consider any of the following analyses, and if so why were they not included: (a) negative-outcome controls, (b) brand-differential VE comparison, (c) registry-linked all-cause mortality follow-up, (d) waning analysis stratified by time since dose, (e) sensitivity analysis for unmeasured confounding?
17. The Health Department official quoted in the NYT said the data was already collected and the analysis already done, so changes weren’t possible. Could the limitations section have been expanded without re-analysis? Could additional sensitivity analyses have been added without changing the primary analysis?
On TND as a method
18. The 2023 Wiley paper “The test-negative design: Opportunities, limitations and biases” argued TND “cannot be used for studying the mortality effects of vaccines and is problematic for studies into the effect on hospitalization.” Do you agree or disagree? Why isn’t this critique cited or addressed?
19. Simonsen et al. (2005, 2007) demonstrated that pre-TND cohort studies of influenza VE produced apparent ~50% all-cause mortality reductions that were inconsistent with population-level mortality data. What evidence convinces you that current TND estimates aren’t subject to similar inflation, especially for severe-outcome endpoints?
20. If TND were systematically biased upward by 20-30 percentage points due to residual healthy vaccinee bias, how would CDC’s surveillance system detect this? What would falsify the current TND-based VE estimates?
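One concrete falsification probe, tying together questions 3 and 20: estimate "VE" against an outcome the vaccine cannot plausibly affect (e.g., trauma admission). Any nonzero result there measures healthy vaccinee bias, and a rough heuristic correction divides it out of the COVID odds ratio. Both odds ratios below are invented for illustration; this ratio-of-ORs calibration is a simplification of formal negative-control methods:

```python
# Negative-control falsification sketch. Both odds ratios are
# hypothetical; dividing out the negative-control OR is a rough
# heuristic, not a formal bias correction.

def bias_corrected_ve(covid_or, negative_control_or):
    """Deflate the COVID OR by the bias measured on an outcome
    the vaccine cannot affect."""
    return 1 - covid_or / negative_control_or

covid_or = 0.45   # apparent 55% VE against COVID hospitalization
trauma_or = 0.70  # apparent 30% "VE" against trauma admission (pure bias)
print(f"Bias-corrected VE: {bias_corrected_ve(covid_or, trauma_or):.1%}")
```

If the negative-control OR were 1.0, the headline estimate would stand; the further it falls below 1.0, the more of the headline effect is bias. The study's omission of any such control is why the question is unanswerable from the published paper.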
Funding and conflicts
21. Several authors disclose institutional support from Pfizer, Moderna, Sanofi, GSK, and Novavax. The 2025-2026 vaccines studied are Pfizer, Moderna, and Novavax products. How do you respond to readers who view this as a conflict that warrants additional methodological transparency, not less?
22. This study was funded by CDC contracts to Westat (75D30121D12779) and Kaiser Foundation Hospitals (75D30123C17595). Does CDC retain editorial control over the paper’s framing and conclusions? Were there pre-publication disagreements between the author team and CDC about how to present the findings?
What would actually answer the policy question
23. What study design would you consider the gold standard for measuring whether 2025-2026 COVID vaccines reduce all-cause mortality in elderly adults? Why isn’t that study being conducted, and who would need to fund it?
24. The Czech national registry, UK ONS data, and several Nordic registries have linked vaccination and all-cause mortality data at population scale, with millions of person-years of follow-up. Has CDC analyzed any of these datasets, collaborated with their custodians, or attempted similar registry linkage in the US? If not, why not?
25. If a future analysis using population-registry data found no all-cause mortality benefit of the 2025-2026 booster in elderly adults, would that contradict your VE estimate? How would you reconcile the two findings?
AI analysis of the paper
For a much deeper dive into the paper, including the myriad problems with TND, see this Claude analysis.
Summary
Bhattacharya made the right call here. Had the paper pooled data across seasons and included multiple negative controls (it had none), it would have been more convincing.
If the hundreds of TND studies of influenza vaccines had matched the truth revealed in the Anderson paper, we’d have higher confidence that the method is reliable. MedPage Today, the NY Times, and the Washington Post are all clueless as to how bad the methodology is.
Bottom line: Bhattacharya was correct in demanding high standards for CDC published studies. Good for him in doing the right thing. We need more people like Jay in public service.