A Call for Peer Re-Reviews of Articles on Covid Vaccines

by Eyal Shahar at Brownstone Institute

In the years I served as an associate editor of the American Journal of Epidemiology, I saw the entire spectrum of “peer reviews”: from meticulous, thoughtful critiques whose authors evidently invested several hours in the task, to sketchy reviews that reflected carelessness and incompetence. I read friendly reviews by admirers of the authors and hostile reviews by their enemies. (It is not difficult to tell from the tone.) In the practice of science, human beings still behave like human beings.

Matters got worse during the pandemic. Studies that praised the Covid vaccines were quickly certified “peer-reviewed,” whereas critical, post-publication peer review was suppressed. As a result, we now have a historical collection of published poor science. It cannot be erased, but it is time to start correcting the record.

Biomedical journals are not the platform. First, there is no formal section for open peer reviews of articles that were published long ago. Second, editors have no interest in exposing falsehoods that were published in their journals. Third, the censorship machine is still in place. So far, I have managed to break through it only once, and it wasn’t easy.

So, how can we try to correct the record, and where?

Let me make a suggestion to my colleagues in epidemiology, biostatistics, and related methodological fields who preserved their critical thinking during the pandemic. Choose one article or more about the Covid vaccines and submit your peer review to Brownstone Journal. If it is interesting and well-written, there is a good chance it will be posted. I advise cherry-picking: find those peer-reviewed articles that irritated you most, either because they were pure nonsense or because the correct inference was strikingly different. And if you posted short critiques on Twitter (now X) or thorough reviews on other platforms, expand, revise, and submit them to Brownstone. Perhaps we can slowly create an inventory of critical reviews, restoring some trust in the scientific method and in biomedical science.

Here is an example.

A Review and a Re-Analysis of a Study in Ontario, Canada

Published in the British Medical Journal in August 2021, the paper reported the effectiveness of the mRNA vaccines in early 2021, shortly after their authorization.

This research was typical of vaccine studies from that time. Effectiveness was estimated in a “real-world” setting, namely an observational study during a vaccination campaign. The study period (mid-December 2020 through mid-April 2021) included the peak of a Covid winter wave in early January. Later, we will discuss a strong bias that I call confounding by background infection risk.

The design was a variation of the case-control study, the test-negative design. Eligible subjects underwent a PCR test because of Covid-like symptoms. Cases tested positive; controls tested negative. As usual, odds ratios were computed, and effectiveness was computed as 1 minus the odds ratio (expressed in percent). The sample size was large: 53,270 cases and 270,763 controls.
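The arithmetic of the test-negative design can be sketched in a few lines. The cell counts below are invented for illustration; the article reports only the case and control totals (53,270 and 270,763), not the vaccination cross-tabulation.

```python
# Sketch of the test-negative calculation. All four cell counts are
# hypothetical; only the row totals come from the article.

def vaccine_effectiveness(vacc_cases, unvacc_cases, vacc_controls, unvacc_controls):
    """Effectiveness = 1 - odds ratio, expressed in percent."""
    odds_ratio = (vacc_cases / unvacc_cases) / (vacc_controls / unvacc_controls)
    return 100 * (1 - odds_ratio)

# Hypothetical vaccination split of the cases and controls:
ve = vaccine_effectiveness(vacc_cases=500, unvacc_cases=52_770,
                           vacc_controls=27_000, unvacc_controls=243_763)
print(f"Estimated effectiveness: {ve:.0f}%")  # prints: Estimated effectiveness: 91%
```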

Source: part of Figure 1 in the article

The authors reported the following key results (my italics):

Vaccine effectiveness against symptomatic infection observed ≥14 days after one dose was 60% (95% confidence interval 57% to 64%), increasing from 48% (41% to 54%) at 14–20 days after one dose to 71% (63% to 78%) at 35–41 days. Vaccine effectiveness observed ≥7 days after two doses was 91% (89% to 93%).

Like almost every study of effectiveness, the authors discarded early events. As explained elsewhere, this practice introduces a bias called immortal time, or case-counting window bias. Not only does it obscure possible early harmful effects, but it also effectively leads to overestimation of effectiveness. RFK, Jr. alluded to this bias in non-technical terms (see video clip).

The correct approach is simple. We should estimate effectiveness from the administration of the first dose to later timepoints (built-up immunity). My table below shows the study data and the results of the new analysis. Each row shows the computation of effectiveness by the indicated day.
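The effect of the correction can be illustrated with a toy calculation. The daily counts below are invented, not the study data: an early excess of cases among the vaccinated is discarded by the usual 14-day case-counting window, which inflates the estimate relative to counting from the day of the first dose.

```python
# Hypothetical daily case counts among vaccinated and unvaccinated test-takers
# over a 42-day follow-up; control counts are held fixed for simplicity.
vacc_daily_cases = [30] * 14 + [10] * 28   # elevated early, lower later (invented)
unvacc_daily_cases = [20] * 42             # flat (invented)
vacc_controls, unvacc_controls = 5_000, 5_000  # invented, fixed

def ve(vacc_case_total, unvacc_case_total):
    """Effectiveness = 1 - odds ratio, in percent."""
    odds_ratio = (vacc_case_total / unvacc_case_total) / (vacc_controls / unvacc_controls)
    return 100 * (1 - odds_ratio)

# Usual approach: discard events before day 14 (case-counting window)
ve_window = ve(sum(vacc_daily_cases[14:]), sum(unvacc_daily_cases[14:]))
# Corrected: count from the day of the first dose
ve_full = ve(sum(vacc_daily_cases), sum(unvacc_daily_cases))

print(f"Discarding days 0-13: {ve_window:.0f}%")  # prints: Discarding days 0-13: 50%
print(f"Counting from day 0:  {ve_full:.0f}%")    # prints: Counting from day 0:  17%
```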

Effectiveness was negative by the end of the first two weeks after the first dose and reached only about 30% before the second dose, not 70%. It reached only about 50% by the time of full immunity, not 90%. Although my estimates are unadjusted, eTable 2 (supplementary material) indicates that adjustment hardly changed the authors’ estimates.

My results are still biased, however, by what I called earlier “confounding by background infection risk.”

The figure below was taken from the website of Public Health Ontario. The black line shows the 7-day rolling average of new cases. I added red lines that show the study period, divided into two intervals. I also added estimates of the number of vaccinated people in each interval.

The first interval, which contained the peak of the winter wave, coincided with the slow start of the vaccination campaign. At that time, the distribution of vaccination status was skewed toward non-vaccination, which means that unvaccinated status happened to coincide with a high probability of getting infected. In contrast, the background infection rate was lower during much of the second interval, when several million people received the first dose. Only in mid-March did the number of new cases cross back above the dashed line. In short, the inverse association between vaccination and infection was heavily confounded by time trends in the risk of infection. Even a placebo injection would have appeared effective.
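A minimal simulation, with invented numbers, shows how this confounding manufactures effectiveness for a placebo: coverage rises just as background risk falls, and the pooled odds ratio drops well below 1 even though infection risk is identical in both arms.

```python
# Placebo with zero true effect: infection risk is the same for vaccinated
# and unvaccinated within each interval. All numbers are invented.

def pooled_ve(intervals):
    """Pool cases and controls over intervals, then compute 1 - OR (percent)."""
    vc = uc = vn = un = 0.0  # vacc cases, unvacc cases, vacc controls, unvacc controls
    for n_vacc, n_unvacc, infection_risk in intervals:
        vc += n_vacc * infection_risk
        uc += n_unvacc * infection_risk
        vn += n_vacc * (1 - infection_risk)
        un += n_unvacc * (1 - infection_risk)
    return 100 * (1 - (vc / uc) / (vn / un))

intervals = [
    (10_000, 990_000, 0.10),   # winter peak: few vaccinated, high risk
    (700_000, 300_000, 0.02),  # spring: most vaccinated, low risk
]
print(f"Apparent effectiveness of a placebo: {pooled_ve(intervals):.0f}%")
# prints: Apparent effectiveness of a placebo: 76%
```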

I cannot remove the bias, and it is strong. The true effectiveness, if any, is much smaller than the estimates I computed after the removal of immortal time bias. Whether it is 10% by six weeks or 20% makes no difference. That’s not a vaccine.

The authors also examined a second case group: hospitalization or death. These data are subject not only to the previous biases but also to the healthy vaccinee bias. I cannot offer a correction, however: most of the data for cases was suppressed due to small numbers, and the control group was wrong. The authors used “the same control group as for the first primary outcome analysis (ie, individuals with symptoms who tested negative for SARS-CoV-2).” That violates a basic principle of the test-negative design: controls should have been hospitalized or deceased people who tested negative.

The following sentence reflects a misunderstanding of the regression model they fit. They write, “We used multivariable logistic regression models to estimate the odds ratio, comparing the odds of vaccination (my italics) between test positive cases and test negative controls (with unvaccinated people as reference group).” The dependent variable was case-control status (log odds of being a case). Technically, the odds of being a case (versus being a control) are compared, not the odds of vaccination.
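The likely source of the confusion is that the crude 2x2 odds ratio is numerically symmetric, so either description yields the same number; the fitted model, however, has case-control status as its dependent variable, and exp(coefficient) is the odds ratio of being a case. A toy calculation with invented counts:

```python
# The crude odds ratio from a 2x2 table is the same whichever way it is
# computed; the distinction is about what the logistic model estimates.
# All counts below are invented for illustration.

vacc_cases, unvacc_cases = 400, 1_600
vacc_controls, unvacc_controls = 2_000, 2_000

# Odds ratio of being a case, vaccinated vs unvaccinated
# (what exp(coefficient) from the logistic model estimates):
or_case = (vacc_cases / vacc_controls) / (unvacc_cases / unvacc_controls)

# Odds ratio of vaccination, cases vs controls (the authors' wording):
or_vacc = (vacc_cases / unvacc_cases) / (vacc_controls / unvacc_controls)

print(or_case, or_vacc)  # prints: 0.25 0.25
```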

Strangely, the posted supplemental material still carries the heading “CONFIDENTIAL — NOT FOR DISTRIBUTION, 5 AUG 2021.” Only in the Covid era can you find such sloppiness. We observe here biased (careless) handling of a paper that served the narrative.

I will end my review with a favorite topic of mine: nonsensical results.

The figure below displays estimates of effectiveness, as computed by the authors. The arrow points to a result that makes no sense. We do not expect any incremental benefit of the second dose within 6 days of the injection, yet effectiveness increased, almost reaching the estimate for the subsequent interval (7+ days). If the estimate for 0–6 days is clearly biased, why would we trust the next one?

Source: Figure 2 in the article

Epilogue

As I wrote at the beginning, we should try to correct the historical record. It’s a long road ahead, but as the proverb says, “A journey of a thousand miles begins with a single step.” I am particularly appealing to top-ranked methodologists who used to tear apart poor studies and criticize shaky methods. Most of them kept silent throughout the pandemic, probably fearing the consequences of challenging the “safe and effective” narrative.

Let’s start reading fearless reviews of studies that reported the remarkable effectiveness of Covid vaccines, which proved to be false. There is no shortage of problems to detect, highlight, and, where possible, correct with actual data or simulations.

If we don’t do this work, we will continue to read false, effectiveness-based estimates of lives saved. Was it close to 2.5 million, as some have claimed, or undetectable in mortality statistics, or possibly about zero? And will we ever get some answers from relevant trials?

Republished from Medium
