In scientific articles, the key point is to communicate the numbers well


Bad scientists
The New England Journal of Medicine's study on respiratory vaccines is comprehensive, and for that very reason it deserves careful communication. A positive result, expressed along with its margin of uncertainty, is stronger than the rhetoric of those who claim vaccines are infallible, and more solid than the absolute convictions of those who reject them outright.
The recently published New England Journal of Medicine article on respiratory vaccines is a wide-ranging, data-rich work that helps update the overall picture of the efficacy and safety of vaccines against COVID-19, influenza, and respiratory syncytial virus (RSV). For this very reason, however, it deserves careful communication.
Although the authors themselves state clear limitations—only 12 percent of the included studies were controlled trials, more than half had at least a moderate risk of bias, and the GRADE method was not applied to assess the overall quality of the evidence—the efficacy results are robust. Updated mRNA vaccines reduce the risk of hospitalization from COVID-19 by about half in adults and by 56 percent in the elderly; in immunocompromised individuals, protection drops to 37 percent. Influenza vaccines reduce hospitalizations by 60–70 percent in children and by 40–50 percent in adults, and new vaccines or antibodies against RSV reduce hospitalizations in newborns by up to 80 percent, although the data currently cover only one viral season.
The most delicate point is the interpretation of the safety results, for example in pregnancy. In the commentary that followed publication, the phrase "no association" was often read as "association ruled out." In statistical language, however, this is incorrect. Saying "no association" does not mean that the risk has been ruled out, but that the measurements taken do not allow reliable conclusions about the size of the effect under assessment, at the chosen level of statistical significance. In other words, the available data allow us neither to state whether an effect exists nor to define its size precisely: it is a technical formula indicating that, with the available evidence and the desired level of reliability, the effect cannot be pinned down. To fully understand this point, it is necessary to clarify what the p-value and the so-called "confidence interval" represent. The latter term, however traditional, can be misleading, because it seems to allude to a margin of personal confidence. In reality, in the case that interests us here, that of evaluating the safety of a treatment, what this interval shows is the set of relative-risk values compatible with the collected data. For this reason, today it is more correct to speak of a compatibility interval: it does not measure the researcher's confidence, but the consistency between risk hypotheses and observations.
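To see concretely what "no association" can hide, here is a minimal simulation in Python. Every number in it (the group sizes, the baseline risk, the true relative risk of 1.20) is an assumption chosen for illustration, not a figure from the NEJM study: even when a real 20 percent increase in risk exists, a study of modest size will very often produce a compatibility interval that crosses 1, and will therefore report "no association".

    import numpy as np

    rng = np.random.default_rng(42)

    def rr_interval(a, n1, c, n0, z=1.96):
        """95% compatibility interval for the relative risk from event counts."""
        rr = (a / n1) / (c / n0)
        se = np.sqrt(1/a - 1/n1 + 1/c - 1/n0)  # standard error of log(RR)
        return np.exp(np.log(rr) - z * se), np.exp(np.log(rr) + z * se)

    n_per_arm = 2000      # assumed size of each group
    base_risk = 0.02      # assumed baseline risk of the outcome
    true_rr = 1.20        # the real effect we pretend exists

    trials, crossing = 5000, 0
    for _ in range(trials):
        a = rng.binomial(n_per_arm, base_risk * true_rr)  # events among the exposed
        c = rng.binomial(n_per_arm, base_risk)            # events among the unexposed
        lo, hi = rr_interval(a, n_per_arm, c, n_per_arm)
        crossing += lo < 1 < hi

    print(f"Intervals crossing 1 despite a true RR of 1.20: {crossing / trials:.0%}")

Under these assumptions, most of the simulated studies fail to detect the real effect: the absence of a signal is not a signal of absence.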
When risk, as in clinical practice, is expressed by a ratio, such as a relative risk (RR), odds ratio (OR), or hazard ratio (HR), interpreting the compatibility interval follows a precise rule. If the interval lies entirely below 1, the data are compatible only with a reduced risk; if it lies entirely above 1, only with an increased risk; if it crosses 1, the data are compatible both with the absence of any difference between the treated and the untreated and with a real variation in risk, in either direction, that has not been demonstrated at the chosen level of confidence. The p-value, in turn, measures how likely it would be to obtain effects equal to or more extreme than those observed by pure chance if, in reality, there were no difference between the treated and untreated groups. If it is below the conventional threshold of 0.05, the data are poorly compatible with the no-effect hypothesis; if it is above, then, at that confidence level, we cannot rule out that the difference is due to chance. But this is not to say that the effect does not exist: it simply indicates that the evidence does not, for now, allow us to estimate its magnitude or direction precisely.
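The rule just stated, together with the way an approximate p-value can be recovered from a published interval, fits in a few lines of Python. This is a sketch, not the method used by the study: it relies on the standard log-scale normal approximation, and the function names are ours.

    from math import log, sqrt, erf

    def classify(lo, hi):
        """Interpret a 95% compatibility interval for a ratio (RR, OR, HR)."""
        if hi < 1:
            return "compatible only with a reduced risk"
        if lo > 1:
            return "compatible only with an increased risk"
        return "compatible with no difference and with a change in either direction"

    def p_from_interval(point, lo, hi):
        """Approximate two-sided p-value against the no-effect hypothesis (ratio = 1)."""
        se = (log(hi) - log(lo)) / (2 * 1.96)          # standard error of the log-ratio
        z = abs(log(point)) / se                       # distance from 1, in standard errors
        return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))  # upper normal tail, doubled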
Let's apply these considerations to the NEJM work, for example to maternal vaccination against RSV with the RSVpreF vaccine. For preterm birth, the article reports three main estimates: a relative risk of 1.01 with a range of 0.89–1.15; another of 1.20 with a range of 0.98–1.46; and an odds ratio of 1.03 with a range of 0.55–1.93. All cross the value of 1, and therefore the p-values are greater than 0.05. With the confidence threshold adopted, the data do not allow us to affirm an increased risk, but neither do they exclude one entirely. In the case of 1.20 (0.98–1.46), for example, the results are compatible both with no difference and with an increase in risk of up to 46 percent. "No association," or even "non-significant p," therefore does not imply that the risk is zero, but that the available data do not allow, at the level of statistical precision adopted, reliable conclusions about the magnitude of the effect under investigation.
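Reusing classify and p_from_interval from the sketch above on the three estimates quoted from the article makes the point tangible: every interval crosses 1 and every recovered p-value lies above 0.05, with the 1.20 estimate at roughly p ≈ 0.07, close to the conventional threshold but not below it.

    for point, lo, hi in [(1.01, 0.89, 1.15), (1.20, 0.98, 1.46), (1.03, 0.55, 1.93)]:
        print(f"{point} ({lo}-{hi}): {classify(lo, hi)}, "
              f"p ≈ {p_from_interval(point, lo, hi):.2f}")
    # 1.01 (0.89-1.15): compatible with no difference ...  p ≈ 0.88
    # 1.20 (0.98-1.46): compatible with no difference ...  p ≈ 0.07
    # 1.03 (0.55-1.93): compatible with no difference ...  p ≈ 0.93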
The same type of reasoning applies to mRNA vaccines during pregnancy. For spontaneous abortion, for example, the adjusted odds ratios are close to 1, with wide ranges: one study reports 0.97 (0.57–1.66), another 0.59 (0.29–1.19). Here too the ranges cross 1, meaning the data are compatible with no difference, with a small reduction, or with a small increase in risk. For preterm birth, some estimates remain below 1 with ranges that do not include 1 (for example, 0.7–0.8), suggesting a possible risk reduction; others, however, include 1 and therefore do not allow us to conclude whether there is an effect or not. The same logic applies to the co-administration of vaccines. Most studies show adequate immune responses when COVID-19, influenza, and RSV vaccines are administered together; in some cases, however, the analyses fail to demonstrate clearly that the response is exactly the same as that obtained by administering them separately. Here too, a p-value above 0.05 does not mean that the combined immune response is inferior, but only that, at the chosen reliability threshold, the data are insufficient to declare full equivalence. This is a call for precision, not a health alert.
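The distinction between "no significant difference" and "demonstrated equivalence" can also be made in code. In immunogenicity comparisons, equivalence is typically declared only when the whole compatibility interval of the response ratio falls inside a pre-specified margin; the margin below is in the spirit of those used for geometric mean titer ratios, but both it and the two intervals are assumptions made up for illustration.

    def equivalence_shown(lo, hi, margin=(0.67, 1.50)):
        """True only if the whole 95% interval of the response ratio sits inside the margin."""
        return margin[0] < lo and hi < margin[1]

    # Two made-up intervals for the co-administered vs. separate response ratio:
    print(equivalence_shown(0.80, 1.10))  # True: equivalence shown, even though the
                                          # interval crosses 1 (no significant difference)
    print(equivalence_shown(0.60, 1.10))  # False: no significant difference here either,
                                          # yet equivalence is not demonstrated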
Putting everything into perspective, the message that emerges is clear: respiratory vaccines consistently reduce the risk of severe disease; no solid signals of increased risk are observed for key outcomes during pregnancy, and favorable trends are noted for some vaccines, such as a lower incidence of preterm birth. However, for some outcomes and for some populations, the compatibility intervals remain wide: small increases or reductions in risk cannot be ruled out. Science, here, doesn't speak in black and white: it shows what appears probable and what remains to be clarified. The point is not to use this study to say that "everything is certain and the discussion is closed," nor to instill unfounded doubts. The point is to communicate the numbers well. A good result, expressed along with its margin of uncertainty, is more convincing than any invented certainty. It is stronger than the rhetoric of those who believe vaccines are infallible, and infinitely more solid than the absolute convictions of those who reject them outright.
Of course, some will argue that this language is reserved for scientists and that "clear" messages are needed on social media or in newspapers. But clarity does not come from simplification when precision is lost along the way, fostering unrealistic beliefs. Saying that an effect is unproven is not a way to avoid taking a position, but a way to take one based on data. This is not prudence or ambiguity: it is transparency. Communicating that positive results speak for themselves, even within their margin of uncertainty, is more effective than any slogan, because research does not need to shout certainties to be credible; it needs to tell the whole truth, comprehensibly. Vaccines deserve to be defended for what they are: tools that truly reduce the risk of serious illness, within margins of uncertainty that science measures and openly declares. Describing them this way, with rigor and honesty, is the best way to let the data speak for themselves and to preserve what truly gives science its strength: the transparency of explaining not only what we know, but also how much, and to what extent, we know it.
This is the complete opposite of anti-vaccinationists who, without method, shout against vaccines and advance preconceived, unfounded, ill-formed, or even fraudulent arguments; let us at least try not to be confused with them.




