Better understanding of 'don't confuse absence of evidence with evidence of absence'

INTRODUCTION: In every lecture or textbook on evidence-based healthcare and research synthesis, we invariably find references to the aphorism "absence of evidence does not mean absence of effect". However, it is not rare to find those who forget, ignore or set this aphorism aside when interpreting results from healthcare research. The objective of this text is to clarify the concepts underlying this aphorism, namely absence of evidence and absence of effect.

There are many clinical questions that have not yet been definitively answered by research conducted in the least biased way1,2. This common situation can easily be illustrated by the scenarios below.
For the first scenario, consider that we can find a reasonable number of 'empty' Cochrane systematic reviews, meaning that they did not include any randomized controlled trial (RCT) that fulfilled their inclusion criteria2. Although this seems a very disappointing finding for the decision maker, and sometimes for the research team, well-developed 'empty reviews' play an essential role in the research process: they are reliable evidence that, up to that moment, there is no answer provided by an RCT for that specific clinical question. That said, the implications for research of an 'empty review' are the starting point for designing future studies and prioritizing resources3.
For the second scenario, consider a systematic review including few studies that provide sparse data for an outcome of interest. This means that, even when RCTs exist, the data around the estimate of the effect may be very scarce. Figure 1 illustrates this scenario: consider a meta-analysis that included two fictitious RCTs, "Alfa-trial" and "Beta-trial", comparing the effects of 'drug A' versus placebo on mortality among patients with myocardial infarction (MI). This fixed-effect meta-analysis pooled both RCTs, totalling 54 participants. At first inspection, the diamond crosses the null-effect line and we say that there was no statistically significant difference between 'drug A' and placebo. However, our analysis should not be limited to the duality between significant and non-significant results. We are usually more interested in the estimate of the effect than in the result of a hypothesis test.
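The fixed-effect pooling mentioned above can be sketched as an inverse-variance weighted average of the per-trial log risk ratios. The per-trial values below are hypothetical, chosen only for illustration; they do not reproduce the exact "Alfa-trial" and "Beta-trial" figures.

```python
import math

def fixed_effect_pool(log_rrs, standard_errors, z=1.96):
    """Inverse-variance fixed-effect pooling of log risk ratios with a 95% CI."""
    weights = [1.0 / se ** 2 for se in standard_errors]  # weight = 1 / variance
    pooled_log_rr = sum(w * lr for w, lr in zip(weights, log_rrs)) / sum(weights)
    pooled_se = 1.0 / math.sqrt(sum(weights))
    lo = math.exp(pooled_log_rr - z * pooled_se)
    hi = math.exp(pooled_log_rr + z * pooled_se)
    return math.exp(pooled_log_rr), lo, hi

# Two hypothetical small trials with imprecise estimates on either side of the null
rr, lo, hi = fixed_effect_pool([math.log(1.2), math.log(0.9)], [0.8, 0.7])
print(round(rr, 2), round(lo, 2), round(hi, 2))  # → 1.02 0.36 2.86
```

Pooling two imprecise trials narrows the interval somewhat, but here it still crosses the null, mirroring the diamond in the figure.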
When we now look at the width of the diamond (the limits of the confidence interval), we see how imprecise this estimate is. The point estimate was a risk ratio (RR) of 1.07, corresponding to a 7% increase in the risk of mortality for patients who received drug A. However, the limits of the confidence interval (95% CI 0.25 to 4.71) show that we cannot rule out either a much larger increase in risk (371%) favoring placebo or, in fact, a substantial reduction in mortality (75%) in the intervention group.
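The arithmetic behind such an interval can be sketched with a Wald confidence interval on the log risk-ratio scale. The 2×2 counts below are hypothetical, chosen only to match the scale of the 54-participant scenario; they do not reproduce the exact figures in the text.

```python
import math

def risk_ratio_ci(events_tx, n_tx, events_ctl, n_ctl, z=1.96):
    """Risk ratio with a Wald 95% CI computed on the log scale."""
    rr = (events_tx / n_tx) / (events_ctl / n_ctl)
    # Standard error of log(RR) for a 2x2 table
    se = math.sqrt(1 / events_tx - 1 / n_tx + 1 / events_ctl - 1 / n_ctl)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts: 4/28 deaths on drug A versus 3/26 on placebo (54 participants)
rr, lo, hi = risk_ratio_ci(4, 28, 3, 26)
print(round(rr, 2), round(lo, 2), round(hi, 2))  # → 1.24 0.31 5.01
```

With so few events, the interval spans from a large risk reduction to a several-fold increase, which is exactly why the point estimate alone says little.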
Ignoring the imprecision of the estimate (the width of the confidence interval) and relying only on statistical significance is misleading, and often makes people assume that there is no evidence of effect when, in fact, there are not sufficient data to tell us much about the effects: in this example, we do not even know the direction of the effect, since we cannot rule out either an important benefit or an important harm from the intervention.
In fact, non-statistically significant results sometimes tell us different things about interventions. For a third scenario, consider two RCTs comparing the effects of 'drug B' and 'drug C', each versus placebo, again on mortality among patients with MI (Figure 2). On inspection of the forest plots, no statistically significant difference can be found, and the confidence intervals cross the null in both. Additionally, the point estimates from both are equal: a relative risk of 1. However, the precision of the estimates was very different. Because Gama-trial included very few participants (n = 36) and had a low number of events, its estimate was very imprecise, and we cannot rule out an important reduction (84%) or an important increase (535%) in the risk of mortality at 28 days when using 'drug B'. Therefore, we do not have sufficient data, or evidence, to know even the direction of the effect (absence of evidence).
Conversely, Zeta-trial included a higher number of participants (n = 556), and its CI was very narrow. The limits of the CI show a risk reduction of 6% or an increase of 7%. If we consider this range to be of little clinical importance, we may have data to say that we are confident that 'drug C' had little or no effect on mortality at 28 days. Therefore, we may have sufficient evidence that 'drug C' has no important benefit for patients with MI (evidence of absence).
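The contrast between Gama-trial and Zeta-trial can be reproduced in miniature: two trials with the same point estimate (RR = 1) but very different sample sizes yield very different interval widths. The event counts below are hypothetical and do not reproduce the exact figures in the text.

```python
import math

def risk_ratio_ci(events_tx, n_tx, events_ctl, n_ctl, z=1.96):
    """Risk ratio with a Wald 95% CI computed on the log scale."""
    rr = (events_tx / n_tx) / (events_ctl / n_ctl)
    se = math.sqrt(1 / events_tx - 1 / n_tx + 1 / events_ctl - 1 / n_ctl)
    return rr, math.exp(math.log(rr) - z * se), math.exp(math.log(rr) + z * se)

# Small trial: 3/18 deaths in each arm (n = 36); RR = 1 but the CI is very wide
rr_small, lo_small, hi_small = risk_ratio_ci(3, 18, 3, 18)
# Large trial: 70/278 deaths in each arm (n = 556); same RR = 1, much narrower CI
rr_large, lo_large, hi_large = risk_ratio_ci(70, 278, 70, 278)
print(round(lo_small, 2), round(hi_small, 2))  # → 0.23 4.31
print(round(lo_large, 2), round(hi_large, 2))  # → 0.75 1.33
```

The identical point estimates hide entirely different messages: the wide interval is absence of evidence, while a sufficiently narrow interval around the null can support evidence of absence.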
The availability of data in an analysis is fundamental for interpreting the results of RCTs and systematic reviews. We should always consider the imprecision of the estimate to interpret results correctly. On the other hand, the mere availability of data does not mean we can be confident in the results. Point estimates and CIs do not consider other aspects that are important when drawing conclusions about the effects of interventions, such as the risk of bias in the included RCTs.
The absence of evidence may take many forms when reading a research report. The clinical question might not have been properly studied so far (as in the case of 'empty reviews'), or there might be very few available data (as in the examples of drugs 'A' and 'B').
Misinterpretation of the concept of absence of evidence could lead to an unjustifiable claim for offering unproven treatments because "there is no evidence that they are useless", inverting the principle of the null hypothesis. Some practitioners could argue that, according to their clinical experience, the treatment is "useful", promoting an unproven intervention in a clear reversal of the burden of proof. In a very interesting comment published on the British Medical Journal blog4, the author gave some examples of clinical scenarios in which this could emerge. In a seminal textbook5, Chalmers et al. gave two reasons why we must not offer unproven treatments. First, giving a treatment that does not work (or is unproven) may distract us from those that do. Second, an unproven treatment could lead not only to a waste of resources but also to unexpected harm to patients.
Another situation that may arise is careless wording when writing the conclusion of a study. Alderson et al.6 analyzed the conclusions of 989 Cochrane reviews published between 2001 and 2002. Inappropriate claims of no effect or no difference occurred in about a fifth of the abstracts, mainly because of mistakes related to wording rather than misinterpretation.
Although this topic has already been discussed several times7,8, misconceptions about absence of evidence and evidence of absence persist in the literature. We hope that our discussion improves the reporting and interpretation of clinical research.

Conclusion
The absence of evidence of an effect and the evidence of absence of an effect are two different concepts that matter when reading and interpreting research results. Focusing on the statistical significance of an estimate may be misleading; the precision of the effect estimate should always be considered.

Author contributions
Riera R and Pacheco RL were responsible for paper conception and design. Pacheco RL, Fontes LES and Martimbianco ALC were responsible for drafting the paper. Riera R was responsible for critically revising the content. All authors revised and approved the final version.

Competing interests
No financial, legal or political competing interests with third parties (government, commercial, private foundation, etc.) were disclosed for any aspect of the submitted work (including but not limited to grants, data monitoring board, study design, manuscript preparation, statistical analysis, etc.).