\documentclass[9pt,lineno %, onehalfspacing
]{elife}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage[english]{babel}
\usepackage[dvipsnames]{xcolor}
\usepackage{doi}
\usepackage{tikz} % to draw schematics
\usetikzlibrary{decorations.pathreplacing,calligraphy} % for tikz curly braces
\usepackage{todonotes}

\definecolor{darkblue2}{HTML}{273B81}
\definecolor{darkred2}{HTML}{D92102}

\title{Replication of ``null results'' -- Absence of evidence or evidence of
  absence?}

\author[1*\authfn{1}]{Samuel Pawel}
\author[1\authfn{1}]{Rachel Heyard}
\author[1]{Charlotte Micheloud}
\author[1]{Leonhard Held}
\affil[1]{Epidemiology, Biostatistics and Prevention Institute, Center for Reproducible Science, University of Zurich, Switzerland}

\corr{samuel.pawel@uzh.ch}{SP}

\contrib[\authfn{1}]{Contributed equally}


%% custom commands
\input{defs.tex}
\begin{document}
\maketitle

% %% Disclaimer that a preprint
% \vspace{-3em}
% \begin{center}
%   {\color{red}This is a preprint which has not yet been peer reviewed.}
% \end{center}

<< "setup", include = FALSE >>=
## knitr options
library(knitr)
opts_chunk$set(fig.height = 4,
               echo = FALSE,
               warning = FALSE,
               message = FALSE,
               cache = FALSE,
               eval = TRUE)

## should sessionInfo be printed at the end?
Reproducibility <- TRUE

## packages
library(ggplot2) # plotting
library(gridExtra) # combining ggplots
library(dplyr) # data manipulation
library(reporttools) # reporting of p-values

## not show scientific notation for small numbers
options("scipen" = 10)

## the replication Bayes factor under normality
BFr <- function(to, tr, so, sr) {
    bf <- dnorm(x = tr, mean = 0, sd = so) /
        dnorm(x = tr, mean = to, sd = sqrt(so^2 + sr^2))
    return(bf)
}
## function to format Bayes factors
formatBF. <- function(BF) {
    if (is.na(BF)) {
        BFform <- NA
    } else if (BF > 1) {
        if (BF > 1000) {
            BFform <- "> 1000"
        } else {
            BFform <- as.character(signif(BF, 2))
        }
    } else {
        if (BF < 1/1000) {
            BFform <- "< 1/1000"
        } else {
            BFform <- paste0("1/", signif(1/BF, 2))
        }
    }
    if (!is.na(BFform) && BFform == "1/1") {
        return("1")
    } else {
        return(BFform)
    }
}
formatBF <- Vectorize(FUN = formatBF.)

## Bayes factor under normality with unit-information prior under alternative
BF01 <- function(estimate, se, null = 0, unitvar = 4) {
    bf <- dnorm(x = estimate, mean = null, sd = se) /
        dnorm(x = estimate, mean = null, sd = sqrt(se^2 + unitvar))
    return(bf)
}
@

\begin{abstract}
  In several large-scale replication projects, statistically non-significant
  results in both the original and the replication study have been interpreted
  as a ``replication success''. Here we discuss the logical problems with this
  approach. Non-significance in both studies does not ensure that the studies
  provide evidence for the absence of an effect and ``replication success'' can
  virtually always be achieved if the sample sizes of the studies are small
  enough. In addition, the relevant error rates are not controlled. We show how
  methods, such as equivalence testing and Bayes factors, can be used to
  adequately quantify the evidence for the absence of an effect and how they can
  be applied in the replication setting. Using data from the Reproducibility
  Project: Cancer Biology we illustrate that many original and replication
  studies with ``null results'' are in fact inconclusive. We conclude that it is
  important to also replicate studies with statistically non-significant
  results, but that they should be designed, analyzed, and interpreted
  appropriately.
\end{abstract}

% \rule{\textwidth}{0.5pt} \emph{Keywords}: Bayesian hypothesis testing,
%       equivalence testing, meta-research, null hypothesis, replication success}

% definition from RPCP: null effects - the original authors interpreted their
% data as not showing evidence for a meaningful relationship or impact of an
% intervention.

\section{Introduction}

\textit{Absence of evidence is not evidence of absence} -- the title of the 1995
paper by Douglas Altman and Martin Bland has since become a mantra in the
statistical and medical literature \citep{Altman1995}. Yet, the misconception
that a statistically non-significant result indicates evidence for the absence
of an effect is unfortunately still widespread \citep{Makin2019}. Such a ``null
result'' -- typically characterized by a $p$-value of $p > 0.05$ for the null
hypothesis of an absent effect -- may also occur if an effect is actually
present. For example, if the sample size of a study is chosen to detect an
assumed effect with a power of 80\%, null results will incorrectly occur 20\% of
the time when the assumed effect is actually present. Conversely, if the power
of the study is lower, null results will occur more often. In general, the lower
the power of a study, the greater the ambiguity of a null result. To put a null
result in context, it is therefore critical to know whether the study was
adequately powered and under what assumed effect the power was calculated
\citep{Hoenig2001, Greenland2012}. However, if the goal of a study is to
explicitly quantify the evidence for the absence of an effect, more appropriate
methods designed for this task, such as equivalence testing \citep{Wellek2010}
or Bayes factors \citep{Kass1995}, should be used from the outset.

% two systematic reviews that I found which show that animal studies are very
% much underpowered on average \citep{Jennions2003,Carneiro2018}

The contextualization of null results becomes even more complicated in the
setting of replication studies. In a replication study, researchers attempt to
repeat an original study as closely as possible in order to assess whether
similar results can be obtained with new data \citep{NSF2019}. There have been
various large-scale replication projects in the biomedical and social sciences
in the last decade \citep[among
others]{Prinz2011,Begley2012,Klein2014,Opensc2015,Camerer2016,Camerer2018,Klein2018,Cova2018,Errington2021}.
Most of these projects reported alarmingly low replicability rates across a
broad spectrum of criteria for quantifying replicability. While most projects
restricted their focus to original studies with statistically
significant results (``positive results''), the \emph{Reproducibility Project:
  Psychology} \citep[RPP,][]{Opensc2015}, the \emph{Reproducibility Project:
  Experimental Philosophy} \citep[RPEP,][]{Cova2018}, and the
\emph{Reproducibility Project: Cancer Biology} \citep[RPCB,][]{Errington2021}
also attempted to replicate some original studies with null results.

The RPP excluded the original null results from its overall assessment of
replication success, but the RPCB and the RPEP explicitly defined null results
in both the original and the replication study as a criterion for ``replication
success''. There are several logical problems with this ``non-significance''
criterion. First, if the original study had low statistical power, a
non-significant result is highly inconclusive and does not provide evidence for
the absence of an effect. It is then unclear what exactly the goal of the
replication should be -- to replicate the inconclusiveness of the original
result? On the other hand, if the original study was adequately powered, a
non-significant result may indeed provide some evidence for the absence of an
effect when analyzed with appropriate methods, so that the goal of the
replication is clearer. However, the criterion does not distinguish between
these two cases. Second, with this criterion researchers can virtually always
achieve replication success by conducting two studies with very small sample
sizes, such that the $p$-values are non-significant and the results are
inconclusive. This is because the null hypothesis under which the $p$-values are
computed is misaligned with the goal of inference, which is to quantify the
evidence for the absence of an effect. We will discuss methods that are better
aligned with this inferential goal. % in Section~\ref{sec:methods}.
Third, the criterion does not control the error of falsely claiming the absence
of an effect at some predetermined rate. This is in contrast to the standard
replication success criterion of requiring significance from both studies
\citep[also known as the two-trials rule, see chapter 12.2.8 in][]{Senn2008},
which ensures that the error of falsely claiming the presence of an effect is
controlled at a rate equal to the squared significance level (for example,
$5\% \times 5\% = 0.25\%$ for a $5\%$ significance level). The non-significance
criterion may be intended to complement the two-trials rule for null results,
but it fails to do so in this respect, which may be important to regulators,
funders, and researchers. We will now demonstrate these issues and potential
solutions using the null results from the RPCB.


\section{Null results from the Reproducibility Project: Cancer Biology}
\label{sec:rpcb}

<< "data" >>=
## data
rpcbRaw <- read.csv(file = "../data/rpcb-effect-level.csv")
rpcb <- rpcbRaw %>%
    mutate(
        ## recompute one-sided p-values based on normality
        ## (in direction of original effect estimate)
        zo = smdo/so,
        zr = smdr/sr,
        po1 = pnorm(q = abs(zo), lower.tail = FALSE),
        pr1 = pnorm(q = abs(zr), lower.tail = ifelse(sign(zo) < 0, TRUE, FALSE)),
        ## compute some other quantities
        c = so^2/sr^2, # variance ratio
        d = smdr/smdo, # relative effect size
        po2 = 2*(1 - pnorm(q = abs(zo))), # two-sided original p-value
        pr2 = 2*(1 - pnorm(q = abs(zr))), # two-sided replication p-value
        sm = 1/sqrt(1/so^2 + 1/sr^2), # standard error of fixed effect estimate
        smdm = (smdo/so^2 + smdr/sr^2)*sm^2, # fixed effect estimate
        pm2 = 2*(1 - pnorm(q = abs(smdm/sm))), # two-sided fixed effect p-value
        Q = (smdo - smdr)^2/(so^2 + sr^2), # Q-statistic
        pQ = pchisq(q = Q, df = 1, lower.tail = FALSE), # p-value from Q-test
        BForig = BF01(estimate = smdo, se = so), # unit-information BF for original
        BForigformat = formatBF(BF = BForig),
        BFrep = BF01(estimate = smdr, se = sr), # unit-information BF for replication
        BFrepformat = formatBF(BF = BFrep)
    )

rpcbNull <- rpcb %>%
    filter(resulto == "Null")

## check the sample sizes
## paper 5 (https://osf.io/q96yj) - 1 Cohen's d - sample size correspond to forest plot
## paper 9 (https://osf.io/yhq4n) - 3 Cohen's w- sample size do not correspond at all
## paper 15 (https://osf.io/ytrx5) - 1 r - sample size correspond to forest plot
## paper 19 (https://osf.io/465r3) - 2 Cohen's dz - sample size correspond to forest plot
## paper 20 (https://osf.io/acg8s) - 1 r and 1 Cliff's delta - sample size correspond to forest plot
## paper 21 (https://osf.io/ycq5g) - 1 Cohen's d - sample size correspond to forest plot
## paper 24 (https://osf.io/pcuhs) - 2 Cohen's d - sample size correspond to forest plot
## paper 28 (https://osf.io/gb7sr/) - 3 Cohen's d - sample size correspond to forest plot
## paper 29 (https://osf.io/8acw4) - 1 Cohen's d - sample size do not correspond, seem to be double
## paper 41 (https://osf.io/qnpxv) - 1 Hazard ratio - sample size correspond to forest plot
## paper 47 (https://osf.io/jhp8z) - 2 r - sample size correspond to forest plot
## paper 48 (https://osf.io/zewrd) - 1 r - sample size do not correspond to forest plot for original study
@

\begin{figure}[!htb]
<< "2-example-studies", fig.height = 3.25 >>=
## some evidence for absence of effect: https://doi.org/10.7554/eLife.45120
## (the replication effect as reported in the data set could not be located in
## the original figure; we take the data set at face value)
## https://iiif.elifesciences.org/lax/45120%2Felife-45120-fig4-v1.tif/full/1500,/0/default.jpg
study1 <- "(20, 1, 1)"
## absence of evidence
study2 <- "(29, 2, 2)"
## https://iiif.elifesciences.org/lax/25306%2Felife-25306-fig5-v2.tif/full/1500,/0/default.jpg
plotDF1 <- rpcbNull %>%
    filter(id %in% c(study1, study2)) %>%
    mutate(label = ifelse(id == study1,
                          "Goetz et al. (2011)\nEvidence of absence",
                          "Dawson et al. (2011)\nAbsence of evidence"))
## ## RH: this data is really a mess. turns out for Dawson n represents the group
## ## size (n = 6 in https://osf.io/8acw4) while in Goetz it is the sample size of
## ## the whole experiment (n = 34 and 61 in https://osf.io/acg8s). in study 2 the
## ## so multiply by 2 to have the total sample size, see Figure 5A
## ## https://doi.org/10.7554/eLife.25306.012
## plotDF1$no[plotDF1$id == study2] <- plotDF1$no[plotDF1$id == study2]*2
## plotDF1$nr[plotDF1$id == study2] <- plotDF1$nr[plotDF1$id == study2]*2
## create plot showing two example study pairs with null results
conflevel <- 0.95
ggplot(data = plotDF1) +
    facet_wrap(~ label) +
    geom_hline(yintercept = 0, lty = 2, alpha = 0.3) +
    geom_pointrange(aes(x = "Original", y = smdo,
                        ymin = smdo - qnorm(p = (1 + conflevel)/2)*so,
                        ymax = smdo + qnorm(p = (1 + conflevel)/2)*so), fatten = 3) +
    geom_pointrange(aes(x = "Replication", y = smdr,
                        ymin = smdr - qnorm(p = (1 + conflevel)/2)*sr,
                        ymax = smdr + qnorm(p = (1 + conflevel)/2)*sr), fatten = 3) +
    geom_text(aes(x = 1.05, y = 2.5,
                  label = paste("italic(n) ==", no)), col = "darkblue",
              parse = TRUE, size = 3.8, hjust = 0) +
    geom_text(aes(x = 2.05, y = 2.5,
                  label = paste("italic(n) ==", nr)), col = "darkblue",
              parse = TRUE, size = 3.8, hjust = 0) +
    geom_text(aes(x = 1.05, y = 3,
                  label = paste("italic(p) ==", formatPval(po))), col = "darkblue",
              parse = TRUE, size = 3.8, hjust = 0) +
    geom_text(aes(x = 2.05, y = 3,
                  label = paste("italic(p) ==", formatPval(pr))), col = "darkblue",
              parse = TRUE, size = 3.8, hjust = 0) +
    labs(x = "", y = "Standardized mean difference (SMD)") +
    theme_bw() +
    theme(panel.grid.minor = element_blank(),
          panel.grid.major.x = element_blank(),
          strip.text = element_text(size = 12, margin = margin(4), vjust = 1.5),
          strip.background = element_rect(fill = alpha("tan", 0.4)),
          axis.text = element_text(size = 12))
@
\caption{\label{fig:2examples} Two examples of original and replication study
  pairs which meet the non-significance replication success criterion from the
  Reproducibility Project: Cancer Biology \citep{Errington2021}. Shown are
  standardized mean difference effect estimates with \Sexpr{round(conflevel*100,
    2)}\% confidence intervals, sample sizes, and two-sided $p$-values for the
  null hypothesis that the standardized mean difference is zero.}
\end{figure}
Figure~\ref{fig:2examples} shows standardized mean difference effect estimates
with \Sexpr{round(100*conflevel, 2)}\% confidence intervals from two RPCB study
pairs. Both are ``null results'' and meet the non-significance criterion for
replication success (the two-sided $p$-values are greater than 0.05 in both the
original and the replication study), but intuition suggests that these two
pairs are very different.

The original study from \citet{Dawson2011} and its replication both show large
effect estimates in magnitude, but due to the small sample sizes, the
uncertainty of these estimates is very large, too. If the sample sizes of the
studies were larger and the point estimates remained the same, intuitively both
studies would provide evidence for a non-zero effect. However, with the sample
sizes that were actually used, the results seem inconclusive. In contrast, the
effect estimates from \citet{Goetz2011} and its replication are much smaller in
magnitude and their uncertainty is also smaller because the studies used larger
sample sizes. Intuitively, these studies seem to provide some evidence for a
zero (or negligibly small) effect. While these two examples show the qualitative
difference between absence of evidence and evidence of absence, we will now
discuss how the two can be quantitatively distinguished.

\section{Methods for assessing replicability of null results}
\label{sec:methods}
There are both frequentist and Bayesian methods that can be used for assessing
evidence for the absence of an effect. \citet{Anderson2016} provide an excellent
summary of both approaches in the context of replication studies in psychology.
We now briefly discuss two possible approaches -- frequentist equivalence
testing and Bayesian hypothesis testing -- and their application to the RPCB
data.



\subsection{Equivalence testing}
Equivalence testing was developed in the context of clinical trials to assess
whether a new treatment -- typically cheaper or with fewer side effects than the
established treatment -- is practically equivalent to the established treatment
\citep{Wellek2010}. The method can also be used to assess
whether an effect is practically equivalent to the value of an absent effect,
usually zero. Using equivalence testing as a remedy for non-significant results
has been suggested by several authors \citep{Hauck1986, Campbell2018}. The main
challenge is to specify the margin $\Delta > 0$ that defines an equivalence
range $[-\Delta, +\Delta]$ in which an effect is considered as absent for
practical purposes. The goal is then to reject the composite null hypothesis
that the true effect is outside the equivalence range. This is
in contrast to the usual null hypothesis of a superiority test which states that
the effect is zero, see Figure~\ref{fig:hypotheses} for an illustration.

\begin{figure}[!htb]
  \begin{center}
    \begin{tikzpicture}[ultra thick]
      \draw[stealth-stealth] (0,0) -- (6,0);
      \node[text width=4.5cm, align=center] at (3,-1) {Effect size};
      \draw (2,0.2) -- (2,-0.2) node[below]{$-\Delta$};
      \draw (3,0.2) -- (3,-0.2) node[below]{$0$};
      \draw (4,0.2) -- (4,-0.2) node[below]{$+\Delta$};

      \node[text width=5cm, align=left] at (0,1.25) {\textbf{Equivalence}};
      \draw [draw={darkred2},decorate,decoration={brace,amplitude=5pt}]
      (2.05,0.75) -- (3.95,0.75) node[midway,yshift=1.5em]{\textcolor{darkred2}{$H_1$}};
      \draw [draw={darkblue2},decorate,decoration={brace,amplitude=5pt,aspect=0.6}]
      (0,0.75) -- (1.95,0.75) node[pos=0.6,yshift=1.5em]{\textcolor{darkblue2}{$H_0$}};
      \draw [draw={darkblue2},decorate,decoration={brace,amplitude=5pt,aspect=0.4}]
      (4.05,0.75) -- (6,0.75) node[pos=0.4,yshift=1.5em]{\textcolor{darkblue2}{$H_0$}};

      \node[text width=5cm, align=left] at (0,2.5) {\textbf{Superiority}};
      \draw [decorate,decoration={brace,amplitude=5pt}]
      (3,2) -- (3,2) node[midway,yshift=1.5em]{\textcolor{darkblue2}{$H_0$}};
      \draw[darkblue2] (3,1.95) -- (3,2.2);
      \draw [draw={darkred2},decorate,decoration={brace,amplitude=5pt,aspect=0.6}]
      (0,2) -- (2.95,2) node[pos=0.6,yshift=1.5em]{\textcolor{darkred2}{$H_1$}};
      \draw [draw={darkred2},decorate,decoration={brace,amplitude=5pt,aspect=0.4}]
      (3.05,2) -- (6,2) node[pos=0.4,yshift=1.5em]{\textcolor{darkred2}{$H_1$}};

      % \node[text width=5cm, align=left] at (0,5.5) {\textbf{Superiority  \\ (one-sided)}};
      % \draw [draw={darkred2},decorate,decoration={brace,amplitude=5pt,aspect=0.4}]
      % (3.05,5) -- (6,5) node[pos=0.4,yshift=1.5em]{\textcolor{darkred2}{$H_1$}};
      % \draw [draw={darkblue2},decorate,decoration={brace,amplitude=5pt,aspect=0.6}]
      % (0,5) -- (3,5) node[pos=0.6,yshift=1.5em]{\textcolor{darkblue2}{$H_0$}};

      \draw [dashed] (2,0) -- (2,0.75);
      \draw [dashed] (4,0) -- (4,0.75);
      \draw [dashed] (3,0) -- (3,0.75);
      \draw [dashed] (3,1.5) -- (3,1.9);
      % \draw [dashed] (3,3.9) -- (3,5);
    \end{tikzpicture}
  \end{center}
  \caption{Null hypothesis ($H_0$) and alternative hypothesis ($H_1$) for
    superiority and equivalence tests (with equivalence margin $\Delta > 0$).}
  \label{fig:hypotheses}
\end{figure}

To ensure that the null hypothesis is falsely rejected at most
$\alpha \times 100\%$ of the time, the standard approach is to declare
equivalence if the $(1-2\alpha)\times 100\%$ confidence interval for the effect
is contained within the equivalence range (for example, a 90\% confidence
interval for $\alpha = 5\%$) \citep{Westlake1972}. This is equivalent to two
one-sided tests (TOST) of the null hypotheses that the effect is greater than
$+\Delta$ and smaller than $-\Delta$, respectively, both being significant at
level $\alpha$ \citep{Schuirmann1987}. A quantitative measure of evidence for the absence of an
effect is then given by the maximum of the two one-sided $p$-values (the TOST
$p$-value). A reasonable replication success criterion for null results may
therefore be to require that both the original and the replication TOST
$p$-values be smaller than some level $\alpha$ (e.g., 0.05), or, equivalently,
that their $(1-2\alpha)\times 100\%$ confidence intervals (e.g., 90\% for
$\alpha = 0.05$) are included in the equivalence region. In contrast to the
non-significance criterion, this criterion controls the error of falsely
claiming replication success at level $\alpha^{2}$ when the true effect is
outside the equivalence range, thus complementing the usual two-trials rule.
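
For illustration, the TOST $p$-value can be computed from an effect estimate
and its standard error under normality. The following R sketch (shown but not
evaluated, and using hypothetical numbers) mirrors the computation we use for
the RPCB data below.

<< "tost-illustration", echo = TRUE, eval = FALSE >>=
## TOST p-value: maximum of the two one-sided p-values for the null
## hypotheses that the effect is greater than +margin or less than -margin
ptost <- function(estimate, se, margin) {
    pmax(pnorm(q = estimate, mean = margin, sd = se, lower.tail = TRUE),
         pnorm(q = estimate, mean = -margin, sd = se, lower.tail = FALSE))
}
ptost(estimate = 0.4, se = 0.3, margin = 0.74) # about 0.13, no equivalence
@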


\begin{figure}
  \begin{fullwidth}
<< "plot-null-findings-rpcb", fig.height = 8.25, fig.width = "0.95\\linewidth" >>=
## compute TOST p-values
## Wellek (2010): strict - 0.36 # liberal - .74
# Cohen: small - 0.3 # medium - 0.5 # large - 0.8
## 80-125% convention for AUC and Cmax FDA/EMA
## 1.3 for oncology OR/HR -> log(1.3)*sqrt(3)/pi = 0.1446
margin <- 0.74
conflevel <- 0.9
rpcbNull$ptosto <- with(rpcbNull, pmax(pnorm(q = smdo, mean = margin, sd = so,
                                             lower.tail = TRUE),
                                       pnorm(q = smdo, mean = -margin, sd = so,
                                             lower.tail = FALSE)))
rpcbNull$ptostr <- with(rpcbNull, pmax(pnorm(q = smdr, mean = margin, sd = sr,
                                             lower.tail = TRUE),
                                       pnorm(q = smdr, mean = -margin, sd = sr,
                                             lower.tail = FALSE)))
## highlight the studies from Goetz and Dawson
ex1 <- "(20, 1, 1)"
ind1 <- which(rpcbNull$id == ex1)
ex2 <- "(29, 2, 2)"
ind2 <- which(rpcbNull$id == ex2)
rpcbNull$id <- ifelse(rpcbNull$id == ex1,
                      "(20, 1, 1) - Goetz et al. (2011)", rpcbNull$id)
rpcbNull$id <- ifelse(rpcbNull$id == ex2,
                      "(29, 2, 2) - Dawson et al. (2011)", rpcbNull$id)
## create plots of all study pairs with null results in original study
ggplot(data = rpcbNull) +
    facet_wrap(~ id, scales = "free", ncol = 3) +
    geom_hline(yintercept = 0, lty = 2, alpha = 0.25) +
    ## equivalence margin
    geom_hline(yintercept = c(-margin, margin), lty = 3, col = 2, alpha = 0.9) +
    geom_pointrange(aes(x = "Original", y = smdo,
                        ymin = smdo - qnorm(p = (1 + conflevel)/2)*so,
                        ymax = smdo + qnorm(p = (1 + conflevel)/2)*so),
                    size = 0.25, fatten = 2) +
    geom_pointrange(aes(x = "Replication", y = smdr,
                        ymin = smdr - qnorm(p = (1 + conflevel)/2)*sr,
                        ymax = smdr + qnorm(p = (1 + conflevel)/2)*sr),
                    size = 0.25, fatten = 2) +
    annotate(geom = "ribbon", x = seq(0, 3, 0.01), ymin = -margin, ymax = margin,
             alpha = 0.05, fill = 2) +
    labs(x = "", y = "Standardized mean difference (SMD)") +
    geom_text(aes(x = 1.05, y = pmax(smdo + 2.5*so, smdr + 2.5*sr, 1.1*margin),
                  label = paste("italic(n) ==", no)), col = "darkblue",
              parse = TRUE, size = 2.3, hjust = 0, vjust = 2) +
    geom_text(aes(x = 2.05, y = pmax(smdo + 2.5*so, smdr + 2.5*sr, 1.1*margin),
                  label = paste("italic(n) ==", nr)), col = "darkblue",
              parse = TRUE, size = 2.3, hjust = 0, vjust = 2) +
    geom_text(aes(x = 1.05, y = pmax(smdo + 2.5*so, smdr + 2.5*sr, 1.1*margin),
                  label = paste("italic(p)",
                                ifelse(po < 0.0001, "", "=="),
                                formatPval(po))), col = "darkblue",
              parse = TRUE, size = 2.3, hjust = 0) +
    geom_text(aes(x = 2.05, y = pmax(smdo + 2.5*so, smdr + 2.5*sr, 1.1*margin),
                  label = paste("italic(p)",
                                ifelse(pr < 0.0001, "", "=="),
                                formatPval(pr))), col = "darkblue",
              parse = TRUE, size = 2.3, hjust = 0) +
    geom_text(aes(x = 1.05, y = pmax(smdo + 2.5*so, smdr + 2.5*sr, 1.1*margin),
                  label = paste("italic(p)['TOST']",
                                ifelse(ptosto < 0.0001, "", "=="),
                                formatPval(ptosto))),
              col = "darkblue", parse = TRUE, size = 2.3, hjust = 0, vjust = 3) +
    geom_text(aes(x = 2.05, y = pmax(smdo + 2.5*so, smdr + 2.5*sr, 1.1*margin),
                  label = paste("italic(p)['TOST']",
                                ifelse(ptostr < 0.0001, "", "=="),
                                formatPval(ptostr))),
              col = "darkblue", parse = TRUE, size = 2.3, hjust = 0, vjust = 3) +
    geom_text(aes(x = 1.05, y = pmax(smdo + 2.5*so, smdr + 2.5*sr, 1.1*margin),
                  label = paste("BF['01']", ifelse(BForig <= 1/1000, "", "=="),
                                BForigformat)), col = "darkblue", parse = TRUE,
              size = 2.3, hjust = 0, vjust = 4.5) +
    geom_text(aes(x = 2.05, y = pmax(smdo + 2.5*so, smdr + 2.5*sr, 1.1*margin),
                  label = paste("BF['01']", ifelse(BFrep <= 1/1000, "", "=="),
                                BFrepformat)), col = "darkblue", parse = TRUE,
              size = 2.3, hjust = 0, vjust = 4.5) +
    coord_cartesian(xlim = c(1.1, 2.4)) +
    theme_bw() +
    theme(panel.grid.minor = element_blank(),
          panel.grid.major = element_blank(),
          strip.text = element_text(size = 8, margin = margin(3), vjust = 2),
          strip.background = element_rect(fill = alpha("tan", 0.4)),
          axis.text = element_text(size = 8))
@
\caption{Standardized mean difference (SMD) effect estimates with
  \Sexpr{round(conflevel*100, 2)}\% confidence intervals for the ``null results''
  and their replication studies from the Reproducibility Project: Cancer Biology
  \citep{Errington2021}. The identifier above each plot indicates (original
  paper number, experiment number, effect number). Two original effect estimates
  from paper 48 were statistically significant at $p < 0.05$, but were
  interpreted as null results by the original authors and therefore treated as
  null results by the RPCB. The two examples from Figure~\ref{fig:2examples} are
  indicated in the plot titles. The dashed gray line represents the value of no
  effect ($\text{SMD} = 0$), while the dotted red lines represent the
  equivalence range with a margin of $\Delta = \Sexpr{margin}$, classified as
  ``liberal'' by \citet[Table 1.1]{Wellek2010}. The $p$-values $p_{\text{TOST}}$
  are the maximum of the two one-sided $p$-values for the null hypotheses of the
  effect being greater than $+\Delta$ and less than $-\Delta$, respectively. The Bayes factors
  $\BF_{01}$ quantify the evidence for the null hypothesis
  $H_{0} \colon \text{SMD} = 0$ against the alternative
  $H_{1} \colon \text{SMD} \neq 0$ with normal unit-information prior assigned
  to the SMD under $H_{1}$.}
\label{fig:nullfindings}
\end{fullwidth}
\end{figure}

<< "successes-RPCB" >>=
ntotal <- nrow(rpcbNull)

## successes non-significance criterion
nullSuccesses <- sum(rpcbNull$po > 0.05 & rpcbNull$pr > 0.05)

## success equivalence testing criterion
equivalenceSuccesses <- sum(rpcbNull$ptosto <= 0.05 & rpcbNull$ptostr <= 0.05)
ptosto1 <- rpcbNull$ptosto[ind1]
ptostr1 <- rpcbNull$ptostr[ind1]
ptosto2 <- rpcbNull$ptosto[ind2]
ptostr2 <- rpcbNull$ptostr[ind2]

## success BF criterion
bfSuccesses <- sum(rpcbNull$BForig > 3 & rpcbNull$BFrep > 3)
@

Returning to the RPCB data, Figure~\ref{fig:nullfindings} shows the standardized
mean difference effect estimates with \Sexpr{round(conflevel*100, 2)}\%
confidence intervals for the 15 effects which were treated as quantitative null
results by the RPCB.\footnote{There are four original studies with null effects
  for which several internal replication studies were conducted, leading in
  total to 20 replications of null effects. As in the RPCB main analysis
  \citep{Errington2021}, we aggregated their SMD estimates into a single SMD
  estimate with fixed-effect meta-analysis.} Most of them showed non-significant
$p$-values ($p > 0.05$) in the original study, but there are two effects in
paper 48 which the original authors regarded as null results despite their
statistical significance. We see that there are \Sexpr{nullSuccesses}
``successes'' (with $p > 0.05$ in both the original and the replication study)
out of the \Sexpr{ntotal} null effects in total, as reported in Table 1
from~\citet{Errington2021}.
% , and which were therefore treated as null results also by the RPCB.
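
The fixed-effect meta-analytic aggregation mentioned in the footnote amounts to
inverse-variance weighting of the individual estimates; a minimal sketch with
hypothetical numbers:

<< "fixed-effect-illustration", echo = TRUE, eval = FALSE >>=
## fixed-effect meta-analysis of two hypothetical SMD estimates
smd <- c(0.2, 0.5) # SMD estimates from internal replications
se <- c(0.3, 0.4)  # corresponding standard errors
w <- 1/se^2        # inverse-variance weights
sum(w*smd)/sum(w)  # pooled SMD estimate
1/sqrt(sum(w))     # pooled standard error
@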

We will now apply equivalence testing to the RPCB data. The dotted red lines
represent an equivalence range for the margin $\Delta =
\Sexpr{margin}$, % , for which the shown TOST $p$-values are computed.
which \citet[Table 1.1]{Wellek2010} classifies as ``liberal''. However, even
with this generous margin, only \Sexpr{equivalenceSuccesses} of the
\Sexpr{ntotal} study pairs are able to establish replication success at the 5\%
level, in the sense that both the original and the replication 90\% confidence
interval fall within the equivalence range (or, equivalently, that their TOST
$p$-values are smaller than $0.05$). For the remaining \Sexpr{ntotal -
  equivalenceSuccesses} studies, the situation remains inconclusive and there is
no evidence for the absence or the presence of the effect. For instance, the
previously discussed example from \citet{Goetz2011} marginally fails the
criterion ($p_{\text{TOST}} = \Sexpr{formatPval(ptosto1)}$ in the original study
and $p_{\text{TOST}} = \Sexpr{formatPval(ptostr1)}$ in the replication), while
the example from \citet{Dawson2011} is a clearer failure
($p_{\text{TOST}} = \Sexpr{formatPval(ptosto2)}$ in the original study and
$p_{\text{TOST}} = \Sexpr{formatPval(ptostr2)}$ in the replication).



% We chose the margin $\Delta = \Sexpr{margin}$ primarily for illustrative
% purposes and because effect sizes in preclinical research are typically much
% larger than in clinical research.
The post-hoc determination of the equivalence margin is debatable. Ideally, the
margin should be determined on a case-by-case basis before the studies are
conducted by researchers familiar with the subject matter. One could also argue
that the chosen margin $\Delta = \Sexpr{margin}$ is too lax compared to margins
typically used in clinical research; for instance, in oncology, a margin of
$\Delta = \log(1.3)$ is commonly used for log odds/hazard ratios, whereas in
bioequivalence studies a margin of $\Delta =
\log(1.25) % = \Sexpr{round(log(1.25), 2)}
$ is the convention, which translates to $\Delta = % \log(1.3)\sqrt{3}/\pi =
\Sexpr{round(log(1.3)*sqrt(3)/pi, 2)}$ and $\Delta = % \log(1.25)\sqrt{3}/\pi =
\Sexpr{round(log(1.25)*sqrt(3)/pi, 2)}$ on the SMD scale, respectively, using
the $\text{SMD} = (\surd{3} / \pi) \log\text{OR}$ conversion \citep[p.
233]{Cooper2019}. Therefore, we report a sensitivity analysis in
Figure~\ref{fig:sensitivity}. The top plot shows the number of successful
replications as a function of the margin $\Delta$ and for different TOST
$p$-value thresholds. Such an ``equivalence curve'' approach was first proposed
by \citet{Hauck1986}; see also \citet{Campbell2021} for alternative approaches
to post-hoc equivalence margin specification. We see that for realistic margins
between 0 and 1, the proportion of replication successes remains below 50\%. To
achieve a success rate of 11 out of the 15 studies, as with the RPCB
non-significance criterion, unrealistic margins of $\Delta > 2$ are required,
which illustrates the paucity of evidence provided by these studies.
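
The margin conversions used above are straightforward to reproduce; a minimal
sketch:

<< "margin-conversion", echo = TRUE, eval = FALSE >>=
## converting conventional margins for odds/hazard ratios to the SMD scale
## via SMD = log(OR)*sqrt(3)/pi (Cooper et al., 2019, p. 233)
log(1.3)*sqrt(3)/pi  # oncology margin, approximately 0.14
log(1.25)*sqrt(3)/pi # bioequivalence margin, approximately 0.12
@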


\begin{figure}[!htb]
<< "sensitivity", fig.height = 6.5 >>=
## compute number of successful replications as a function of the equivalence margin
marginseq <- seq(0.01, 4.5, 0.01)
alphaseq <- c(0.005, 0.05, 0.1)
sensitivityGrid <- expand.grid(m = marginseq, a = alphaseq)
equivalenceDF <- lapply(X = seq(1, nrow(sensitivityGrid)), FUN = function(i) {
    m <- sensitivityGrid$m[i]
    a <- sensitivityGrid$a[i]
    rpcbNull$ptosto <- with(rpcbNull, pmax(pnorm(q = smdo, mean = m, sd = so,
                                                 lower.tail = TRUE),
                                           pnorm(q = smdo, mean = -m, sd = so,
                                                 lower.tail = FALSE)))
    rpcbNull$ptostr <- with(rpcbNull, pmax(pnorm(q = smdr, mean = m, sd = sr,
                                                 lower.tail = TRUE),
                                           pnorm(q = smdr, mean = -m, sd = sr,
                                                 lower.tail = FALSE)))
    successes <- sum(rpcbNull$ptosto <= a & rpcbNull$ptostr <= a)
    data.frame(margin = m, alpha = a,
               successes = successes, proportion = successes/nrow(rpcbNull))
}) %>%
    bind_rows()

## plot number of successes as a function of margin
nmax <- nrow(rpcbNull)
bks <- seq(0, nmax, round(nmax/5))
labs <- paste0(bks, " (", bks/nmax*100, "%)")
plotA <- ggplot(data = equivalenceDF,
                aes(x = margin, y = successes,
                    color = factor(alpha, ordered = TRUE))) +
    facet_wrap(~ 'italic("p")["TOST"] <= alpha ~ "in original and replication study"',
               labeller = label_parsed) +
    geom_vline(xintercept = margin, lty = 2, alpha = 0.4) +
    geom_step(alpha = 0.8, linewidth = 0.8) +
    scale_y_continuous(breaks = bks, labels = labs) +
    ## scale_y_continuous(labels = scales::percent) +
    guides(color = guide_legend(reverse = TRUE)) +
    labs(x = bquote("Equivalence margin" ~ Delta),
         y = "Successful replications",
         color = bquote("threshold" ~ alpha)) +
    theme_bw() +
    theme(panel.grid.minor = element_blank(),
          panel.grid.major = element_blank(),
          strip.background = element_rect(fill = alpha("tan", 0.4)),
          strip.text = element_text(size = 12),
          legend.position = c(0.85, 0.25),
          plot.background = element_rect(fill = "transparent", color = NA),
          ## axis.text.y = element_text(hjust = 0),
          legend.box.background = element_rect(fill = "transparent", colour = NA))

## compute number of successful replications as a function of the prior scale
priorsdseq <- seq(0, 40, 0.1)
bfThreshseq <- c(3, 6, 10)
sensitivityGrid2 <- expand.grid(s = priorsdseq, thresh = bfThreshseq)
bfDF <- lapply(X = seq(1, nrow(sensitivityGrid2)), FUN = function(i) {
    priorsd <- sensitivityGrid2$s[i]
    thresh <- sensitivityGrid2$thresh[i]
    rpcbNull$BForig <- with(rpcbNull, BF01(estimate = smdo, se = so, unitvar = priorsd^2))
    rpcbNull$BFrep <- with(rpcbNull, BF01(estimate = smdr, se = sr, unitvar = priorsd^2))
    successes <- sum(rpcbNull$BForig >= thresh & rpcbNull$BFrep >= thresh)
    data.frame(priorsd = priorsd, thresh = thresh,
               successes = successes, proportion = successes/nrow(rpcbNull))
}) %>%
    bind_rows()

## plot number of successes as a function of prior sd
plotB <- ggplot(data = bfDF,
                aes(x = priorsd, y = successes, color = factor(thresh, ordered = TRUE))) +
    facet_wrap(~ '"BF"["01"] >= gamma ~ "in original and replication study"',
               labeller = label_parsed) +
    geom_vline(xintercept = 2, lty = 2, alpha = 0.4) + # prior SD of 2 used in main analysis
    geom_step(alpha = 0.8, linewidth = 0.8) +
    scale_y_continuous(breaks = bks, labels = labs, limits = c(0, nmax)) +
    ## scale_y_continuous(labels = scales::percent, limits = c(0, 1)) +
    labs(x = "Prior standard deviation",
         y = "Successful replications ",
         color = bquote("threshold" ~ gamma)) +
    theme_bw() +
    theme(panel.grid.minor = element_blank(),
          panel.grid.major = element_blank(),
          strip.background = element_rect(fill = alpha("tan", 0.4)),
          strip.text = element_text(size = 12),
          legend.position = c(0.85, 0.25),
          plot.background = element_rect(fill = "transparent", color = NA),
          ## axis.text.y = element_text(hjust = 0),
          legend.box.background = element_rect(fill = "transparent", colour = NA))

grid.arrange(plotA, plotB, ncol = 1)
@

\caption{Number of successful replications of original null results in
  the RPCB as a function of the margin $\Delta$ of the equivalence test
  ($p_{\text{TOST}} \leq \alpha$ in both studies) or the standard deviation of
  the normal prior distribution for the effect under the alternative $H_{1}$ of
  the Bayes factor test ($\BF_{01} \geq \gamma$ in both studies). The dashed
  gray lines represent the parameters used in the main analysis shown in
  Figure~\ref{fig:nullfindings}.}
\label{fig:sensitivity}
\end{figure}


\subsection{Bayesian hypothesis testing}
The distinction between absence of evidence and evidence of absence is naturally
built into the Bayesian approach to hypothesis testing. A central measure of
evidence is the Bayes factor \citep{Kass1995}, which is the updating factor of
the prior odds to the posterior odds of the null hypothesis $H_{0}$ versus the
alternative hypothesis $H_{1}$
\begin{align*}
  \underbrace{\frac{\Pr(H_{0} \given \mathrm{data})}{\Pr(H_{1} \given
  \mathrm{data})}}_{\mathrm{Posterior~odds}}
  =  \underbrace{\frac{\Pr(H_{0})}{\Pr(H_{1})}}_{\mathrm{Prior~odds}}
  \times \underbrace{\frac{p(\mathrm{data} \given H_{0})}{p(\mathrm{data}
  \given H_{1})}}_{\mathrm{Bayes~factor}~\BF_{01}}.
\end{align*}
The Bayes factor quantifies how much the observed data have increased or
decreased the probability of the null hypothesis $H_{0}$ relative to the
alternative $H_{1}$. If the null hypothesis states the absence of an effect, a
Bayes factor greater than one (\mbox{$\BF_{01} > 1$}) indicates evidence for the
absence of the effect, a Bayes factor smaller than one (\mbox{$\BF_{01} < 1$})
indicates evidence for the presence of the effect, and a Bayes factor not
much different from one indicates absence of evidence for either hypothesis
(\mbox{$\BF_{01} \approx 1$}).

When the observed data are dichotomized into positive (\mbox{$p < 0.05$}) or null
results (\mbox{$p > 0.05$}), the Bayes factor based on a null result is the
probability of observing \mbox{$p > 0.05$} when the effect is indeed absent
(which is $95\%$) divided by the probability of observing $p > 0.05$ when the
effect is indeed present (which is one minus the power of the study). For
example, if the power is 90\%, we have
\mbox{$\BF_{01} = 95\%/10\% = \Sexpr{round(0.95/0.1, 2)}$} indicating almost ten
times more evidence for the absence of the effect than for its presence. On the
other hand, if the power is only 50\%, we have
\mbox{$\BF_{01} = 95\%/50\% = \Sexpr{round(0.95/0.5,2)}$} indicating only
slightly more evidence for the absence of the effect. This example also
highlights the main challenge with Bayes factors -- the specification of the
alternative hypothesis $H_{1}$. The assumed effect under $H_{1}$ is directly
related to the power of the study, and researchers who assume different effects
under $H_{1}$ will end up with different Bayes factors. Instead of specifying a
single effect, one therefore typically specifies a ``prior distribution'' of
plausible effects. Importantly, the prior distribution, like the equivalence
margin, should be determined by researchers with subject knowledge and before
the data are observed.
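
A minimal R sketch of the dichotomized Bayes factor calculation above (assuming
a 5\% significance level):

<< "bf-dichotomized", echo = TRUE, eval = FALSE >>=
## Bayes factor for a null result when the data are dichotomized:
## Pr(p > 0.05 | effect absent) / Pr(p > 0.05 | effect present)
BF01dichotomized <- function(power, level = 0.05) {
    (1 - level)/(1 - power)
}
BF01dichotomized(power = 0.9) # 9.5
BF01dichotomized(power = 0.5) # 1.9
@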

In practice, the observed data should not be dichotomized into positive or null
results, as this leads to a loss of information. Therefore, to compute the Bayes
factors for the RPCB null results, we used the observed effect estimates as the
data and assumed a normal sampling distribution for them, as in a meta-analysis.
The Bayes factors $\BF_{01}$ shown in Figure~\ref{fig:nullfindings} then
quantify the evidence for the null hypothesis of no effect
($H_{0} \colon \text{SMD} = 0$) against the alternative hypothesis that there is
an effect ($H_{1} \colon \text{SMD} \neq 0$) using a normal ``unit-information''
prior distribution \citep{Kass1995b} for the effect size under the alternative
$H_{1}$. There are several more advanced prior distributions that could be used
here, and they should ideally be specified for each effect individually based on
domain knowledge. The normal unit-information prior (with a standard deviation
of 2 for SMDs) is only a reasonable default choice, as it implies that small to
large effects are plausible under the alternative. We see that in most cases
there is no substantial evidence for either the absence or the presence of an
effect, as with the equivalence tests. The Bayes factors for the two previously
discussed examples from \citet{Goetz2011} and \citet{Dawson2011} are consistent
with our intuitions -- there is indeed some evidence for the absence of an
effect in \citet{Goetz2011}, while there is even slightly more evidence for the
presence than for the absence of an effect in \citet{Dawson2011}, though the
Bayes factor is very close to one due to the small sample sizes. With a lenient Bayes factor
threshold of $\BF_{01} > 3$ to define evidence for the absence of the effect,
this criterion is met in both the original and the replication study by only
\Sexpr{bfSuccesses} of the \Sexpr{ntotal} study pairs.
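
For reference, the unit-information Bayes factors in
Figure~\ref{fig:nullfindings} are computed as in the following sketch
(redisplayed with hypothetical numbers; the function is the same as the one
used for our analyses):

<< "bf-unit-information", echo = TRUE, eval = FALSE >>=
## Bayes factor BF01 for H0: SMD = 0 against H1: SMD ~ N(0, unitvar)
BF01 <- function(estimate, se, null = 0, unitvar = 4) {
    dnorm(x = estimate, mean = null, sd = se) /
        dnorm(x = estimate, mean = null, sd = sqrt(se^2 + unitvar))
}
BF01(estimate = 0.2, se = 0.4) # about 4.5, some evidence for H0
@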
The sensitivity of the Bayes factor to the choice of the prior standard
deviation may again be assessed visually, as shown in the bottom plot of
Figure~\ref{fig:sensitivity}. We see that the number of successful replications
generally increases with increasing prior standard deviation, as more diffuse
priors under the alternative make the Bayes factor favor the null hypothesis
more strongly. Very large prior standard deviations, however, correspond to
implausibly large effects under the alternative, so the resulting evidence for
the absence of an effect has to be interpreted with caution.

<< >>=
studyInteresting <- filter(rpcbNull, id == "(48, 2, 4)")
noInteresting <- studyInteresting$no
nrInteresting <- studyInteresting$nr
## write.csv(rpcbNull, "rpcb-Null.csv", row.names = FALSE)
@

Among the \Sexpr{ntotal} RPCB null results, there are three interesting cases
(the three effects from paper 48) where the Bayes factor is qualitatively
different from the equivalence test, revealing a fundamental difference between
the two approaches. The Bayes factor is concerned with testing whether the
effect is \emph{exactly zero}, whereas the equivalence test is concerned with
whether the effect is within an \emph{interval around zero}. Due to the very
large sample size in the original study ($n = \Sexpr{noInteresting}$) and the
replication ($n = \Sexpr{nrInteresting}$), the data are incompatible with an
exactly zero effect, but compatible with effects within the equivalence range.
Apart from this example, however, the approaches lead to the same qualitative
conclusion -- most RPCB null results are highly ambiguous.

\section{Conclusions}

We showed that in most of the RPCB studies with ``null results'', neither the
original nor the replication study provided conclusive evidence for the presence
or absence of an effect. It seems logically questionable to declare an
inconclusive replication of an inconclusive original study as a replication
success. While it is important to replicate original studies with null results,
our analysis highlights that they should be analyzed and interpreted
appropriately.
For both the equivalence testing and the Bayes factor approach, it is critical
that the parameters of the procedure (the equivalence margin and the prior
distribution) are specified independently of the data, ideally before the
studies are conducted. Typically, however, the original studies were designed to
find evidence for the presence of an effect, and the goal of replicating the
``null result'' was formulated only after failure to do so. It is therefore
important that margins and prior distributions are motivated from historical
data and/or field conventions, and that sensitivity analyses regarding their
choice are reported \citep{Campbell2021}.
While the equivalence test and the Bayes factor are two principled methods for
analyzing original and replication studies with null results, they are not the
only possible methods for doing so. For instance, the reverse-Bayes approach
from \citet{Micheloud2022}, specifically tailored to equivalence testing in the
replication setting, may lead to more appropriate inferences as it also takes
into account the compatibility of the effect estimates from original and
replication studies. In addition, there are various other Bayesian methods
which could potentially improve upon the considered Bayes factor approach, for
example, Bayes factors based on non-local priors \citep{Johnson2010} or on
interval null hypotheses \citep{Morey2011, Liao2020}, methods for equivalence
testing based on effect size posterior distributions \citep{Kruschke2018}, or
Bayesian procedures that incorporate utilities of decisions \citep{Lindley1998}.
Finally, the design of replication studies should align with the planned
analysis \citep{Anderson2017, Anderson2022, Micheloud2020, Pawel2022c}.
% The RPCB determined the sample size of their replication studies to achieve at
% least 80\% power for detecting the original effect size which does not seem to
% be aligned with their goal
If the goal of the study is to find evidence for the absence of an effect, the
replication sample size should also be determined so that the study has adequate
power to make conclusive inferences regarding the absence of the effect.
\section*{Acknowledgements}
We thank the contributors of the RPCB for their tremendous efforts and for
making their data publicly available. We thank Maya Mathur for helpful advice
with the data preparation. This work was supported by the Swiss National Science
Foundation (grant \href{https://data.snf.ch/grants/grant/189295}{\#189295}).
\section*{Conflict of interest}
We declare no conflict of interest.

\section*{Software and data}
The code and data to reproduce our analyses are openly available at
\url{https://gitlab.uzh.ch/samuel.pawel/rsAbsence}. A snapshot of the repository
at the time of writing is available at
\url{https://doi.org/10.5281/zenodo.XXXXXX}. We used the statistical programming
language R version \Sexpr{paste(version$major, version$minor, sep = ".")}
\citep{R} for analyses. The R packages \texttt{ggplot2} \citep{Wickham2016},
\texttt{dplyr} \citep{Wickham2022}, \texttt{knitr} \citep{Xie2022}, and
\texttt{reporttools} \citep{Rufibach2009} were used for plotting, data
preparation, dynamic reporting, and formatting, respectively. The data from the
RPCB were obtained by downloading the files from
\url{https://github.com/mayamathur/rpcb} (commit a1e0c63) and extracting the
relevant variables as indicated in the R script \texttt{preprocess-rpcb-data.R}
which is available in our git repository.% The effect estimates and standard
% errors on SMD scale provided in this data set differ in some cases from those in
% the data set available at \url{https://doi.org/10.17605/osf.io/e5nvr}, which is
% cited in \citet{Errington2021}. We used this particular version of the data set
% because it was recommended to us by the RPCB statistician (Maya Mathur) upon
% request.
% For the \citet{Dawson2011} example study and its replication \citep{Shan2017},
% the sample sizes $n = 3$ in the data set seem to correspond to the group sample
% sizes, see Figure 5A in the replication study
% (\url{https://doi.org/10.7554/eLife.25306.012}), which is why we report the
% total sample sizes of $n = 6$ in Figure~\ref{fig:2examples}.


\bibliography{bibliography}



<< "sessionInfo1", eval = Reproducibility, results = "asis" >>=
## print R sessionInfo to see system information and package versions
## used to compile the manuscript (set Reproducibility = FALSE, to not do that)
cat("\\newpage \\section*{Computational details}")
@

<< "sessionInfo2", echo = Reproducibility, results = Reproducibility >>=
cat(paste(Sys.time(), Sys.timezone(), "\n"))
sessionInfo()
@

\end{document}