%\usepackage[nomarkers]{endfloat} %%%%%%%%%
\usepackage{calc}
\usepackage{color}
\usepackage{amsmath,amsthm,amssymb}
\usepackage{graphicx}
\usepackage{float}
% Create new "listing" float
\newfloat{listing}{tbhp}{lst}%[section]
\floatname{listing}{Listing}
\newcommand{\listoflistings}{\listof{listing}{List of Listings}}
\floatstyle{plaintop}
\restylefloat{listing}
%\usepackage{multind}
\usepackage{booktabs}
\usepackage{enumerate}
\usepackage{todonotes}
% \usepackage{uarial}
% \renewcommand{\familydefault}{\sfdefault}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% to change %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% \usepackage{lineno}
%% \linenumber
\renewcommand{\baselinestretch}{1} %% 2
\usepackage{color,xcolor}
\definecolor{link}{HTML}{004C80}
\usepackage[labelfont=bf]{caption}
\usepackage[english]{babel}
\usepackage[
pdftex,
plainpages=false,
pdfpagelabels,
pagebackref=true,
colorlinks=true,
citecolor=link,
linkcolor=link
]{hyperref}
\hypersetup{colorlinks,urlcolor=link}
\usepackage{array, lipsum}
% \usepackage{datetime}
\usepackage[margin=2cm,textwidth=19cm]{geometry}
\usepackage{float}
\newcommand{\eg}{{e.\,g.\,}}
\newcommand{\ie}{{i.\,e.\,}}
\newcommand{\pkg}[1]{\textit{#1}}
\newcommand{\N}{\mathcal{N}}
\setlength\parindent{0pt}
%\newcommand{\todo}[1]{\textcolor{red}{#1}}
\makeatletter
\DeclareRobustCommand*\textsubscript[1]{%
\@textsubscript{\selectfont#1}}
\def\@textsubscript#1{%
{\m@th\ensuremath{_{\mbox{\fontsize\sf@size\z@#1}}}}}
\makeatother
\begin{document}
\begin{center}
{\noindent \LARGE \bf Simulation protocol:\\[2mm]
Comparison of confidence intervals summarizing the\\[2mm]
uncertainty of the combined estimate of a meta-analysis
}\\
\bigskip
{\noindent \Large Leonhard Held, Felix Hofmann
}\end{center}
\bigskip
\vspace*{.5cm}
The present protocol is inspired by \citet{burt:etal:06} and \citet{morr:etal:19}.
The simulation is implemented in \texttt{simulate\_all.R}.
\tableofcontents
\newpage
\section{Aims and objectives}\label{ref:aims}
The aim of this simulation study is the comparison of confidence intervals
(CIs) summarizing the uncertainty of the combined estimate of a meta-analysis.
Specifically, we focus on CIs constructed using p-value functions that
implement the methods from \citet{edgington:72} and \citet{fisher:34}. The
underlying data sets are simulated as described in Section~\ref{sec:simproc}
and Section~\ref{sec:scenario}. The resulting intervals are then compared to CIs
constructed using the other methods listed in Section~\ref{sec:method} using the
measures defined in Section~\ref{sec:meas}.
\section{Simulation of the data sets} \label{sec:simproc}
\subsection{Allowance for failures}
We expect no failures, \ie, for all simulated data sets all types of CI methods
should lead to a valid CI, and all valid CIs should lead to valid CI criteria.
If a failure occurs, we stop the simulation and investigate its cause.
\subsection{Software to perform simulations}
The simulation study is performed using the statistical software R \citep{R}.
We save the output of \texttt{sessionInfo()} giving information on the used
version of R, packages, and platform with the simulation results.
We use the package \pkg{doRNG} with its default random number generator to
ensure that random numbers generated inside parallel for loops are independent
and reproducible.
\subsection{Scenarios to be investigated} \label{sec:scenario}
\begin{itemize}
\item Higgins' $I^2$ heterogeneity measure $\in \{0, 0.3, 0.6, 0.9\}$.
% \item We always use an additive heterogeneity model. \todo{Maybe remove this entirely?}
\item Number of studies summarized by the meta-analysis $k \in \{3, 5, 10, 20, 50\}$.
\item Publication bias $\in \{\text{'none'}, \text{'moderate'}, \text{'strong'}\}$
following the terminology of \citet{henm:copa:10}.
\item The average study effect $\theta \in \{0.2, 0.5\}$.
%, and we set it to $\theta = 0.2$ to
%obtain a similar scenario as used in \citet{henm:copa:10}.
\item The distribution from which the true study effects $\delta_i$ are drawn
is either Gaussian or Student-$t$ with 4 degrees of freedom. The latter still
has finite mean and variance, but leads to more 'outliers'.
\item The sample size $n_i$ of the $i$-th study (number of patients per study)
is $n_i = 50$ (small study) except for 0, 1, or 2 studies where
$n_i=500$ (large study).
Note that \citet{IntHoutIoannidis} use a similar setup.
\end{itemize}
\subsection{Simulation details}
% For the \textbf{Additive heterogeneity model without publication bias}, the
% simulation of one meta-analysis dataset is performed as follows:
The simulation of one meta-analysis data set is performed as follows:
\begin{enumerate}
\item Compute the within-study variance
\begin{equation} \label{eq:eps2}
\epsilon^2 = \frac{2}{k} \sum\limits_{i=1}^k \frac{1}{n_i}.
\end{equation}
\item Compute the between-study variance
\begin{equation}\label{eq:eq1}
\tau^2 = \epsilon^2 \frac{I^2}{1-I^2}.
\end{equation}
\item For a trial $i$ of the meta-analysis with $k$ trials, $i = 1, \dots, k$:
\begin{enumerate}
\item Simulate the true effect size using the Gaussian model:
$\delta_i \sim \N(\theta, \tau^2)$ or using a Student-$t$ distribution
with 4 degrees of freedom such that the samples have mean $\theta$ and
variance $\tau^2$.
\item Simulate the effect estimates of each trial
$y_i \sim \N(\delta_i, \frac{2}{n_i})$.
\item Simulate the standard errors of the trial outcomes:
$\text{se}_i \sim \sqrt{\frac{\chi^2(2n_i-2)}{(n_i-1)n_i}}$.
\end{enumerate}
\end{enumerate}
\paragraph{Note: The marginal variance}\mbox{}\\
The marginal variance of this simulation procedure is
$\tau^2 + 2/n_i$, so follows the additive heterogeneity model as intended.
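The steps above can be sketched as follows. The study itself is implemented in
R (\texttt{simulate\_all.R}), so this Python/NumPy version is purely
illustrative, and the function name \texttt{simulate\_meta} and its signature
are hypothetical.

```python
import numpy as np

def simulate_meta(k, I2, theta, n, dist="gaussian", rng=None):
    """Illustrative sketch: simulate one meta-analysis data set.

    n is an array of k per-study sample sizes; 'dist' selects the
    Gaussian or the rescaled t_4 model for the true effects delta_i.
    """
    rng = np.random.default_rng() if rng is None else rng
    eps2 = 2.0 / k * np.sum(1.0 / n)        # within-study variance eps^2
    tau2 = eps2 * I2 / (1.0 - I2)           # between-study variance tau^2
    if dist == "gaussian":
        delta = rng.normal(theta, np.sqrt(tau2), size=k)
    else:
        # t_4 rescaled to variance tau2, since Var(t_4) = 4 / (4 - 2) = 2
        delta = theta + np.sqrt(tau2 / 2.0) * rng.standard_t(4, size=k)
    y = rng.normal(delta, np.sqrt(2.0 / n))                  # effect estimates
    se = np.sqrt(rng.chisquare(2 * n - 2) / ((n - 1) * n))   # standard errors
    return y, se
```

The marginal variance of the simulated $y_i$ is $\tau^2 + 2/n_i$, matching
the additive heterogeneity model described above.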
% For the \textbf{Multiplicative model without publication bias}, the simulation
% of one meta-analysis dataset is performed as follows:
% \begin{enumerate}
% \item Compute the within-study variance
% $\epsilon^2 = \frac{2}{k} \sum\limits_{i=1}^k \frac{1}{n_i}$.
% \item Compute the multiplicative heterogeneity factor
% $\phi = \frac{1}{1-I^2}$. Compute the corresponding
% \begin{equation}\label{eq:eq2}
% \tau^2 = \epsilon^2 \, (\phi-1) .
% \end{equation}
% \item For a trial $i$ of the meta-analysis with $k$ trials, $i = 1, \dots, k$:
% \begin{enumerate}
% \item Simulate the true effect size using the Gaussian model:
% $\delta_i \sim \N(\theta, \tau^2)$ or using a Student-$t$ distribution
% such that the samples have mean $\theta$ and variance $\tau^2$.
% \item Simulate the effect estimates of each trial
% $y_i \sim \N(\delta_i, \frac{2}{n_i})$.
% \item Simulate the standard errors of the trial outcomes:
% $\text{se}_i \sim \sqrt{\frac{\chi^2(2n_i-2)}{(n_i-1)n_i}}$.
% \end{enumerate}
% \end{enumerate}
%
%
% \paragraph{Note: The marginal variance}\mbox{}\\
% The marginal variance of this simulation procedure is
% $\frac{2}{n} \, (\phi-1) + \frac{2}{n} = \frac{2\phi}{n} = \phi \, \epsilon^2$,
% so follows the multiplicative model as intended.
\paragraph{Note: Publication bias}\mbox{}\\
To simulate studies under \textbf{publication bias}, we follow the suggestion
of \citet{henm:copa:10} and accept each simulated study with probability
\begin{equation} \label{eq:pbias}
\exp(-4\, \Phi(-y_i / \text{se}_i)^\gamma ),
\end{equation}
where $\gamma = 3$ and $\gamma = 1.5$ correspond to \emph{moderate} and
\emph{strong} publication bias, respectively.
That is, accepted studies are kept, while for a rejected study we replace $y_i$
and $\text{se}_i$ with newly simulated values, which are then again accepted
with the probability given above. This procedure is repeated until the required
number of studies is obtained.
To obtain a similar scenario as in \citet{henm:copa:10} we set
$$
\theta / \sqrt{2/n_i} \overset{!}{=} 1 \Rightarrow \theta = \sqrt{2/n_i}.
$$
However, we assume that only small studies with $n_i = 50$ are subject to
publication bias. Thus, larger studies with $n_i = 500$ are always accepted.
As described in Section~\ref{sec:scenario}, we set $\theta \in \{0.2, 0.5\}$. See the R function \texttt{simREbias()}.
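The acceptance--rejection procedure above can be sketched in Python as follows
(\texttt{draw\_biased\_study} and \texttt{phi} are illustrative helper names,
not part of \texttt{simREbias()}):

```python
import numpy as np
from math import erf, exp, sqrt

def phi(x):
    """Standard normal CDF, written via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def draw_biased_study(draw_study, gamma, rng):
    """Redraw (y_i, se_i) until a study is accepted with probability
    exp(-4 * Phi(-y_i / se_i)^gamma); gamma = 3 corresponds to moderate
    and gamma = 1.5 to strong publication bias."""
    while True:
        y, se = draw_study()
        if rng.uniform() < exp(-4.0 * phi(-y / se) ** gamma):
            return y, se
```

Because the acceptance probability increases with $y_i/\text{se}_i$, studies
with large positive observed effects are over-represented among the accepted
studies, as intended.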
% The mean study effect $\theta$ and the sample size $n_i$ have an influence
% on the acceptance probability
% \paragraph{Note: Unbalanced sample sizes}\mbox{}\\
% To study the effect of unbalanced sample sizes $n_1, \ldots, n_k$ we consider
% the following setup:
% \begin{enumerate}
% \item Increase the sample size of \textbf{one} of the $k$ by a factor 10.
% \item Increase the sample size of \textbf{two} of the $k$ by a factor 10.
% \end{enumerate}
%% See the argument \texttt{large} of \texttt{simREbias()}.
\subsection{Simulation procedure}
For each scenario in Section~\ref{sec:scenario} we
\begin{enumerate}
\item compute the CIs listed in Section~\ref{sec:method} for each meta-analysis,
\item summarize the performance of the CIs by the criteria listed in
Section~\ref{sec:meas}.
\end{enumerate}
\section{Analysis of the confidence intervals}
This section gives an overview of the construction methods for CIs
that we consider in this simulation. Moreover, we explain which measures we
use to compare the different CIs with each other.
\subsection{Construction methods for confidence intervals} \label{sec:method}
For this project, we will calculate 95\% CIs according to the following methods.
\begin{enumerate}
\item Hartung-Knapp-Sidik-Jonkman (HK) \citep{IntHoutIoannidis}.
\item Random effects model.
\item Henmi and Copas (HC) \citep{henm:copa:10}.
\item Bayesian random effects meta-analysis (Bayesmeta) with half-normal prior
distribution with $\sigma = 0.3$ \citep{rov:20}.
\todo{Insert citation for Lilienthal et al. for $\sigma = 0.3$?}
\item Edgington's method \citep{edgington:72}.
\item Fisher's method \citep{fisher:34}.
\end{enumerate}
\subsection{Definition of the variance adjustments} \label{sec:varadj}
As we assume an additive heterogeneity model, we will calculate the confidence
intervals for methods \emph{Fisher}, \emph{Edgington}, and \emph{Random effects}
based on the following estimators for the between-study variance $\tau^2$. The
estimator thus acts as an additional scenario dimension that applies only to
the methods mentioned above.
\begin{enumerate}
\item No heterogeneity, \ie $\tau^2 = 0$.
\item DerSimonian-Laird \citep{ders:lair:86}.
\item Paule-Mandel.
\item REML.
\end{enumerate}
\todo{Add citation for these estimators?}
The estimates in the simulation are calculated using the
\texttt{metagen} function from the \texttt{R} package ``\emph{meta}''. The
study-specific standard errors are then adjusted such that
$\text{se}_{\text{adj}}(\hat{\theta}_i) = \sqrt{\text{se}(\hat{\theta}_i)^2 + \hat{\tau}^2}$.
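As an illustration of what such an estimator computes, the following Python
sketch implements the DerSimonian-Laird estimator and the additive adjustment
using the standard formulas; it is not the \emph{meta} package source code,
and the function names are hypothetical.

```python
import numpy as np

def tau2_dl(y, se):
    """DerSimonian-Laird estimate of the between-study variance tau^2
    (the textbook formula; method.tau = "DL" in metagen)."""
    w = 1.0 / se**2                          # inverse-variance weights
    ybar = np.sum(w * y) / np.sum(w)         # fixed-effect pooled estimate
    Q = np.sum(w * (y - ybar) ** 2)          # Cochran's Q statistic
    denom = np.sum(w) - np.sum(w**2) / np.sum(w)
    return max(0.0, (Q - (len(y) - 1)) / denom)

def adjusted_se(se, tau2):
    """Additive adjustment: se_adj(theta_i) = sqrt(se(theta_i)^2 + tau^2)."""
    return np.sqrt(se**2 + tau2)
```

For homogeneous studies ($Q \le k-1$) the estimate is truncated to zero,
which reproduces the "no heterogeneity" case $\tau^2 = 0$ above.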
% As stated in Subsection~\ref{sec:method}, the harmonic mean and $k$-trials
% methods can be extended such that heterogeneity between the individual studies
% is taken into account. In scenarios where the additive variance adjustment is
% used, we estimate the between study variance $\tau^2$ using the REML method
% implemented in the \texttt{metagen} \texttt{R}-package ``meta'' and adjust
% the study-specific standard errors such that
% $\text{se}_{\text{adj}}(\hat{\theta_i}) = \sqrt{\text{se}(\hat{\theta_i})^2 + \tau^2}$.
% In case of the multiplicative variance adjustment, we estimate the
% multiplicative parameter $\phi$ as described in \citet{mawd:etal:17} and adjust
% the study-specific standard errors such that
% $\text{se}_{\text{adj}}(\hat{\theta_i}) = \text{se}(\hat{\theta_i}) \cdot \sqrt{\phi}$.
\subsection{Measures considered} \label{sec:meas}
We assess the CIs using the following criteria:
\begin{enumerate}
\item CI coverage of combined effect, \ie, the proportion of intervals
containing the true effect. If the CI does not exist for a specific
simulated data set, we treat the coverage as missing (\texttt{NA}).
% coverage_true
% \item CI coverage of study effects, \ie, the proportion of intervals
% containing the true study-specific effects % coverage_effects
% \item CI coverage of all study effects, \ie, whether or not the CI covers
% all of the study effects %coverage_all
% \item CI coverage of at least one of the study effects, \ie, whether or not
% the CI covers at least one of the study effects % coverage_effects_min1
% \item Prediction Interval (PI) coverage, \ie, the proportion of intervals
% containing the treatment effect of a newly simulated study. The newly
% simulated study has $n = 50$ and is not subject to publication bias. All
% other simulation parameters stay the same as for the simulation of the
% original studies (only for Harmonic mean, $k$-trials, REML, and HK methods)
% % coverage_prediction
\item CI width. If there is more than one interval, the width is the sum of
the lengths of the individual intervals. If the interval does not exist for
a simulated data set, the width will be recorded as missing (\texttt{NA}).
%width
\item Interval score \citep{Gnei:Raft:07}. If the interval does not exist for
a simulated data set, the score will be recorded as missing (\texttt{NA}).
% score
\item Number of CIs (only for Fisher and Edgington methods). If the interval
does not exist for a simulated data set, the number of CIs will be recorded as
0. % n
\end{enumerate}
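For reference, the interval score of \citet{Gnei:Raft:07} for a central
$(1-\alpha)$ interval $[l, u]$ and realized value $x$ can be sketched as
follows; this is a single-interval sketch in Python, not the protocol's R
implementation.

```python
def interval_score(l, u, x, alpha=0.05):
    """Interval score for a central (1 - alpha) interval [l, u] and
    realized value x: the width plus a penalty of (2/alpha) times the
    amount by which x falls outside the interval; lower is better."""
    score = u - l
    if x < l:
        score += (2.0 / alpha) * (l - x)
    elif x > u:
        score += (2.0 / alpha) * (x - u)
    return score
```

The score rewards narrow intervals but penalizes non-coverage, so it trades
off the width and coverage criteria above in a single proper scoring rule.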
Furthermore, we calculate the following measure related to the point estimates:
\begin{enumerate}
\item Mean squared error (MSE).
\end{enumerate}
For the Edgington and Fisher methods, we also investigate the
distribution of the highest value of the $p$-value function between the lowest
and the highest treatment effect of the simulated studies. To do so,
we calculate the following measures:
\begin{itemize}
\item Minimum
\item First quartile
\item Mean
\item Median
\item Third quartile
\item Maximum
\end{itemize}
\vspace*{.5cm}
As both methods can result in more than one CI for a given meta-analysis,
we record the relative frequency of the number of intervals $m$ over the
10'000 iterations for each of the different scenarios mentioned in
Section~\ref{sec:scenario}. However, we truncate the distribution
by summarising all events where the number of intervals is $> 9$.
\section{
Estimates to be stored for each simulation and summary measures to
be calculated over all simulations
}
For each simulated meta-analysis we construct CIs according to all methods
(Section~\ref{sec:method}) and calculate all available assessments
(Section~\ref{sec:meas}) for the respective method. For assessments 1-3 in
Subsection~\ref{sec:meas} we only store the mean value of all the 10'000
iterations in a specific scenario. Possible missing values (\texttt{NA}) are
removed before calculating the mean value. Regarding the distribution of the
highest value of the $p$-value function, we store the summary measures mentioned
in the respective paragraph of Subsection~\ref{sec:meas}. We calculate the
relative frequencies of the number of intervals $m=1, 2, \ldots, 9, >9$ in each
confidence set over the 10'000 iterations of the same scenario.
\section{Presentation of the simulation results}
For each of the performance measures 1-3 in Subsection~\ref{sec:meas} as well as
the mean squared error (MSE) we construct plots with
\begin{itemize}
\item the number of studies $k$ on the $x$-axis
\item the performance measure on the $y$-axis
\item one connecting line and color for each value of $I^2$
\item one panel for each CI method
\end{itemize}
Regarding the distribution of the $p$-value function for the Edgington
and Fisher methods, we will create plots that contain
\begin{itemize}
\item the number of studies $k$ on the $x$-axis
\item the value of the summary statistic on the $y$-axis
\item one connecting line and color for each summary statistic
\item one panel for each CI method
\end{itemize}
The plots for the relative frequencies of the number of intervals have
\begin{itemize}
\item the category ($1$ to $9$ and $>9$) indicating the number of intervals
$m$ on the $x$-axis
\item the relative frequency on the $y$-axis
\item a bar for each category indicating the relative frequency for the
respective category