From 6a7491cd476287a01b06612e4ac664ffffe8044e Mon Sep 17 00:00:00 2001
From: Rachel Heyard <rachel.heyard@uzh.ch>
Date: Sun, 18 Jun 2023 10:45:07 +0200
Subject: [PATCH] final bit of polishing

---
 paper/rsabsence.Rnw | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/paper/rsabsence.Rnw b/paper/rsabsence.Rnw
index 79cee75..d169de2 100755
--- a/paper/rsabsence.Rnw
+++ b/paper/rsabsence.Rnw
@@ -248,7 +248,7 @@ hypothesis testing --- that can address the limitations of the non-significance
 criterion. We use the null results replicated in the RPCB to illustrate the
 problems of the non-significance criterion and how they can be addressed. We
 conclude the paper with practical recommendations for analyzing replication
-studies of original null results, including R code for applying the proposed
+studies of original null results, including simple R code for applying the proposed
 methods.
 
 << "data" >>=
@@ -298,7 +298,7 @@ conflevel <- 0.95
 Figure~\ref{fig:2examples} shows effect estimates on standardized mean
 difference (SMD) scale with \Sexpr{round(100*conflevel, 2)}\% confidence
 intervals from two RPCB study pairs. In both study pairs, the original and
-replications studies are ``null results'' and therefore meet the
+replication studies are ``null results'' and therefore meet the
 non-significance criterion for replication success (the two-sided
 \textit{p}-values are greater than 0.05 in both the original and the
 replication study). However, intuition would suggest that the conclusions in the
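
A minimal sketch of the non-significance criterion referred to here (the two-sided \textit{p}-values below are hypothetical placeholders, not RPCB data): the criterion declares replication success whenever both studies have \textit{p}-values above 0.05.

<< "sketch-nonsignificance-criterion" >>=
## hypothetical two-sided p-values for two study pairs; the non-significance
## criterion counts a replication as successful when both exceed 0.05
pOriginal    <- c(0.48, 0.06)
pReplication <- c(0.27, 0.74)
(pOriginal > 0.05) & (pReplication > 0.05)
@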
@@ -598,7 +598,7 @@ mean difference effect estimates with \Sexpr{round(conflevel*100, 2)}\%
 confidence intervals for all 15 effects which were treated as null results by
 the RPCB.\footnote{There are four original studies with null effects for which
   two or three ``internal'' replication studies were conducted, leading in total
-  to 20 replications of null effects. As in the RPCB main analysis
+  to 20 replications of null effects. As done in the RPCB main analysis
   \citep{Errington2021}, we aggregated their SMD estimates into a single SMD
   estimate with fixed-effect meta-analysis and recomputed the replication
   \textit{p}-value based on a normal approximation. For the original studies and
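
The pooling described in the footnote can be sketched as follows; the SMD estimates and standard errors are hypothetical, and the calculation is ordinary inverse-variance (fixed-effect) meta-analysis followed by a two-sided \textit{p}-value from the normal approximation.

<< "sketch-fixed-effect-pooling" >>=
## hypothetical SMD estimates and standard errors from internal replications
smd <- c(0.10, -0.05, 0.20)
se  <- c(0.25,  0.30, 0.28)
w       <- 1 / se^2                     # inverse-variance weights
smdPool <- sum(w * smd) / sum(w)        # pooled SMD estimate
sePool  <- sqrt(1 / sum(w))             # pooled standard error
pPool   <- 2 * pnorm(abs(smdPool / sePool), lower.tail = FALSE)
c(estimate = smdPool, se = sePool, p = pPool)
@

The same pooled estimate could equally be obtained with metafor::rma(yi = smd, sei = se, method = "FE").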
@@ -714,7 +714,7 @@ much different from one indicates absence of evidence for either hypothesis
 % the alternative over the null $\BF_{10}$. These have to be either interpreted
 % in the opposite direction or reoriented by $\BF_{01} = 1/\BF_{10}$.}.
 A reasonable criterion for successful replication of a null result may hence be
-to require a Bayes factor larger than some level $\gamma > 1$ from both studies,
+to require both studies to report a Bayes factor larger than some level $\gamma > 1$,
 for example, $\gamma = 3$ or $\gamma = 10$, which are conventional levels for
 ``substantial'' and ``strong'' evidence, respectively \citep{Jeffreys1961}. In
 contrast to the non-significance criterion, this criterion provides a genuine
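
As an illustration of this criterion, the sketch below computes a Bayes factor $\BF_{01}$ contrasting $H_0\colon \theta = 0$ with a normal prior $\theta \sim \mathrm{N}(0, \tau^2)$ under $H_1$ from an effect estimate and its standard error; the estimates and the prior standard deviation are hypothetical placeholders rather than the prior used in our analysis.

<< "sketch-bf-criterion" >>=
## BF_01 for H0: theta = 0 versus H1: theta ~ N(0, tau^2), assuming the
## estimate est is normally distributed around theta with standard error se
BF01 <- function(est, se, tau = 1) {
  dnorm(est, mean = 0, sd = se) /
    dnorm(est, mean = 0, sd = sqrt(se^2 + tau^2))
}
gamma <- 3                                       # evidence level gamma
bfOriginal    <- BF01(est = 0.05, se = 0.20)     # hypothetical original study
bfReplication <- BF01(est = 0.02, se = 0.15)     # hypothetical replication
(bfOriginal > gamma) & (bfReplication > gamma)   # success at level gamma
@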
@@ -1099,7 +1099,7 @@ translate into margins of $\Delta = % \log(1.3)\sqrt{3}/\pi =
 \Sexpr{round(log(1.3)*sqrt(3)/pi, 2)}$ and $\Delta = % \log(1.25)\sqrt{3}/\pi =
 \Sexpr{round(log(1.25)*sqrt(3)/pi, 2)}$ on the SMD scale, respectively, using
 the $\text{SMD} = (\surd{3} / \pi) \log\text{OR}$ conversion \citep[p.
-233]{Cooper2019}. Similarly, for the Bayesian factor we specified a normal
+233]{Cooper2019}. Similarly, for the Bayes factor we specified a normal
 unit-information prior under the alternative while other normal priors with
 smaller/larger standard deviations could have been considered. Here, we
 therefore investigate the sensitivity of our conclusions with respect to these
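
For reference, the margin conversion quoted above can be reproduced directly in R; the helper name orToSMD is ours, while the formula is the one cited in the text.

<< "sketch-or-to-smd" >>=
## SMD = (sqrt(3) / pi) * log(OR), applied to the margins OR = 1.3 and OR = 1.25
orToSMD <- function(or) log(or) * sqrt(3) / pi
round(orToSMD(c(1.3, 1.25)), 2)
@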
-- 
GitLab