[Gsdesign-commits] r158 - pkg/tex/tmphelp/Rd

noreply at r-forge.r-project.org noreply at r-forge.r-project.org
Fri May 22 23:27:11 CEST 2009


Author: keaven
Date: 2009-05-22 23:27:11 +0200 (Fri, 22 May 2009)
New Revision: 158

Added:
   pkg/tex/tmphelp/Rd/Wang-Tsiatis-bounds.tex
   pkg/tex/tmphelp/Rd/binomial.tex
   pkg/tex/tmphelp/Rd/gsBoundCP.tex
   pkg/tex/tmphelp/Rd/gsCP.tex
   pkg/tex/tmphelp/Rd/gsDesign-package.tex
   pkg/tex/tmphelp/Rd/gsDesign.tex
   pkg/tex/tmphelp/Rd/gsProbability.tex
   pkg/tex/tmphelp/Rd/gsbound.tex
   pkg/tex/tmphelp/Rd/nSurvival.tex
   pkg/tex/tmphelp/Rd/normalGrid.tex
   pkg/tex/tmphelp/Rd/plot.gsDesign.tex
   pkg/tex/tmphelp/Rd/sfHSD.tex
   pkg/tex/tmphelp/Rd/sfLDPocock.tex
   pkg/tex/tmphelp/Rd/sfTDist.tex
   pkg/tex/tmphelp/Rd/sfexp.tex
   pkg/tex/tmphelp/Rd/sflogistic.tex
   pkg/tex/tmphelp/Rd/sfpoints.tex
   pkg/tex/tmphelp/Rd/sfpower.tex
   pkg/tex/tmphelp/Rd/spendingfunctions.tex
   pkg/tex/tmphelp/Rd/testutils.tex
Removed:
   pkg/tex/tmphelp/Rd/FarrMannSS.Rd
   pkg/tex/tmphelp/Rd/Wang-Tsiatis-bounds.Rd
   pkg/tex/tmphelp/Rd/binomial.Rd
   pkg/tex/tmphelp/Rd/checkScalar.Rd
   pkg/tex/tmphelp/Rd/gsBoundCP.Rd
   pkg/tex/tmphelp/Rd/gsCP.Rd
   pkg/tex/tmphelp/Rd/gsDesign-package.Rd
   pkg/tex/tmphelp/Rd/gsDesign.Rd
   pkg/tex/tmphelp/Rd/gsProbability.Rd
   pkg/tex/tmphelp/Rd/gsbound.Rd
   pkg/tex/tmphelp/Rd/nSurvival.Rd
   pkg/tex/tmphelp/Rd/normalGrid.Rd
   pkg/tex/tmphelp/Rd/plot.gsDesign.Rd
   pkg/tex/tmphelp/Rd/sfHSD.Rd
   pkg/tex/tmphelp/Rd/sfLDPocock.Rd
   pkg/tex/tmphelp/Rd/sfTDist.Rd
   pkg/tex/tmphelp/Rd/sfexp.Rd
   pkg/tex/tmphelp/Rd/sflogisitic.Rd
   pkg/tex/tmphelp/Rd/sflogistic.Rd
   pkg/tex/tmphelp/Rd/sfpoints.Rd
   pkg/tex/tmphelp/Rd/sfpower.Rd
   pkg/tex/tmphelp/Rd/spendingfunctions.Rd
Log:
v 2 manual completion

Deleted: pkg/tex/tmphelp/Rd/FarrMannSS.Rd
===================================================================
--- pkg/tex/tmphelp/Rd/FarrMannSS.Rd	2009-05-22 21:22:29 UTC (rev 157)
+++ pkg/tex/tmphelp/Rd/FarrMannSS.Rd	2009-05-22 21:27:11 UTC (rev 158)
@@ -1,207 +0,0 @@
-\name{nBinomial}
-\alias{testBinomial}
-\alias{ciBinomial}
-\alias{nBinomial}
-\alias{simBinomial}
-\title{3.2: Testing, Confidence Intervals and Sample Size for Comparing Two Binomial Rates}
-\description{Support is provided for sample size estimation, testing, confidence intervals and simulation for fixed sample size trials 
-(i.e., not group sequential or adaptive) with two arms and binary outcomes. 
-Both superiority and non-inferiority trials are considered.
-While all routines default to comparisons of risk-difference, 
-computations based on risk-ratio and odds-ratio are also provided. 
-
-\code{nBinomial()} computes sample size using the method of Farrington and Manning (1990) to derive sample size
-required to power a trial to test the difference between two binomial event rates. 
-The routine can be used for a test of superiority or non-inferiority.
-For a design that tests for superiority \code{nBinomial()} is consistent with the method of Fleiss, Tytun, and Ury 
-(but without the continuity correction) to test for differences between event rates.
-This routine is consistent with the Hmisc package routine \code{bsamsize} for superiority designs.
-Vector arguments allow computing sample sizes for multiple scenarios for comparative purposes.
-
-\code{testBinomial()} computes a Z- or Chi-square-statistic that compares two binomial event rates using 
-the method of Miettinen and Nurminen (1980). This can be used for superiority or non-inferiority testing.
-Vector arguments allow easy incorporation into simulation routines for fixed, group sequential and adaptive designs.
-
-\code{ciBinomial()} computes confidence intervals for 1) the difference between two rates, 2) the risk-ratio for two rates 
-or 3) the odds-ratio for two rates. This procedure provides inference that is consistent with \code{testBinomial} in that 
-the confidence intervals are produced by inverting the testing procedures in \code{testBinomial}.
-
-\code{simBinomial()} performs simulations to estimate the power for a Miettinen and Nurminen (1980) test
-comparing two binomial rates for superiority or non-inferiority. 
-As noted in documentation for \code{bpower.sim()}, by using \code{testBinomial()} you can see that the formulas 
-without any continuity correction are quite accurate. 
-In fact, Type I error for a continuity-corrected test is significantly lower (Gordon and Watson, 1996)
-than the nominal rate. 
-Thus, as a default no continuity corrections are performed.
-}
-
-\usage{
-nBinomial(p1, p2, alpha=.025, beta=0.1, delta0=0, ratio=1, sided=1, outtype=1,
-          scale="Difference") 
-testBinomial(x1, x2, n1, n2, delta0=0, chisq=0, adj=0,
-             scale="Difference", tol=.1e-10)
-ciBinomial(x1, x2, n1, n2, alpha=.05, adj=0, scale="Difference")
-simBinomial(p1, p2, n1, n2, delta0=0, nsim=10000, chisq=0, adj=0,
-            scale="Difference")
-}
-\arguments{
-For \code{simBinomial()} all arguments must have length 1.
-In general, arguments for \code{nBinomial()}, \code{testBinomial()} and \code{ciBinomial()} may be scalars or vectors, 
-in which case a vector of results is returned.
-There can be a mix of scalar and vector arguments. 
-All arguments specified using vectors must have the same length.  
-
-\item{p1}{event rate in group 1 under the alternative hypothesis}  
-\item{p2}{event rate in group 2 under the alternative hypothesis}  
-\item{alpha}{type I error; see \code{sided} below to distinguish between 1- and 2-sided tests}
-\item{beta}{type II error}
-\item{delta0}{A value of 0 (the default) always represents no difference between treatment groups under the null hypothesis.
-\code{delta0} is interpreted differently depending on the value of the parameter \code{scale}. 
-If \code{scale="Difference"} (the default), 
-\code{delta0} is the difference in event rates under the null hypothesis.
-If \code{scale="RR"}, 
-\code{delta0} is the relative risk of event rates under the null hypothesis minus 1 (\code{p2 / p1 - 1}).
-If \code{scale="LNOR"}, 
-\code{delta0} is the difference in natural logarithm of the odds-ratio under the null hypothesis
-\code{log(p2 / (1 - p2)) - log(p1 / (1 - p1))}.
-}
-\item{ratio}{sample size ratio for group 2 divided by group 1}
-\item{sided}{2 for 2-sided test, 1 for 1-sided test}
-\item{outtype}{1 (default) returns total sample size; 2 returns sample size for each group (\code{n1, n2}); 
-}
-\item{x1}{Number of "successes" in the control group}
-\item{x2}{Number of "successes" in the experimental group}
-\item{n1}{Number of observations in the control group}
-\item{n2}{Number of observations in the experimental group}
-\item{chisq}{An indicator (scalar or vector) of whether a Z- or chi-square statistic is computed.
-If 0 (default), the difference in event rates divided by its standard error under the null hypothesis is used. 
-Otherwise, a Miettinen and Nurminen chi-square statistic for a 2 x 2 table is used.}
-\item{adj}{With \code{adj=1}, the standard variance with a continuity correction is used for a Miettinen and Nurminen test statistic. 
-This includes a factor of n / (n - 1) where n is the total sample size. If \code{adj} is not 1, 
-this factor is not applied. The default is \code{adj=0} since nominal Type I error is generally conservative with \code{adj=1}
-(Gordon and Watson, 1996).}
-\item{scale}{"Difference", "RR", "OR"; see the \code{scale} parameter documentation above and Details. 
-This is a scalar argument.}
-\item{nsim}{The number of simulations to be performed in \code{simBinomial()}}
-\item{tol}{Default should probably be used; this is used to deal with a rounding issue in interim calculations}
-}
-
-\references{
-Farrington, CP and Manning, G (1990), Test statistics and sample size formulae for comparative binomial trials with null hypothesis
-of non-zero risk difference or non-unity relative risk. \emph{Statistics in Medicine};9:1447-1454.
-
-Fleiss, JL, Tytun, A and Ury, HK (1980), A simple approximation for calculating sample sizes for comparing independent proportions.
-\emph{Biometrics};36:343-346.
-
-Gordon, I and Watson, R (1996), The myth of continuity-corrected sample size formulae. \emph{Biometrics};52:71-76.
-
-Miettinen, O and Nurminen, M (1980), Comparative analysis of two rates. \emph{Statistics in Medicine};4:213-226.
-}
-
-\details{
-Testing is 2-sided when a Chi-square statistic is used and 1-sided when a Z-statistic is used.
-Thus, these 2 options will produce substantially different results, in general.
-For non-inferiority, 1-sided testing is appropriate.
-
-You may wish to round sample sizes up using \code{ceiling()}.
-
-Farrington and Manning (1990) begin with event rates \code{p1} and \code{p2} under the alternative hypothesis
-and a difference between these rates under the null hypothesis, \code{delta0}.
-From these values, actual rates under the null hypothesis are computed, which are labeled \code{p10} and \code{p20}
-when \code{outtype=3}.
-The rates \code{p1} and \code{p2} are used to compute a variance for a Z-test comparing rates under the alternative hypothesis,
-while \code{p10} and \code{p20} are used under the null hypothesis.  
-}
-
-\value{
-  \code{testBinomial} and \code{simBinomial} each return a vector of either Chi-square or Z test statistics. 
-  These may be compared to an appropriate cutoff point (e.g., \code{qnorm(.975)} or \code{qchisq(.95,1)}).
- 
-  With \code{outtype=2}, \code{nBinomial()} returns a list containing two vectors, \code{n1} and \code{n2}, with
-  sample sizes for groups 1 and 2, respectively.
-  With the default \code{outtype=1}, a vector of total sample sizes is returned.
-  With \code{outtype=3}, \code{nBinomial()} returns a list as follows:
-  \item{n}{A vector with total sample size required for each event rate comparison specified}
-  \item{n1}{A vector of sample sizes for group 1 for each event rate comparison specified}
-  \item{n2}{A vector of sample sizes for group 2 for each event rate comparison specified}
-  \item{sigma0}{A vector containing the variance of the treatment effect difference under the null hypothesis}
-  \item{sigma1}{A vector containing the variance of the treatment effect difference under the alternative hypothesis}
-  \item{p1}{As input}
-  \item{p2}{As input}
-  \item{pbar}{Returned only for superiority testing (\code{delta0=0}), the weighted average of \code{p1} and \code{p2} using weights
-  \code{n1} and \code{n2}}
-
-  When \code{delta0} is not 0, instead of \code{pbar}, the following 2 vectors are returned (see details):
-  \item{p10}{group 1 treatment effect used for null hypothesis}
-  \item{p20}{group 2 treatment effect used for null hypothesis}
-
-}
-
-\author{Keaven Anderson \email{keaven\_anderson at merck.com}}
-
-\examples{
-# Compute z-test statistic comparing 39/500 to 13/500
-# use continuity correction in variance
-x <- testBinomial(x1=39, x2=13, n1=500, n2=500, adj=1)
-x
-pnorm(x, lower.tail=FALSE)
-
-# Compute with unadjusted variance
-x0 <- testBinomial(x1=39, x2=23, n1=500, n2=500)
-x0
-pnorm(x0, lower.tail=FALSE)
-
-# Perform 500k simulations to test validity of the above asymptotic p-values 
-sum(as.numeric(x0) <= simBinomial(p1=.078, p2=.078, n1=500, n2=500, nsim=500000)) / 500000
-sum(as.numeric(x0) <= simBinomial(p1=.052, p2=.052, n1=500, n2=500, nsim=500000)) / 500000
-
-# Perform a non-inferiority test to see if p2=400 / 500 is within 5% of 
-# p1=410 / 500; use a z-statistic with unadjusted variance
-x <- testBinomial(x1=410, x2=400, n1=500, n2=500, delta0= -.05)
-x
-pnorm(x, lower.tail=FALSE)
-
-# since chi-square tests equivalence (a 2-sided test) rather than non-inferiority (a 1-sided test), 
-# the result is quite different
-pchisq(testBinomial(x1=410, x2=400, n1=500, n2=500, delta0= -.05, chisq=1, 
-                    adj=1), 1, lower.tail=FALSE)
-
-# now simulate the z-statistic without continuity-corrected variance
-sum(qnorm(.975) <= simBinomial(p1=.8, p2=.8, n1=500, n2=500, nsim=1000000)) / 1000000
-
-# compute a sample size to show non-inferiority with 5% margin, 90% power
-nBinomial(p1=.2, p2=.2, delta0=.05, alpha=.025, sided=1, beta=.1)
-
-# assuming a slight advantage in the experimental group lowers sample size requirement
-nBinomial(p1=.2, p2=.19, delta0=.05, alpha=.025, sided=1, beta=.1)
-
-# compute a sample size for comparing 15\% vs 10\% event rates with 1 to 2 randomization
-nBinomial(p1=.15, p2=.1, beta=.2, ratio=2, alpha=.05)
-
-# now look at total sample size using 1-1 randomization
-nBinomial(p1=.15, p2=.1, beta=.2, alpha=.05)
-
-# look at power plot under different control event rate and
-# relative risk reductions
-p1 <- seq(.075, .2, .000625)
-p2 <- p1 * 2 / 3
-y1 <- nBinomial(p1, p2, beta=.2, outtype=1, alpha=.025, sided=1)
-p2 <- p1 * .75
-y2 <- nBinomial(p1, p2, beta=.2, outtype=1, alpha=.025, sided=1)
-p2 <- p1 * .6
-y3 <- nBinomial(p1, p2, beta=.2, outtype=1, alpha=.025, sided=1)
-p2 <- p1 * .5
-y4 <- nBinomial(p1, p2, beta=.2, outtype=1, alpha=.025, sided=1)
-plot(p1, y1, type="l", ylab="Sample size", xlab="Control group event rate",
-     ylim=c(0, 6000), lwd=2)
-title(main="Binomial sample size computation for 80 pct power")
-lines(p1, y2, lty=2, lwd=2)
-lines(p1, y3, lty=3, lwd=2)
-lines(p1, y4, lty=4, lwd=2)
-legend(x=c(.15, .2),y=c(4500, 6000),lty=c(2, 1, 3, 4), lwd=2,
-       legend=c("25 pct reduction", "33 pct reduction", "40 pct reduction",
-                "50 pct reduction"))
-}
-
-\keyword{design}
-

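As a quick, minimal sketch of the Farrington-Manning machinery described in the FarrMannSS.Rd help text above (based only on the documented outtype=3 return values; exact output may differ by package version):

    # non-inferiority setting: p1 = p2 = 0.2 under the alternative,
    # 5% margin under the null (delta0 = 0.05)
    library(gsDesign)
    x <- nBinomial(p1=.2, p2=.2, delta0=.05, alpha=.025, beta=.1, outtype=3)
    x$p10    # group 1 event rate under the null hypothesis
    x$p20    # group 2 event rate under the null hypothesis
    x$sigma0 # variance of the treatment difference under the null
    x$sigma1 # variance of the treatment difference under the alternative
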
Deleted: pkg/tex/tmphelp/Rd/Wang-Tsiatis-bounds.Rd
===================================================================
--- pkg/tex/tmphelp/Rd/Wang-Tsiatis-bounds.Rd	2009-05-22 21:22:29 UTC (rev 157)
+++ pkg/tex/tmphelp/Rd/Wang-Tsiatis-bounds.Rd	2009-05-22 21:27:11 UTC (rev 158)
@@ -1,62 +0,0 @@
-\name{Wang-Tsiatis Bounds}
-\alias{O'Brien-Fleming Bounds}
-\alias{Wang-Tsiatis Bounds}
-\alias{Pocock Bounds}
-\docType{package}
-\title{5.0: Wang-Tsiatis Bounds}
-\description{\code{gsDesign} offers the option of using Wang-Tsiatis bounds as an alternative to 
-the spending function approach to group sequential design.
-Wang-Tsiatis bounds include both Pocock and O'Brien-Fleming designs.
-Wang-Tsiatis bounds are currently only available for 1-sided and symmetric 2-sided designs.
-Wang-Tsiatis bounds are typically used with equally spaced timing between analyses, but
-the option is available to use them with unequal spacing.
-}
-\details{
-Wang-Tsiatis bounds are defined as follows.
-Assume \eqn{k} analyses and let \eqn{Z_i} represent the upper bound and \eqn{t_i} the proportion of the
-total planned sample size for the \eqn{i}-th analysis, 
-\eqn{i=1,2,\ldots,k}.
-Let \eqn{\Delta}{Delta} be a real value. 
-Typically \eqn{\Delta}{Delta} will range from 0 (O'Brien-Fleming design) to 0.5 (Pocock design).
-The upper boundary is defined by 
-\deqn{ct_i^{\Delta-0.5}}
-for \eqn{i= 1,2,\ldots,k} where \eqn{c} depends on the other parameters.
-The parameter \eqn{\Delta}{Delta} is supplied to \code{gsDesign()} in the parameter \code{sfupar}.
-For O'Brien-Fleming and Pocock designs there is also a calling sequence that does not require a parameter.
-See examples.}
-
-\seealso{\code{\link{Spending Functions}, \link{gsDesign}}, \code{\link{gsProbability}}}
-\note{The manual is not linked to this help file, but is available in library/gsdesign/doc/manual.pdf
-in the directory where R is installed.}
-
-\author{Keaven Anderson \email{keaven\_anderson at merck.com}, Jennifer Sun, John Zhang}
-\references{
-Jennison C and Turnbull BW (2000), \emph{Group Sequential Methods with Applications to Clinical Trials}.
-Boca Raton: Chapman and Hall.
-} 
-\examples{
-# Pocock design
-gsDesign(test.type=2, sfu="Pocock")
-
-# alternate call to get Pocock design specified using Wang-Tsiatis option and Delta=0.5
-gsDesign(test.type=2, sfu="WT", sfupar=0.5)
-
-# this is how this might work with a spending function approach
-# Hwang-Shih-DeCani spending function with gamma=1 is often used to approximate Pocock design
-gsDesign(test.type=2, sfu=sfHSD, sfupar=1)
-
-# unequal spacing works,  but may not be desirable 
-gsDesign(test.type=2, sfu="Pocock", timing=c(.1, .2))
-
-# spending function approximation to Pocock with unequal spacing 
-# is quite different from this
-gsDesign(test.type=2, sfu=sfHSD, sfupar=1, timing=c(.1, .2))
-
-# One-sided O'Brien-Fleming design
-gsDesign(test.type=1, sfu="OF")
-
-# alternate call to get O'Brien-Fleming design specified using Wang-Tsiatis option and Delta=0
-gsDesign(test.type=1, sfu="WT", sfupar=0)
-}
-
-\keyword{design}

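A minimal sketch of the boundary shape defined in the Wang-Tsiatis-bounds.Rd details above; the constant c is treated here as an arbitrary placeholder, since in practice gsDesign() solves for it from alpha, the number of analyses and the test type:

    # Wang-Tsiatis upper bound shape c * t^(Delta - 0.5) at equally spaced analyses
    t <- (1:4) / 4                      # information fractions t_i
    cc <- 2                             # placeholder for the constant c (gsDesign() determines it)
    cbind(OF     = cc * t^(0   - 0.5),  # Delta = 0:   O'Brien-Fleming shape, decreasing bounds
          Pocock = cc * t^(0.5 - 0.5))  # Delta = 0.5: Pocock shape, constant bound
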
Added: pkg/tex/tmphelp/Rd/Wang-Tsiatis-bounds.tex
===================================================================
--- pkg/tex/tmphelp/Rd/Wang-Tsiatis-bounds.tex	                        (rev 0)
+++ pkg/tex/tmphelp/Rd/Wang-Tsiatis-bounds.tex	2009-05-22 21:27:11 UTC (rev 158)
@@ -0,0 +1,70 @@
+\HeaderA{Wang-Tsiatis Bounds}{5.0: Wang-Tsiatis Bounds}{Wang.Rdash.Tsiatis Bounds}
+\aliasA{O'Brien-Fleming Bounds}{Wang-Tsiatis Bounds}{O'Brien.Rdash.Fleming Bounds}
+\aliasA{Pocock Bounds}{Wang-Tsiatis Bounds}{Pocock Bounds}
+\keyword{design}{Wang-Tsiatis Bounds}
+\begin{Description}\relax
+\code{gsDesign} offers the option of using Wang-Tsiatis bounds as an alternative to 
+the spending function approach to group sequential design.
+Wang-Tsiatis bounds include both Pocock and O'Brien-Fleming designs.
+Wang-Tsiatis bounds are currently only available for 1-sided and symmetric 2-sided designs.
+Wang-Tsiatis bounds are typically used with equally spaced timing between analyses, but
+the option is available to use them with unequal spacing.
+\end{Description}
+\begin{Details}\relax
+Wang-Tsiatis bounds are defined as follows.
+Assume \eqn{k}{} analyses and let \eqn{Z_i}{} represent the upper bound and \eqn{t_i}{} the proportion of the
+total planned sample size for the \eqn{i}{}-th analysis, 
+\eqn{i=1,2,\ldots,k}{}.
+Let \eqn{\Delta}{Delta} be a real value. 
+Typically \eqn{\Delta}{Delta} will range from 0 (O'Brien-Fleming design) to 0.5 (Pocock design).
+The upper boundary is defined by 
+\deqn{ct_i^{\Delta-0.5}}{}
+for \eqn{i= 1,2,\ldots,k}{} where \eqn{c}{} depends on the other parameters.
+The parameter \eqn{\Delta}{Delta} is supplied to \code{gsDesign()} in the parameter \code{sfupar}.
+For O'Brien-Fleming and Pocock designs there is also a calling sequence that does not require a parameter.
+See examples.
+\end{Details}
+\begin{Note}\relax
+The manual is not linked to this help file, but is available in library/gsdesign/doc/gsDesignManual.pdf
+in the directory where R is installed.
+\end{Note}
+\begin{Author}\relax
+Keaven Anderson \email{keaven\_anderson at merck.com}
+\end{Author}
+\begin{References}\relax
+Jennison C and Turnbull BW (2000), \emph{Group Sequential Methods with Applications to Clinical Trials}.
+Boca Raton: Chapman and Hall.
+\end{References}
+\begin{SeeAlso}\relax
+\code{\LinkA{Spending function overview}{Spending function overview}, \LinkA{gsDesign}{gsDesign}}, \code{\LinkA{gsProbability}{gsProbability}}
+\end{SeeAlso}
+\begin{Examples}
+\begin{ExampleCode}
+# Pocock design
+gsDesign(test.type=2, sfu="Pocock")
+
+# alternate call to get Pocock design specified using 
+# Wang-Tsiatis option and Delta=0.5
+gsDesign(test.type=2, sfu="WT", sfupar=0.5)
+
+# this is how this might work with a spending function approach
+# Hwang-Shih-DeCani spending function with gamma=1 is often used 
+# to approximate Pocock design
+gsDesign(test.type=2, sfu=sfHSD, sfupar=1)
+
+# unequal spacing works,  but may not be desirable 
+gsDesign(test.type=2, sfu="Pocock", timing=c(.1, .2))
+
+# spending function approximation to Pocock with unequal spacing 
+# is quite different from this
+gsDesign(test.type=2, sfu=sfHSD, sfupar=1, timing=c(.1, .2))
+
+# One-sided O'Brien-Fleming design
+gsDesign(test.type=1, sfu="OF")
+
+# alternate call to get O'Brien-Fleming design specified using 
+# Wang-Tsiatis option and Delta=0
+gsDesign(test.type=1, sfu="WT", sfupar=0)
+\end{ExampleCode}
+\end{Examples}
+

Deleted: pkg/tex/tmphelp/Rd/binomial.Rd
===================================================================
--- pkg/tex/tmphelp/Rd/binomial.Rd	2009-05-22 21:22:29 UTC (rev 157)
+++ pkg/tex/tmphelp/Rd/binomial.Rd	2009-05-22 21:27:11 UTC (rev 158)
@@ -1,206 +0,0 @@
-\name{nBinomial}
-\alias{testBinomial}
-\alias{ciBinomial}
-\alias{nBinomial}
-\alias{simBinomial}
-\title{3.2: Testing, Confidence Intervals and Sample Size for Comparing Two Binomial Rates}
-\description{Support is provided for sample size estimation, testing, confidence intervals and simulation for fixed sample size trials 
-(that is, not group sequential or adaptive) with two arms and binary outcomes. 
-Both superiority and non-inferiority trials are considered.
-While all routines default to comparisons of risk-difference, 
-computations based on risk-ratio and odds-ratio are also provided. 
-
-\code{nBinomial()} computes sample size using the method of Farrington and 
-Manning (1990) to derive sample size required to power a trial to test the difference between two binomial event rates. 
-The routine can be used for a test of superiority or non-inferiority.
-For a design that tests for superiority \code{nBinomial()} is consistent with the method of Fleiss, Tytun, and Ury 
-(but without the continuity correction) to test for differences between event rates.
-This routine is consistent with the Hmisc package routine \code{bsamsize} for superiority designs.
-Vector arguments allow computing sample sizes for multiple scenarios for comparative purposes.
-
-\code{testBinomial()} computes a Z- or Chi-square-statistic that compares two binomial event rates using 
-the method of Miettinen and Nurminen (1980). This can be used for superiority or non-inferiority testing.
-Vector arguments allow easy incorporation into simulation routines for fixed, group sequential and adaptive designs.
-
-\code{ciBinomial()} computes confidence intervals for 1) the difference between two rates, 2) the risk-ratio for two rates 
-or 3) the odds-ratio for two rates. This procedure provides inference that is consistent with \code{testBinomial()} in that 
-the confidence intervals are produced by inverting the testing procedures in \code{testBinomial()}.
-
-\code{simBinomial()} performs simulations to estimate the power for a Miettinen and Nurminen (1980) test
-comparing two binomial rates for superiority or non-inferiority. 
-As noted in documentation for \code{bpower.sim()}, by using \code{testBinomial()} you can see that the formulas 
-without any continuity correction are quite accurate. 
-In fact, Type I error for a continuity-corrected test is significantly lower (Gordon and Watson, 1996)
-than the nominal rate. 
-Thus, as a default no continuity corrections are performed.
-}
-
-\usage{
-nBinomial(p1, p2, alpha=.025, beta=0.1, delta0=0, ratio=1, sided=1, outtype=1,
-          scale="Difference") 
-testBinomial(x1, x2, n1, n2, delta0=0, chisq=0, adj=0,
-             scale="Difference", tol=.1e-10)
-ciBinomial(x1, x2, n1, n2, alpha=.05, adj=0, scale="Difference")
-simBinomial(p1, p2, n1, n2, delta0=0, nsim=10000, chisq=0, adj=0,
-            scale="Difference")
-}
-\arguments{
-For \code{simBinomial()} all arguments must have length 1.
-In general, arguments for \code{nBinomial()}, \code{testBinomial()} and \code{ciBinomial()} may be scalars or vectors, 
-in which case a vector of results is returned.
-There can be a mix of scalar and vector arguments. 
-All arguments specified using vectors must have the same length.  
-
-\item{p1}{event rate in group 1 under the alternative hypothesis}  
-\item{p2}{event rate in group 2 under the alternative hypothesis}  
-\item{alpha}{type I error; see \code{sided} below to distinguish between 1- and 2-sided tests}
-\item{beta}{type II error}
-\item{delta0}{A value of 0 (the default) always represents no difference between treatment groups under the null hypothesis.
-\code{delta0} is interpreted differently depending on the value of the parameter \code{scale}. 
-If \code{scale="Difference"} (the default), 
-\code{delta0} is the difference in event rates under the null hypothesis.
-If \code{scale="RR"}, 
-\code{delta0} is the relative risk of event rates under the null hypothesis minus 1 (\code{p2 / p1 - 1}).
-If \code{scale="LNOR"}, 
-\code{delta0} is the difference in natural logarithm of the odds-ratio under the null hypothesis
-\code{log(p2 / (1 - p2)) - log(p1 / (1 - p1))}.
-}
-\item{ratio}{sample size ratio for group 2 divided by group 1}
-\item{sided}{2 for 2-sided test, 1 for 1-sided test}
-\item{outtype}{1 (default) returns total sample size; 2 returns sample size for each group (\code{n1, n2}); 
-}
-\item{x1}{Number of \dQuote{successes} in the control group}
-\item{x2}{Number of \dQuote{successes} in the experimental group}
-\item{n1}{Number of observations in the control group}
-\item{n2}{Number of observations in the experimental group}
-\item{chisq}{An indicator (scalar or vector) of whether a Z- or chi-square statistic is computed.
-If 0 (default), the difference in event rates divided by its standard error under the null hypothesis is used. 
-Otherwise, a Miettinen and Nurminen chi-square statistic for a 2 x 2 table is used.}
-\item{adj}{With \code{adj=1}, the standard variance with a continuity correction is used for a Miettinen and Nurminen test statistic. 
-This includes a factor of \eqn{n / (n - 1)} where \eqn{n} is the total sample size. If \code{adj} is not 1, 
-this factor is not applied. The default is \code{adj=0} since nominal Type I error is generally conservative with \code{adj=1}
-(Gordon and Watson, 1996).}
-\item{scale}{\dQuote{Difference}, \dQuote{RR}, \dQuote{OR}; see the \code{scale} parameter documentation above and Details. 
-This is a scalar argument.}
-\item{nsim}{The number of simulations to be performed in \code{simBinomial()}}
-\item{tol}{Default should probably be used; this is used to deal with a rounding issue in interim calculations}
-}
-
-\references{
-Farrington, CP and Manning, G (1990), Test statistics and sample size formulae for comparative binomial trials with null hypothesis
-of non-zero risk difference or non-unity relative risk. \emph{Statistics in Medicine};9:1447-1454.
-
-Fleiss, JL, Tytun, A and Ury, HK (1980), A simple approximation for calculating sample sizes for comparing independent proportions.
-\emph{Biometrics};36:343-346.
-
-Gordon, I and Watson, R (1996), The myth of continuity-corrected sample size formulae. \emph{Biometrics};52:71-76.
-
-Miettinen, O and Nurminen, M (1980), Comparative analysis of two rates. \emph{Statistics in Medicine};4:213-226.
-}
-
-\details{
-Testing is 2-sided when a Chi-square statistic is used and 1-sided when a Z-statistic is used.
-Thus, these 2 options will produce substantially different results, in general.
-For non-inferiority, 1-sided testing is appropriate.
-
-You may wish to round sample sizes up using \code{ceiling()}.
-
-Farrington and Manning (1990) begin with event rates \code{p1} and \code{p2} under the alternative hypothesis
-and a difference between these rates under the null hypothesis, \code{delta0}.
-From these values, actual rates under the null hypothesis are computed, which are labeled \code{p10} and \code{p20}
-when \code{outtype=3}.
-The rates \code{p1} and \code{p2} are used to compute a variance for a Z-test comparing rates under the alternative hypothesis,
-while \code{p10} and \code{p20} are used under the null hypothesis.  
-}
-
-\value{
-  \code{testBinomial()} and \code{simBinomial()} each return a vector of either Chi-square or Z test statistics. 
-  These may be compared to an appropriate cutoff point (e.g., \code{qnorm(.975)} or \code{qchisq(.95,1)}).
- 
-  With \code{outtype=2}, \code{nBinomial()} returns a list containing two vectors, \code{n1} and \code{n2}, with
-  sample sizes for groups 1 and 2, respectively.
-  With the default \code{outtype=1}, a vector of total sample sizes is returned.
-  With \code{outtype=3}, \code{nBinomial()} returns a list as follows:
-  \item{n}{A vector with total sample size required for each event rate comparison specified}
-  \item{n1}{A vector of sample sizes for group 1 for each event rate comparison specified}
-  \item{n2}{A vector of sample sizes for group 2 for each event rate comparison specified}
-  \item{sigma0}{A vector containing the variance of the treatment effect difference under the null hypothesis}
-  \item{sigma1}{A vector containing the variance of the treatment effect difference under the alternative hypothesis}
-  \item{p1}{As input}
-  \item{p2}{As input}
-  \item{pbar}{Returned only for superiority testing (\code{delta0=0}), the weighted average of \code{p1} and \code{p2} using weights
-  \code{n1} and \code{n2}}
-
-  When \code{delta0} is not 0, instead of \code{pbar}, the following 2 vectors are returned (see details):
-  \item{p10}{group 1 treatment effect used for null hypothesis}
-  \item{p20}{group 2 treatment effect used for null hypothesis}
-
-}
-
-\author{Keaven Anderson \email{keaven\_anderson at merck.com}}
-
-\examples{
-# Compute z-test statistic comparing 39/500 to 13/500
-# use continuity correction in variance
-x <- testBinomial(x1=39, x2=13, n1=500, n2=500, adj=1)
-x
-pnorm(x, lower.tail=FALSE)
-
-# Compute with unadjusted variance
-x0 <- testBinomial(x1=39, x2=23, n1=500, n2=500)
-x0
-pnorm(x0, lower.tail=FALSE)
-
-# Perform 500k simulations to test validity of the above asymptotic p-values 
-sum(as.numeric(x0) <= simBinomial(p1=.078, p2=.078, n1=500, n2=500, nsim=500000)) / 500000
-sum(as.numeric(x0) <= simBinomial(p1=.052, p2=.052, n1=500, n2=500, nsim=500000)) / 500000
-
-# Perform a non-inferiority test to see if p2=400 / 500 is within 5% of 
-# p1=410 / 500; use a z-statistic with unadjusted variance
-x <- testBinomial(x1=410, x2=400, n1=500, n2=500, delta0= -.05)
-x
-pnorm(x, lower.tail=FALSE)
-
-# since chi-square tests equivalence (a 2-sided test) rather than non-inferiority (a 1-sided test), 
-# the result is quite different
-pchisq(testBinomial(x1=410, x2=400, n1=500, n2=500, delta0= -.05, chisq=1, 
-                    adj=1), 1, lower.tail=FALSE)
-
-# now simulate the z-statistic without continuity-corrected variance
-sum(qnorm(.975) <= simBinomial(p1=.8, p2=.8, n1=500, n2=500, nsim=1000000)) / 1000000
-
-# compute a sample size to show non-inferiority with 5% margin, 90% power
-nBinomial(p1=.2, p2=.2, delta0=.05, alpha=.025, sided=1, beta=.1)
-
-# assuming a slight advantage in the experimental group lowers sample size requirement
-nBinomial(p1=.2, p2=.19, delta0=.05, alpha=.025, sided=1, beta=.1)
-
-# compute a sample size for comparing 15\% vs 10\% event rates with 1 to 2 randomization
-nBinomial(p1=.15, p2=.1, beta=.2, ratio=2, alpha=.05)
-
-# now look at total sample size using 1-1 randomization
-nBinomial(p1=.15, p2=.1, beta=.2, alpha=.05)
-
-# look at power plot under different control event rate and
-# relative risk reductions
-p1 <- seq(.075, .2, .000625)
-p2 <- p1 * 2 / 3
-y1 <- nBinomial(p1, p2, beta=.2, outtype=1, alpha=.025, sided=1)
-p2 <- p1 * .75
-y2 <- nBinomial(p1, p2, beta=.2, outtype=1, alpha=.025, sided=1)
-p2 <- p1 * .6
-y3 <- nBinomial(p1, p2, beta=.2, outtype=1, alpha=.025, sided=1)
-p2 <- p1 * .5
-y4 <- nBinomial(p1, p2, beta=.2, outtype=1, alpha=.025, sided=1)
-plot(p1, y1, type="l", ylab="Sample size", xlab="Control group event rate",
-     ylim=c(0, 6000), lwd=2)
-title(main="Binomial sample size computation for 80 pct power")
-lines(p1, y2, lty=2, lwd=2)
-lines(p1, y3, lty=3, lwd=2)
-lines(p1, y4, lty=4, lwd=2)
-legend(x=c(.15, .2),y=c(4500, 6000),lty=c(2, 1, 3, 4), lwd=2,
-       legend=c("25 pct reduction", "33 pct reduction", "40 pct reduction",
-                "50 pct reduction"))
-}
-
-\keyword{design}

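A short sketch of the test-inversion consistency between ciBinomial() and testBinomial() described in the help text above, reusing the 39/500 vs 13/500 example and the documented argument names (the shape of the returned interval is an assumption):

    # two-sided 95% confidence interval for the risk difference,
    # obtained by inverting the Miettinen and Nurminen test
    ciBinomial(x1=39, x2=13, n1=500, n2=500, alpha=.05, scale="Difference")
    # one-sided p-value from the corresponding Z statistic
    pnorm(testBinomial(x1=39, x2=13, n1=500, n2=500), lower.tail=FALSE)
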
Added: pkg/tex/tmphelp/Rd/binomial.tex
===================================================================
--- pkg/tex/tmphelp/Rd/binomial.tex	                        (rev 0)
+++ pkg/tex/tmphelp/Rd/binomial.tex	2009-05-22 21:27:11 UTC (rev 158)
@@ -0,0 +1,221 @@
+\HeaderA{Binomial}{3.2: Testing, Confidence Intervals and Sample Size for Comparing Two Binomial Rates}{Binomial}
+\aliasA{ciBinomial}{Binomial}{ciBinomial}
+\aliasA{nBinomial}{Binomial}{nBinomial}
+\aliasA{simBinomial}{Binomial}{simBinomial}
+\aliasA{testBinomial}{Binomial}{testBinomial}
+\keyword{design}{Binomial}
+\begin{Description}\relax
+Support is provided for sample size estimation, testing, confidence intervals and simulation for fixed sample size trials 
+(that is, not group sequential or adaptive) with two arms and binary outcomes. 
+Both superiority and non-inferiority trials are considered.
+While all routines default to comparisons of risk-difference, 
+options to base computations on risk-ratio and odds-ratio are also included. 
+
+\code{nBinomial()} computes sample size using the method of Farrington and 
+Manning (1990) to derive sample size required to power a trial to test the difference between two binomial event rates. 
+The routine can be used for a test of superiority or non-inferiority.
+For a design that tests for superiority \code{nBinomial()} is consistent with the method of Fleiss, Tytun, and Ury (but without the continuity correction) to test for differences between event rates.
+This routine is consistent with the Hmisc package routine \code{bsamsize} for superiority designs.
+Vector arguments allow computing sample sizes for multiple scenarios for comparative purposes.
+
+\code{testBinomial()} computes a Z- or Chi-square-statistic that compares two binomial event rates using 
+the method of Miettinen and Nurminen (1980). This can be used for superiority or non-inferiority testing.
+Vector arguments allow easy incorporation into simulation routines for fixed, group sequential and adaptive designs.
+
+\code{ciBinomial()} computes confidence intervals for 1) the difference between two rates, 2) the risk-ratio for two rates 
+or 3) the odds-ratio for two rates. This procedure provides inference that is consistent with \code{testBinomial()} in that 
+the confidence intervals are produced by inverting the testing procedures in \code{testBinomial()}.
+The Type I error \code{alpha} input to \code{ciBinomial} is always interpreted as 2-sided.
+
+\code{simBinomial()} performs simulations to estimate the power for a Miettinen and Nurminen (1980) test
+comparing two binomial rates for superiority or non-inferiority. 
+As noted in documentation for \code{bpower.sim()} in the Hmisc package, by using \code{testBinomial()} you can see that the formulas without any continuity correction are quite accurate. 
+In fact, Type I error for a continuity-corrected test is significantly lower (Gordon and Watson, 1996) than the nominal rate. 
+Thus, as a default no continuity corrections are performed.
+\end{Description}
+\begin{Usage}
+\begin{verbatim}
+nBinomial(p1, p2, alpha=.025, beta=0.1, delta0=0, ratio=1,
+          sided=1, outtype=1, scale="Difference") 
+testBinomial(x1, x2, n1, n2, delta0=0, chisq=0, adj=0,
+             scale="Difference", tol=.1e-10)
+ciBinomial(x1, x2, n1, n2, alpha=.05, adj=0, scale="Difference")
[TRUNCATED]

To get the complete diff run:
    svnlook diff /svnroot/gsdesign -r 158

