[Gsdesign-commits] r144 - pkg/man

noreply at r-forge.r-project.org
Mon May 4 22:33:26 CEST 2009


Author: keaven
Date: 2009-05-04 22:33:26 +0200 (Mon, 04 May 2009)
New Revision: 144

Added:
   pkg/man/sflogistic.Rd
Modified:
   pkg/man/Wang-Tsiatis-bounds.Rd
   pkg/man/binomial.Rd
   pkg/man/gsBoundCP.Rd
   pkg/man/gsCP.Rd
   pkg/man/gsDesign-package.Rd
   pkg/man/gsDesign.Rd
   pkg/man/gsProbability.Rd
   pkg/man/gsbound.Rd
   pkg/man/nSurvival.Rd
   pkg/man/normalGrid.Rd
   pkg/man/plot.gsDesign.Rd
   pkg/man/sfHSD.Rd
   pkg/man/sfLDPocock.Rd
   pkg/man/sfTDist.Rd
   pkg/man/sfexp.Rd
   pkg/man/sfpoints.Rd
   pkg/man/sfpower.Rd
   pkg/man/spendingfunctions.Rd
   pkg/man/testutils.Rd
Log:
Help file updates

Modified: pkg/man/Wang-Tsiatis-bounds.Rd
===================================================================
--- pkg/man/Wang-Tsiatis-bounds.Rd	2009-03-25 13:52:09 UTC (rev 143)
+++ pkg/man/Wang-Tsiatis-bounds.Rd	2009-05-04 20:33:26 UTC (rev 144)
@@ -25,11 +25,11 @@
 For O'Brien-Fleming and Pocock designs there is also a calling sequence that does not require a parameter.
 See examples.}
 
-\seealso{\code{\link{Spending Functions}, \link{gsDesign}}, \code{\link{gsProbability}}}
-\note{The manual is not linked to this help file, but is available in library/gsdesign/doc/manual.pdf
+\seealso{\code{\link{Spending function overview}, \link{gsDesign}}, \code{\link{gsProbability}}}
+\note{The manual is not linked to this help file, but is available in library/gsdesign/doc/gsDesignManual.pdf
 in the directory where R is installed.}
 
-\author{Keaven Anderson \email{keaven\_anderson at merck.com}, Jennifer Sun, John Zhang}
+\author{Keaven Anderson \email{keaven\_anderson at merck.com}}
 \references{
 Jennison C and Turnbull BW (2000), \emph{Group Sequential Methods with Applications to Clinical Trials}.
 Boca Raton: Chapman and Hall.
@@ -59,4 +59,4 @@
 gsDesign(test.type=1, sfu="WT", sfupar=0)
 }
 
-\keyword{design}
+\keyword{design}

Modified: pkg/man/binomial.Rd
===================================================================
--- pkg/man/binomial.Rd	2009-03-25 13:52:09 UTC (rev 143)
+++ pkg/man/binomial.Rd	2009-05-04 20:33:26 UTC (rev 144)
@@ -1,4 +1,4 @@
-\name{nBinomial}
+\name{Binomial}
 \alias{testBinomial}
 \alias{ciBinomial}
 \alias{nBinomial}
@@ -8,13 +8,12 @@
 (that is, not group sequential or adaptive) with two arms and binary outcomes. 
 Both superiority and non-inferiority trials are considered.
 While all routines default to comparisons of risk-difference, 
-computations based on risk-ratio and odds-ratio are also provided. 
+options to base computations on risk-ratio and odds-ratio are also included. 
 
 \code{nBinomial()} computes sample size using the method of Farrington and 
 Manning (1990) to derive sample size required to power a trial to test the difference between two binomial event rates. 
 The routine can be used for a test of superiority or non-inferiority.
-For a design that tests for superiority \code{nBinomial()} is consistent with the method of Fleiss, Tytun, and Ury 
-(but without the continuity correction) to test for differences between event rates.
+For a design that tests for superiority \code{nBinomial()} is consistent with the method of Fleiss, Tytun, and Ury (but without the continuity correction) to test for differences between event rates.
 This routine is consistent with the Hmisc package routine \code{bsamsize} for superiority designs.
 Vector arguments allow computing sample sizes for multiple scenarios for comparative purposes.
 
@@ -23,16 +22,14 @@
 Vector arguments allow easy incorporation into simulation routines for fixed, group sequential and adaptive designs.
 
 \code{ciBinomial()} computes confidence intervals for 1) the difference between two rates, 2) the risk-ratio for two rates 
-or the odds-ratio for two rates. This procedure provides inference that is consistent with \code{testBinomial()} in that 
+or 3) the odds-ratio for two rates. This procedure provides inference that is consistent with \code{testBinomial()} in that 
 the confidence intervals are produced by inverting the testing procedures in \code{testBinomial()}.
 The Type I error \code{alpha} input to \code{ciBinomial} is always interpreted as 2-sided.
 
 \code{simBinomial()} performs simulations to estimate the power for a Miettinen and Nurminen (1980) test
 comparing two binomial rates for superiority or non-inferiority. 
-As noted in documentation for \code{bpower.sim()}, by using \code{testBinomial()} you can see that the formulas 
-without any continuity correction are quite accurate. 
-In fact, Type I error for a continuity-corrected test is significantly lower (Gordon and Watson, 1996)
-than the nominal rate. 
+As noted in documentation for \code{bpower.sim()} in the HMisc package, by using \code{testBinomial()} you can see that the formulas without any continuity correction are quite accurate. 
+In fact, Type I error for a continuity-corrected test is significantly lower (Gordon and Watson, 1996) than the nominal rate. 
 Thus, as a default no continuity corrections are performed.
 }
 
@@ -49,8 +46,7 @@
 For \code{simBinomial()} and \code{ciBinomial()} all arguments must have length 1.
 For \code{testBinomial()}, \code{x1, x2, n1, n2, delta0, chisq,} and \code{adj} may be vectors.
 For \code{nBinomial()}, \code{p1, p2, beta, delta0} and \code{ratio} may be vectors.
-When one or more arguments for \code{nBinomial()} or \code{testBinomial()} is a vector, 
-the routines return a vector of sample sizes and powers, respectively.
+For \code{nBinomial()} or \code{testBinomial()}, when one or more arguments is a vector, the routines return a vector of sample sizes and powers, respectively.
 Where vector arguments are allowed, there may be a mix of scalar and vector arguments. 
 All arguments specified using vectors must have the same length.  
 
@@ -60,17 +56,14 @@
 \item{beta}{type II error}
 \item{delta0}{A value of 0 (the default) always represents no difference between treatment groups under the null hypothesis.
 \code{delta0} is interpreted differently depending on the value of the parameter \code{scale}. 
-If \code{scale="Difference"} (the default), 
-\code{delta0} is the difference in event rates under the null hypothesis (p10 - p20).
-If \code{scale="RR"}, 
-\code{delta0} is the logarithm of the relative risk of event rates (p10 / p20) under the null hypothesis.
-If \code{scale="LNOR"}, 
-\code{delta0} is the difference in natural logarithm of the odds-ratio under the null hypothesis
+If \code{scale="Difference"} (the default), \code{delta0} is the difference in event rates under the null hypothesis (p10 - p20).
+If \code{scale="RR"}, \code{delta0} is the logarithm of the relative risk of event rates (p10 / p20) under the null hypothesis.
+If \code{scale="LNOR"}, \code{delta0} is the difference in natural logarithm of the odds-ratio under the null hypothesis
 \code{log(p10 / (1 - p10)) - log(p20 / (1 - p20))}.
 }
 \item{ratio}{sample size ratio for group 2 divided by group 1}
 \item{sided}{2 for 2-sided test, 1 for 1-sided test}
-\item{outtype}{1 (default) returns total sample size; 2 returns sample size for each group (\code{n1, n2});
+\item{outtype}{\code{nBinomial} only; 1 (default) returns total sample size; 2 returns sample size for each group (\code{n1, n2});
 3 and \code{delta0=0} returns a list with total sample size (\code{n}), sample size in each group (\code{n1, n2}),
 null and alternate hypothesis variance (\code{sigma0, sigma1}), input event rates (\code{p1, p2}) and null hypothesis event
 rates (\code{p10, p20}). 
@@ -80,7 +73,7 @@
 \item{n1}{Number of observations in the control group}
 \item{n2}{Number of observations in the experimental group}
 \item{chisq}{An indicator of whether or not a chi-square (as opposed to Z) statistic is to be computed.
-If 0 (default), the difference in event rates divided by its standard error under the null hypothesis is used. 
+If \code{chisq=0} (default), the difference in event rates divided by its standard error under the null hypothesis is used. 
 Otherwise, a Miettinen and Nurminen chi-square statistic for a 2 x 2 table is used.}
 \item{adj}{With \code{adj=1}, the standard variance with a continuity correction is used for a Miettinen and Nurminen test statistic.
 This includes a factor of \eqn{n / (n - 1)} where \eqn{n} is the total sample size. If \code{adj} is not 1, 
@@ -94,7 +87,7 @@
 
 \references{
 Farrington, CP and Manning, G (1990), Test statistics and sample size formulae for comparative binomial trials with null hypothesis
-of non-zero risk difference or non-unity relative risk. \emph{Statistics in Medicine};9:1447-1454.
+of non-zero risk difference or non-unity relative risk. \emph{Statistics in Medicine}; 9: 1447-1454.
 
 Fleiss, JL, Tytun, A and Ury, HK (1980), A simple approximation for calculating sample sizes for comparing independent proportions.
 \emph{Biometrics};36:343-346.
@@ -113,10 +106,9 @@
 
 Farrington and Manning (1990) begin with event rates \code{p1} and \code{p2} under the alternative hypothesis
 and a difference between these rates under the null hypothesis, \code{delta0}.
-From these values, actual rates under the null hypothesis are computed, which are labeled \code{p10} and \code{p20}
-when \code{outtype=3}.
+From these values, actual rates under the null hypothesis are computed, which are labeled \code{p10} and \code{p20} when \code{outtype=3}.
 The rates \code{p1} and \code{p2} are used to compute a variance for a Z-test comparing rates under the alternative hypothesis,
-which \code{p10} and \code{p20} are used under the null hypothesis.
+while \code{p10} and \code{p20} are used under the null hypothesis.
 
 Sample size with \code{scale="Difference"} produces an error if \code{p1-p2=delta0}. 
 Normally, the alternative hypothesis under consideration would be \code{p1-p2-delta0>0}.
@@ -162,8 +154,10 @@
 pnorm(x0, lower.tail=FALSE)
 
 # Perform 500k simulations to test validity of the above asymptotic p-values 
-sum(as.real(x0) <= simBinomial(p1=.078, p2=.078, n1=500, n2=500, nsim=500000)) / 500000
-sum(as.real(x0) <= simBinomial(p1=.052, p2=.052, n1=500, n2=500, nsim=500000)) / 500000
+sum(as.real(x0) <= 
+    simBinomial(p1=.078, p2=.078, n1=500, n2=500, nsim=500000)) / 500000
+sum(as.real(x0) <= 
+    simBinomial(p1=.052, p2=.052, n1=500, n2=500, nsim=500000)) / 500000
 
 # Perform a non-inferiority test to see if p2=400 / 500 is within 5% of 
 # p1=410 / 500; use a z-statistic with unadjusted variance
@@ -177,7 +171,8 @@
                     adj=1), 1, lower.tail=FALSE)
 
 # now simulate the z-statistic without continuity-corrected variance
-sum(qnorm(.975) <= simBinomial(p1=.8, p2=.8, n1=500, n2=500, nsim=1000000)) / 1000000
+sum(qnorm(.975) <= 
+    simBinomial(p1=.8, p2=.8, n1=500, n2=500, nsim=1000000)) / 1000000
 
 # compute a sample size to show non-inferiority with 5% margin, 90% power
 nBinomial(p1=.2, p2=.2, delta0=.05, alpha=.025, sided=1, beta=.1)
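
[Editor's sketch, not part of the commit: the calling patterns documented in the hunks above can be exercised as follows. This assumes the gsDesign package at this revision is installed; argument names follow the \arguments section in binomial.Rd.]

```r
# Sketch only: assumes library(gsDesign) from this revision is available
library(gsDesign)

# Superiority design: total sample size to detect p1=.2 vs p2=.1,
# 2-sided alpha=.05, 90% power (beta=.1)
nBinomial(p1=.2, p2=.1, alpha=.05, beta=.1)

# outtype=2 returns the per-group sample sizes n1, n2 instead
nBinomial(p1=.2, p2=.1, alpha=.05, beta=.1, outtype=2)

# vector arguments: sample sizes for several control rates at once
nBinomial(p1=c(.15, .2, .25), p2=.1, alpha=.05, beta=.1)
```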

Modified: pkg/man/gsBoundCP.Rd
===================================================================
--- pkg/man/gsBoundCP.Rd	2009-03-25 13:52:09 UTC (rev 143)
+++ pkg/man/gsBoundCP.Rd	2009-05-04 20:33:26 UTC (rev 144)
@@ -2,10 +2,8 @@
 \alias{gsBoundCP}
 \title{2.5: Conditional Power at Interim Boundaries}
 \description{
-\code{gsBoundCP()} computes the total probability of crossing future upper bounds
-given an interim test statistic at an interim bound.
-For each interim boundary
-assumes an interim test statistic at the boundary and
+\code{gsBoundCP()} computes the total probability of crossing future upper bounds given an interim test statistic at an interim bound.
+For each interim boundary, \code{gsBoundCP()} assumes an interim test statistic at the boundary and
 computes the probability of crossing any of the later upper boundaries.
 }
 
@@ -14,11 +12,8 @@
 }
 \arguments{
 	\item{x}{An object of type \code{gsDesign} or \code{gsProbability}}
-	\item{theta}{if \code{"thetahat"} and \code{class(x)!="gsDesign"}), conditional power computations for
-	each boundary value are computed using estimated treatment effect assuming a test statistic at that boundary
-	(\code{zi/sqrt(x$n.I[i]} at analysis \code{i}, interim test statistic \code{zi} and interim 
-	sample size/statistical information of \code{x$n.I[i]}). Otherwise, conditional power is computed
-	assuming the input scalar value \code{theta}.
+	\item{theta}{if \code{"thetahat"} and \code{class(x)!="gsDesign"}, conditional power computations for each boundary value are computed using estimated treatment effect assuming a test statistic at that boundary
+	(\code{zi/sqrt(x$n.I[i])} at analysis \code{i}, interim test statistic \code{zi} and interim sample size/statistical information of \code{x$n.I[i]}). Otherwise, conditional power is computed assuming the input scalar value \code{theta}.
 	}
 	\item{r}{Integer value controlling grid for numerical integration as in Jennison and Turnbull (2000); 
 	default is 18, range is 1 to 80. 
@@ -33,13 +28,14 @@
 given interim test statistics at each upper bound.}
 }
 
+
 \details{
-See Statistical Methods section of manual for further clarification. See also Muller and Schaffer (2001) for background theory.
+See Conditional power section of manual for further clarification. See also Muller and Schaffer (2001) for background theory.
 }
-\note{The manual is not linked to this help file, but is available in library/gsdesign/doc/manual.pdf
+\note{The manual is not linked to this help file, but is available in library/gsdesign/doc/gsDesignManual.pdf
 in the directory where R is installed.}
 
-\author{Keaven Anderson \email{keaven\_anderson at merck.com}, Jennifer Sun, John Zhang}
+\author{Keaven Anderson \email{keaven\_anderson at merck.com}}
 \references{
 Jennison C and Turnbull BW (2000), \emph{Group Sequential Methods with Applications to Clinical Trials}.
 Boca Raton: Chapman and Hall.

Modified: pkg/man/gsCP.Rd
===================================================================
--- pkg/man/gsCP.Rd	2009-03-25 13:52:09 UTC (rev 143)
+++ pkg/man/gsCP.Rd	2009-05-04 20:33:26 UTC (rev 144)
@@ -15,7 +15,7 @@
 	an estimated value of \eqn{\theta}{theta} based on the interim test statistic (\code{zi/sqrt(x$n.I[i])}) as well as at \code{x$theta}
 	is computed.}
 	\item{i}{analysis at which interim z-value is given}
-	\item{zi}{interim z-value at analysis i}
+	\item{zi}{interim z-value at analysis i (scalar)}
 	\item{r}{Integer value controlling grid for numerical integration as in Jennison and Turnbull (2000); 
 	default is 18, range is 1 to 80. 
 	Larger values provide larger number of grid points and greater accuracy.
@@ -32,12 +32,11 @@
 
 
 \details{
-See Statistical Methods section of manual for further clarification. See also Muller and Schaffer (2001) for background theory.
+See Conditional power section of manual for further clarification. See also Muller and Schaffer (2001) for background theory.
 }
-\note{The manual is not linked to this help file, but is available in library/gsdesign/doc/manual.pdf
-in the directory where R is installed.}
+\note{The manual is not linked to this help file, but is available in library/gsdesign/doc/gsDesignManual.pdf in the directory where R is installed.}
 
-\author{Keaven Anderson \email{keaven\_anderson at merck.com}, Jennifer Sun, John Zhang}
+\author{Keaven Anderson \email{keaven\_anderson at merck.com}}
 \references{
 Jennison C and Turnbull BW (2000), \emph{Group Sequential Methods with Applications to Clinical Trials}.
 Boca Raton: Chapman and Hall.

Modified: pkg/man/gsDesign-package.Rd
===================================================================
--- pkg/man/gsDesign-package.Rd	2009-03-25 13:52:09 UTC (rev 143)
+++ pkg/man/gsDesign-package.Rd	2009-05-04 20:33:26 UTC (rev 144)
@@ -1,16 +1,16 @@
-\name{gsDesign-package}
-\alias{gsDesign-package}
+\name{gsDesign package overview}
+\alias{gsDesign package overview}
 \docType{package}
 \title{1.0 Group Sequential Design}
 \description{
-gsDesign is a package that derives group sequential designs.
+gsDesign is a package for deriving and describing group sequential designs.
 The package allows particular flexibility for designs with alpha- and beta-spending.
 Many plots are available for describing design properties.
 }
 \details{
 \tabular{ll}{
 Package: \tab gsDesign\cr
-Version: \tab 2.0\cr
+Version: \tab 2\cr
 License: \tab GPL (version 2 or later)\cr
 }
 
@@ -25,11 +25,11 @@
 MandNtest               3.2: Two-sample binomial sample size
 Survival sample size    3.3: Time-to-event sample size calculation
                         (Lachin-Foulkes)
-Spending Functions      4.0: Spending functions
+Spending function overview      4.0: Spending functions
 sfHSD                   4.1: Hwang-Shih-DeCani Spending Function
 sfPower                 4.2: Kim-DeMets (power) Spending Function
 sfExponential           4.3: Exponential Spending Function
-sfLDPocock              4.4: Lan-DeMets Spending Functions
+sfLDPocock              4.4: Lan-DeMets Spending Functions
 sfPoints                4.5: Pointwise Spending Function
 sfLogistic              4.6: 2-parameter Spending Function Families
 sfTDist                 4.7: t-distribution Spending Function
@@ -39,7 +39,7 @@
 While there is a strong focus on designs using \eqn{\alpha}{alpha}- and \eqn{\beta}{beta}-spending functions, Wang-Tsiatis designs, 
 including O'Brien-Fleming and Pocock designs, are also available.
 The ability to design with non-binding futility rules 
-controls Type I error in a manner acceptable to regulatory authorities when futility bounds are employed. 
+allows control of Type I error in a manner acceptable to regulatory authorities when futility bounds are employed. 
 
 The routines are designed to provide simple access to commonly used designs
 using default arguments. 

Modified: pkg/man/gsDesign.Rd
===================================================================
--- pkg/man/gsDesign.Rd	2009-03-25 13:52:09 UTC (rev 143)
+++ pkg/man/gsDesign.Rd	2009-05-04 20:33:26 UTC (rev 144)
@@ -9,6 +9,7 @@
          sfl=sfHSD, sflpar=-2, tol=0.000001, r=18, n.I = 0, maxn.IPlan = 0) 
 
 print.gsDesign(x,...)}
+
 \arguments{
 	\item{k}{Number of analyses planned, including interim and final.}
 	\item{test.type}{\code{1=}one-sided \cr
@@ -29,24 +30,19 @@
 	\item{timing}{Sets relative timing of interim analyses. Default of 1 produces equally spaced analyses. 
 	Otherwise, this is a vector of length \code{k} or \code{k-1}.
 	The values should satisfy \code{0 < timing[1] < timing[2] < ... < timing[k-1]< timing[k]=1}.}
-	\item{sfu}{A spending function or a character string indicating a boundary type 
-	(that is, \dQuote{WT} for Wang-Tsiatis bounds, \dQuote{OF} for O'Brien-Fleming bounds and \dQuote{Pocock} for Pocock bounds). 
+	\item{sfu}{A spending function or a character string indicating a boundary type (that is, \dQuote{WT} for Wang-Tsiatis bounds, \dQuote{OF} for O'Brien-Fleming bounds and \dQuote{Pocock} for Pocock bounds). 
 	For one-sided and symmetric two-sided testing (\code{test.type=1, 2}), 
 	\code{sfu} is used to completely specify spending. 
 	The default value is \code{sfHSD} which is a Hwang-Shih-DeCani spending function.
-	See details, \link{Spending Functions}, manual and examples.}
-	\item{sfupar}{Real value, default is \eqn{-4} which is an O'Brien-Fleming-like conservative bound when used
-	with the default Hwang-Shih-DeCani spending function. This is a real-vector for many spending functions.
-	The parameter \code{sfupar} specifies any parameters needed for the spending function specified by sfu. 
-	\code{sfupar} will be ignored for spending functions (\code{sfLDOF}, \code{sfLDPocock}) 
+	See details, \link{Spending function overview}, manual and examples.}
+	\item{sfupar}{Real value, default is \eqn{-4} which is an O'Brien-Fleming-like conservative bound when used with the default Hwang-Shih-DeCani spending function. This is a real-vector for many spending functions.
+	The parameter \code{sfupar} specifies any parameters needed for the spending function specified by \code{sfu}; this will be ignored for spending functions (\code{sfLDOF}, \code{sfLDPocock}) 
 	or bound types (\dQuote{OF}, \dQuote{Pocock}) that do not require parameters.}
-	\item{sfl}{Specifies the spending function for lower boundary crossing probabilities when
-	asymmetric, two-sided testing is performed (\code{test.type = 3}, 
+	\item{sfl}{Specifies the spending function for lower boundary crossing probabilities when asymmetric, two-sided testing is performed (\code{test.type = 3}, 
 \code{4}, \code{5}, or \code{6}). 
 	Unlike the upper bound, only spending functions are used to specify the lower bound.
 	The default value is \code{sfHSD} which is a Hwang-Shih-DeCani spending function.
-	The parameter \code{sfl} is ignored for one-sided testing (
-	\code{test.type=1}) or symmetric 2-sided testing (\code{test.type=2}). 
+	The parameter \code{sfl} is ignored for one-sided testing (\code{test.type=1}) or symmetric 2-sided testing (\code{test.type=2}). 
 	See details, spending functions, manual and examples.}
 	\item{sflpar}{Real value, default is \eqn{-2}, which, with the default Hwang-Shih-DeCani spending function, 
 	specifies a less conservative spending rate than the default for the upper bound.}
@@ -74,27 +70,24 @@
 unless it was input as 0; in that case, value will be computed to give desired power for fixed design with input
 sample size \code{n.fix}.}
 \item{n.fix}{Sample size required to obtain desired power when effect size is \code{delta}.}
-\item{timing}{As input.}
+\item{timing}{A vector of length \code{k} containing the portion of the total planned information/sample size at each analysis.}
 \item{tol}{As input.}
 \item{r}{As input.}
 \item{upper}{Upper bound spending function, boundary and boundary crossing probabilities under the NULL and
-alternate hypotheses. See \link{Spending Functions} and manual for further details.}
-\item{lower}{Lower bound spending function, boundary and boundary crossing probability probabilities at each analysis.
+alternate hypotheses. See \link{Spending function overview} and manual for further details.}
+\item{lower}{Lower bound spending function, boundary and boundary crossing probabilities at each analysis.
 Lower spending is under alternative hypothesis (beta spending) for \code{test.type=3} or \code{4}. 
 For \code{test.type=2}, \code{5} or \code{6}, lower spending is under the null hypothesis.
-For \code{test.type=1}, output value is \code{NULL}. See \link{Spending Functions} and manual.}
+For \code{test.type=1}, output value is \code{NULL}. See \link{Spending function overview} and manual.}
 \item{n.I}{Vector of length \code{k}. If values are input, same values are output.
 Otherwise, \code{n.I} will contain the sample size required at each analysis 
 to achieve desired \code{timing} and \code{beta} for the output value of \code{delta}. 
-If \code{delta=0} was input, then this is the sample size required for the specified group sequential design
-when a fixed design requires a sample size of \code{n.fix}.
+If \code{delta=0} was input, then this is the sample size required for the specified group sequential design when a fixed design requires a sample size of \code{n.fix}.
 If \code{delta=0} and \code{n.fix=1} then this is the relative sample size compared to a fixed design; 
-see details and examples.
+see details and examples.}
 \item{maxn.IPlan}{As input.}
 }
-}
 
-
 \details{
 Many parameters normally take on default values and thus do not require explicit specification.
 One- and two-sided designs are supported. Two-sided designs may be symmetric or asymmetric.
@@ -110,20 +103,18 @@
 \code{delta} and \code{n.fix} are used together to determine what sample size output options the user seeks.
 The default, \code{delta=0} and \code{n.fix=1}, results in a \sQuote{generic} design that may be used with any sampling
 situation. Sample size ratios are provided and the user multiplies these times the sample size for a fixed design
-to obtain the corresponding group sequential analysis times. If \code{delta}>0, \code{n.fix} is ignored, and 
+to obtain the corresponding group sequential analysis times. If \code{delta>0}, \code{n.fix} is ignored, and 
 \code{delta} is taken as the standardized effect size - the signal to noise ratio for a single observation;
 for example, the mean divided by the standard deviation for a one-sample normal problem. 
 In this case, the sample size at each analysis is computed. 
-When \code{delta}=0 and \code{n.fix}>1, \code{n.fix} is assumed to be the sample size for a fixed design
+When \code{delta=0} and \code{n.fix>1}, \code{n.fix} is assumed to be the sample size for a fixed design
 with no interim analyses. See examples below. 
 
-Following are further comments on the input argument \code{test.type}.
-The manual may also be worth some review in order to see actual formulas for boundary crossing 
-probabilities for the various options. 
-Options 3 and 5 assume the trial stops if the lower bound is crossed for Type I and Type II error computation 
-(binding lower bound). 
-For the purpose of computing Type I error, options 4 and 6 assume the trial continues if the lower bound is crossed 
-(non-binding lower bound). Beta-spending refers to error spending for the lower bound crossing probabilities
+Following are further comments on the input argument \code{test.type}, which is used to control what type of error measurements are used in trial design.
+The manual may also be worth some review in order to see actual formulas for boundary crossing probabilities for the various options. 
+Options 3 and 5 assume the trial stops if the lower bound is crossed for Type I and Type II error computation (binding lower bound). 
+For the purpose of computing Type I error, options 4 and 6 assume the trial continues if the lower bound is crossed (non-binding lower bound); that is, a Type I error can be made by crossing an upper bound after crossing a previous lower bound. 
+Beta-spending refers to error spending for the lower bound crossing probabilities
 under the alternative hypothesis (options 3 and 4).
 In this case, the final analysis lower and upper boundaries are assumed to be the same.
 The appropriate total beta spending (power) is determined by adjusting the maximum sample size
@@ -132,12 +123,12 @@
 deriving these designs can take longer than other options.
 Options 5 and 6 compute lower bound spending under the null hypothesis.
 }
-\seealso{\link{gsDesign-package}, \link{Group Sequential Plots}, \code{\link{gsProbability}}, 
-\link{Spending Functions}, \link{Wang-Tsiatis Bounds}}
-\note{The manual is not linked to this help file, but is available in library/gsdesign/doc/manual.pdf
+\seealso{\link{gsDesign package overview}, \link{Plots for group sequential designs}, \code{\link{gsProbability}}, 
+\link{Spending function overview}, \link{Wang-Tsiatis Bounds}}
+\note{The manual is not linked to this help file, but is available in library/gsdesign/doc/gsDesignManual.pdf
 in the directory where R is installed.}
 
-\author{Keaven Anderson \email{keaven\_anderson at merck.com}, Jennifer Sun, John Zhang}
+\author{Keaven Anderson \email{keaven\_anderson at merck.com}}
 \references{
 Jennison C and Turnbull BW (2000), \emph{Group Sequential Methods with Applications to Clinical Trials}.
 Boca Raton: Chapman and Hall.
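
[Editor's sketch, not part of the commit: the delta/n.fix behavior described in the details hunk above can be illustrated like this. It assumes the gsDesign package at this revision; nBinomial() is used only as one way to supply a fixed-design sample size.]

```r
# Sketch only: assumes library(gsDesign) from this revision is available
library(gsDesign)

# 'generic' design: delta=0, n.fix=1 (the defaults) gives relative
# sample sizes; x$n.I holds inflation factors versus a fixed design
x <- gsDesign(k=3, test.type=4)
x$n.I

# instead of multiplying by hand, pass the fixed-design sample size
# as n.fix; here it comes from nBinomial()
n.fix <- nBinomial(p1=.2, p2=.1, alpha=.05, beta=.1)
gsDesign(k=3, test.type=4, n.fix=n.fix)$n.I
```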

Modified: pkg/man/gsProbability.Rd
===================================================================
--- pkg/man/gsProbability.Rd	2009-03-25 13:52:09 UTC (rev 143)
+++ pkg/man/gsProbability.Rd	2009-05-04 20:33:26 UTC (rev 144)
@@ -41,15 +41,15 @@
 	crossing probabilities. Element \code{i,j} contains the boundary crossing probability at analysis \code{i} for the \code{j}-th
 	element of \code{theta} input. All boundary crossing is assumed to be binding for this computation; 
 	that is, the trial must stop if a boundary is crossed.}
-	\item{upper}{An list of the same form as \code{lower} containing the upper bound and upper boundary crossing probabilities.}
+	\item{upper}{A list of the same form as \code{lower} containing the upper bound and upper boundary crossing probabilities.}
 	\item{en}{A vector of the same length as \code{theta} containing expected sample sizes for the trial design
 	corresponding to each value in the vector \code{theta}.}
 	\item{r}{As input.}
 }
-\seealso{\link{Group Sequential Plots}, \code{\link{gsDesign}}, \link{gsDesign-package}}
-\note{The manual is not linked to this help file, but is available in library/gsdesign/doc/manual.pdf
+\seealso{\link{Plots for group sequential designs}, \code{\link{gsDesign}}, \link{gsDesign package overview}}
+\note{The manual is not linked to this help file, but is available in library/gsdesign/doc/gsDesignManual.pdf
 in the directory where R is installed.}
-\author{Keaven Anderson \email{keaven\_anderson at merck.com}, Jennifer Sun, John Zhang}
+\author{Keaven Anderson \email{keaven\_anderson at merck.com}}
 \references{
 Jennison C and Turnbull BW (2000), \emph{Group Sequential Methods with Applications to Clinical Trials}.
 Boca Raton: Chapman and Hall.
@@ -89,7 +89,7 @@
 z
 
 # default plottype is now 2
-# this is the same range for theta,  but plot now has theta on x axis
+# this is the same range for theta, but plot now has theta on x axis
 plot(z)
 }
 \keyword{design}
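
[Editor's sketch, not part of the commit: gsProbability() as documented above can be driven from an existing design via its d argument; this assumes the gsDesign package at this revision and mirrors the plot(z) example in the hunk.]

```r
# Sketch only: assumes library(gsDesign) from this revision is available
library(gsDesign)

# boundary crossing probabilities for an existing design evaluated
# over a range of treatment effects theta
x <- gsDesign(k=3, test.type=2)
z <- gsProbability(d=x, theta=x$delta * seq(0, 2, .25))
z

# plot boundary crossing probabilities with theta on the x axis
plot(z)
```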

Modified: pkg/man/gsbound.Rd
===================================================================
--- pkg/man/gsbound.Rd	2009-03-25 13:52:09 UTC (rev 143)
+++ pkg/man/gsbound.Rd	2009-05-04 20:33:26 UTC (rev 144)
@@ -2,16 +2,13 @@
 \alias{gsBound}
 \alias{gsBound1}
 
-\title{Boundary derivation}
+\title{2.6 Boundary derivation - low level}
 \description{\code{gsBound()} and \code{gsBound1()} are lower-level functions used to find boundaries for a group sequential design.
-They are not recommended (especially \code{gsBound1()} for casual users.
+They are not recommended (especially \code{gsBound1()}) for casual users.
 These functions do not adjust sample size as \code{gsDesign()} does to ensure appropriate power for a design.
-The function \code{gsBound1()} requires special attention to detail and knowledge of behavior when a design corresponding to the input
-parameters does not exist.
 
 \code{gsBound()} computes upper and lower bounds given boundary crossing probabilities assuming a mean of 0, the usual null hypothesis.
-\code{gsBound1()} computes the upper bound given a lower boundary, upper boundary crossing probabilities and an arbitrary mean 
-(\code{theta}).}
+\code{gsBound1()} computes the upper bound given a lower boundary, upper boundary crossing probabilities and an arbitrary mean (\code{theta}).}
 \usage{
 gsBound(I, trueneg, falsepos, tol=0.000001, r=18)
 gsBound1(theta, I, a, probhi, tol=0.000001, r=18, printerr=0)
@@ -31,24 +28,20 @@
 	Larger values provide larger number of grid points and greater accuracy.
 	Normally \code{r} will not be changed by the user.}
 	\item{printerr}{If this scalar argument is set to 1, this will print messages from the underlying C program.
-	Mainly intended to print notify user when an output solution does not match input specifications.
+	Mainly intended to notify user when an output solution does not match input specifications.
 	This is not intended to stop execution as this often occurs when deriving a design in \code{gsDesign}
 	that uses beta-spending.}
 }
 \value{
-  list(k=xx[[1]],theta=0.,I=xx[[2]],a=xx[[3]],b=xx[[4]],rates=rates,tol=xx[[7]],
-        r=xx[[8]],error=xx[[9]])
-    y<-list(k=xx[[1]],theta=xx[[2]],I=xx[[3]],a=xx[[4]],b=xx[[5]],
-            problo=xx[[6]],probhi=xx[[7]],tol=xx[[8]],r=xx[[9]],error=xx[[10]])
 Both routines return a list. Common items returned by the two routines are: 
 \item{k}{The length of vectors input; a scalar.}
 \item{theta}{As input in \code{gsBound1()}; 0 for \code{gsBound()}.}
 \item{I}{As input.}
-\item{a}{}
-\item{b}{}
+\item{a}{For \code{gsBound1()}, this is as input. For \code{gsBound()}, this is the derived lower boundary required to yield the input boundary crossing probabilities under the null hypothesis.}
+\item{b}{The derived upper boundary required to yield the input boundary crossing probabilities under the null hypothesis.}
 \item{tol}{As input.}
 \item{r}{As input.}
-\item{error}{Error code. 0 if no error; > 0 otherwise.}
+\item{error}{Error code. 0 if no error; greater than 0 otherwise.}
 
 \code{gsBound()} also returns the following items:
 \item{rates}{a list containing two items:}
@@ -56,18 +49,18 @@
 \item{trueneg}{vector of lower boundary crossing probabilities as input.}
 
 \code{gsBound1()} also returns the following items:
-\item{problo}{vector of lower boundary crossing probabilities; computed using input lower bound
-and derived upper bound.}
+\item{problo}{vector of lower boundary crossing probabilities; computed using input lower bound and derived upper bound.}
 \item{probhi}{vector of upper boundary crossing probabilities as input.}
 }
 
-\details{Forthcoming...}
+\details{The function \code{gsBound1()} requires special attention to detail and knowledge of behavior when a design corresponding to the input parameters does not exist.
+}
 
-\seealso{\link{gsDesign-package}, \code{\link{gsDesign}}, \code{\link{gsProbability}}}
+\seealso{\link{gsDesign package overview}, \code{\link{gsDesign}}, \code{\link{gsProbability}}}
 \note{The manual is not linked to this help file, but is available in library/gsdesign/doc/gsDesignManual.pdf
 in the directory where R is installed.}
 
-\author{Keaven Anderson \email{keaven\_anderson at merck.com}, Jennifer Sun, John Zhang}
+\author{Keaven Anderson \email{keaven\_anderson at merck.com}}
 \references{
 Jennison C and Turnbull BW (2000), \emph{Group Sequential Methods with Applications to Clinical Trials}.
 Boca Raton: Chapman and Hall.

Modified: pkg/man/nSurvival.Rd
===================================================================
--- pkg/man/nSurvival.Rd	2009-03-25 13:52:09 UTC (rev 143)
+++ pkg/man/nSurvival.Rd	2009-05-04 20:33:26 UTC (rev 144)
@@ -2,13 +2,13 @@
 \alias{nSurvival}
[TRUNCATED]

To get the complete diff run:
    svnlook diff /svnroot/gsdesign -r 144

