[Gsdesign-commits] r157 - pkg/tex/tmphelp/tex

noreply at r-forge.r-project.org noreply at r-forge.r-project.org
Fri May 22 23:22:29 CEST 2009


Author: keaven
Date: 2009-05-22 23:22:29 +0200 (Fri, 22 May 2009)
New Revision: 157

Added:
   pkg/tex/tmphelp/tex/Rd2latex.py
   pkg/tex/tmphelp/tex/bin_trial_doc.tex
   pkg/tex/tmphelp/tex/gsDesign_main_doc.tex
   pkg/tex/tmphelp/tex/other_doc.tex
   pkg/tex/tmphelp/tex/spending_functions_doc.tex
Removed:
   pkg/tex/tmphelp/tex/FarrMannSS.tex
   pkg/tex/tmphelp/tex/checkScalar.tex
   pkg/tex/tmphelp/tex/sflogisitic.tex
Log:
v 2 manual completion

Deleted: pkg/tex/tmphelp/tex/FarrMannSS.tex
===================================================================
--- pkg/tex/tmphelp/tex/FarrMannSS.tex	2009-05-22 21:17:25 UTC (rev 156)
+++ pkg/tex/tmphelp/tex/FarrMannSS.tex	2009-05-22 21:22:29 UTC (rev 157)
@@ -1,206 +0,0 @@
-\HeaderA{nBinomial}{3.2: Testing, Confidence Intervals and Sample Size for Comparing Two Binomial Rates}{nBinomial}
-\aliasA{ciBinomial}{nBinomial}{ciBinomial}
-\aliasA{simBinomial}{nBinomial}{simBinomial}
-\aliasA{testBinomial}{nBinomial}{testBinomial}
-\keyword{design}{nBinomial}
-\begin{Description}\relax
-Support is provided for sample size estimation, testing, confidence intervals and simulation for fixed sample size trials 
-(i.e., not group sequential or adaptive) with two arms and binary outcomes. 
-Both superiority and non-inferiority trials are considered.
-While all routines default to comparisons of risk-difference, 
-computations based on risk-ratio and odds-ratio are also provided. 
-
-\code{nBinomial()} computes sample size using the method of Farrington and Manning (1990) to derive sample size
-required to power a trial to test the difference between two binomial event rates. 
-The routine can be used for a test of superiority or non-inferiority.
-For a design that tests for superiority, \code{nBinomial()} is consistent with the method of Fleiss, Tytun, and Ury (1980)
-(but without the continuity correction) to test for differences between event rates.
-This routine is consistent with the Hmisc package routine bsamsize for superiority designs.
-Vector arguments allow computing sample sizes for multiple scenarios for comparative purposes.
-
-\code{testBinomial()} computes a Z- or Chi-square-statistic that compares two binomial event rates using 
-the method of Miettinen and Nurminen (1985). This can be used for superiority or non-inferiority testing.
-Vector arguments allow easy incorporation into simulation routines for fixed, group sequential and adaptive designs.
-
-\code{ciBinomial()} computes confidence intervals for 1) the difference between two rates, 2) the risk-ratio for two rates, 
-or 3) the odds-ratio for two rates. This procedure provides inference that is consistent with \code{testBinomial()} in that 
-the confidence intervals are produced by inverting the testing procedures in \code{testBinomial()}.
-
-\code{simBinomial()} performs simulations to estimate the power for a Miettinen and Nurminen (1985) test
-comparing two binomial rates for superiority or non-inferiority. 
-As noted in documentation for \code{bpower.sim()}, by using \code{testBinomial()} you can see that the formulas 
-without any continuity correction are quite accurate. 
-In fact, Type I error for a continuity-corrected test is significantly lower (Gordon and Watson, 1996)
-than the nominal rate. 
-Thus, as a default no continuity corrections are performed.
-\end{Description}
-\begin{Usage}
-\begin{verbatim}
-nBinomial(p1, p2, alpha=.025, beta=0.1, delta0=0, ratio=1, sided=1, outtype=1,
-          scale="Difference") 
-testBinomial(x1, x2, n1, n2, delta0=0, chisq=0, adj=0,
-             scale="Difference", tol=.1e-10)
-ciBinomial(x1, x2, n1, n2, alpha=.05, adj=0, scale="Difference")
-simBinomial(p1, p2, n1, n2, delta0=0, nsim=10000, chisq=0, adj=0,
-            scale="Difference")
-\end{verbatim}
-\end{Usage}
-\begin{Arguments}
-For \code{simBinomial()} all arguments must have length 1.
-In general, arguments for \code{nBinomial()}, \code{testBinomial()} and \code{ciBinomial()} may be scalars or vectors, 
-in which case a vector of results (sample sizes, test statistics or confidence limits, respectively) is returned.
-There can be a mix of scalar and vector arguments. 
-All arguments specified using vectors must have the same length.  
-
-\begin{ldescription}
-\item[\code{p1}] event rate in group 1 under the alternative hypothesis
-\item[\code{p2}] event rate in group 2 under the alternative hypothesis
-\item[\code{alpha}] type I error; see \code{sided} below to distinguish between 1- and 2-sided tests
-\item[\code{beta}] type II error
-\item[\code{delta0}] A value of 0 (the default) always represents no difference between treatment groups under the null hypothesis.
-\code{delta0} is interpreted differently depending on the value of the parameter \code{scale}. 
-If \code{scale="Difference"} (the default), 
-\code{delta0} is the difference in event rates under the null hypothesis.
-If \code{scale="RR"}, 
-\code{delta0} is the relative risk of event rates under the null hypothesis minus 1 (\code{p2 / p1 - 1}).
-If \code{scale="OR"}, 
-\code{delta0} is the difference in natural logarithm of the odds-ratio under the null hypothesis
-\code{log(p2 / (1 - p2)) - log(p1 / (1 - p1))}.
-
-\item[\code{ratio}] sample size ratio for group 2 divided by group 1
-\item[\code{sided}] 2 for 2-sided test, 1 for 1-sided test
-\item[\code{outtype}] 1 (default) returns total sample size; 2 returns sample size for each group (\code{n1}, \code{n2}); 
-3 returns a list of computed values (see Value below)
-
-\item[\code{x1}] Number of "successes" in the control group
-\item[\code{x2}] Number of "successes" in the experimental group
-\item[\code{n1}] Number of observations in the control group
-\item[\code{n2}] Number of observations in the experimental group
-\item[\code{chisq}] An indicator (scalar or vector) of whether a chi-square statistic is to be computed.
-If 0 (default), the difference in event rates divided by its standard error under the null hypothesis is used. 
-Otherwise, a Miettinen and Nurminen chi-square statistic for a 2 x 2 table is used.
-\item[\code{adj}] With \code{adj=1}, the standard variance with a continuity correction is used for a Miettinen and Nurminen test statistic. 
-This includes a factor of n / (n - 1), where n is the total sample size. If \code{adj} is not 1, 
-this factor is not applied. The default is \code{adj=0} since nominal Type I error is generally conservative with \code{adj=1}
-(Gordon and Watson, 1996).
-\item[\code{scale}] "Difference", "RR" or "OR"; see the \code{delta0} documentation above and Details. 
-This is a scalar argument.
-\item[\code{nsim}] The number of simulations to be performed in \code{simBinomial()}
-\item[\code{tol}] Default should probably be used; this is used to deal with a rounding issue in interim calculations
-\end{ldescription}
-\end{Arguments}
-\begin{Details}\relax
-Testing is 2-sided when a Chi-square statistic is used and 1-sided when a Z-statistic is used.
-Thus, these 2 options will produce substantially different results, in general.
-For non-inferiority, 1-sided testing is appropriate.
-
-You may wish to round sample sizes up using \code{ceiling()}.
-
-Farrington and Manning (1990) begin with event rates \code{p1} and \code{p2} under the alternative hypothesis
-and a difference between these rates under the null hypothesis, \code{delta0}.
-From these values, actual rates under the null hypothesis are computed, which are labeled \code{p10} and \code{p20}
-when \code{outtype=3}.
-The rates \code{p1} and \code{p2} are used to compute the variance for a Z-test comparing rates under the alternative hypothesis,
-while \code{p10} and \code{p20} are used under the null hypothesis.
-\end{Details}
-\begin{Value}
-\code{testBinomial} and \code{simBinomial} each return a vector of either Chi-square or Z test statistics. 
-These may be compared to an appropriate cutoff point (e.g., \code{qnorm(.975)} or \code{qchisq(.95,1)}).
-
-With \code{outtype=2}, \code{nBinomial()} returns a list containing two vectors \code{n1} and \code{n2} with
-sample sizes for groups 1 and 2, respectively.
-With the default \code{outtype=1}, a vector of total sample sizes is returned.
-With \code{outtype=3}, \code{nBinomial()} returns a list as follows:
-\begin{ldescription}
-\item[\code{n}] A vector with total samples size required for each event rate comparison specified
-\item[\code{n1}] A vector of sample sizes for group 1 for each event rate comparison specified
-\item[\code{n2}] A vector of sample sizes for group 2 for each event rate comparison specified
-\item[\code{sigma0}] A vector containing the variance of the treatment effect difference under the null hypothesis
-\item[\code{sigma1}] A vector containing the variance of the treatment effect difference under the alternative hypothesis
-\item[\code{p1}] As input
-\item[\code{p2}] As input
-\item[\code{pbar}] Returned only for superiority testing (\code{delta0=0}), the weighted average of \code{p1} and \code{p2} using weights
-\code{n1} and \code{n2}
-\item[\code{p10}] group 1 event rate used for the null hypothesis
-\item[\code{p20}] group 2 event rate used for the null hypothesis
-\end{ldescription}
-\end{Value}
-\begin{Author}\relax
-Keaven Anderson \email{keaven\_anderson at merck.com}
-\end{Author}
-\begin{References}\relax
-Farrington, CP and Manning, G (1990), Test statistics and sample size formulae for comparative binomial trials with null hypothesis
-of non-zero risk difference or non-unity relative risk. \emph{Statistics in Medicine};9:1447-1454.
-
-Fleiss, JL, Tytun, A and Ury, HK (1980), A simple approximation for calculating sample sizes for comparing independent proportions.
-\emph{Biometrics};36:343-346.
-
-Gordon, I and Watson, R (1996), The myth of continuity-corrected sample size formulae. \emph{Biometrics};52:71-76.
-
-Miettinen, O and Nurminen, M (1985), Comparative analysis of two rates. \emph{Statistics in Medicine};4:213-226.
-\end{References}
-\begin{Examples}
-\begin{ExampleCode}
-# Compute a Z-test statistic comparing 39/500 to 13/500
-# use continuity correction in variance
-x <- testBinomial(x1=39, x2=13, n1=500, n2=500, adj=1)
-x
-pnorm(x, lower.tail=FALSE)
-
-# Compute with unadjusted variance
-x0 <- testBinomial(x1=39, x2=23, n1=500, n2=500)
-x0
-pnorm(x0, lower.tail=FALSE)
-
-# Perform 500k simulations to test validity of the above asymptotic p-values 
-sum(as.real(x0) <= simBinomial(p1=.078, p2=.078, n1=500, n2=500, nsim=500000)) / 500000
-sum(as.real(x0) <= simBinomial(p1=.052, p2=.052, n1=500, n2=500, nsim=500000)) / 500000
-
-# Perform a non-inferiority test to see if p2=400 / 500 is within 5 pct of
-# p1=410 / 500; use a z-statistic with unadjusted variance
-x <- testBinomial(x1=410, x2=400, n1=500, n2=500, delta0= -.05)
-x
-pnorm(x, lower.tail=FALSE)
-
-# since chi-square tests equivalence (a 2-sided test) rather than non-inferiority (a 1-sided test), 
-# the result is quite different
-pchisq(testBinomial(x1=410, x2=400, n1=500, n2=500, delta0= -.05, chisq=1, 
-                    adj=1), 1, lower.tail=FALSE)
-
-# now simulate the z-statistic without continuity-corrected variance
-sum(qnorm(.975) <= simBinomial(p1=.8, p2=.8, n1=500, n2=500, nsim=1000000)) / 1000000
-
-# compute a sample size to show non-inferiority with a 5 pct margin
-nBinomial(p1=.2, p2=.2, delta0=.05, alpha=.025, sided=1, beta=.1)
-
-# assuming a slight advantage in the experimental group lowers sample size requirement
-nBinomial(p1=.2, p2=.19, delta0=.05, alpha=.025, sided=1, beta=.1)
-
-# compute a sample size for comparing 15% vs 10% event rates with 1 to 2 randomization
-nBinomial(p1=.15, p2=.1, beta=.2, ratio=2, alpha=.05)
-
-# now look at total sample size using 1-1 randomization
-nBinomial(p1=.15, p2=.1, beta=.2, alpha=.05)
-
-# look at power plot under different control event rate and
-# relative risk reductions
-p1 <- seq(.075, .2, .000625)
-p2 <- p1 * 2 / 3
-y1 <- nBinomial(p1, p2, beta=.2, outtype=1, alpha=.025, sided=1)
-p2 <- p1 * .75
-y2 <- nBinomial(p1, p2, beta=.2, outtype=1, alpha=.025, sided=1)
-p2 <- p1 * .6
-y3 <- nBinomial(p1, p2, beta=.2, outtype=1, alpha=.025, sided=1)
-p2 <- p1 * .5
-y4 <- nBinomial(p1, p2, beta=.2, outtype=1, alpha=.025, sided=1)
-plot(p1, y1, type="l", ylab="Sample size", xlab="Control group event rate",
-     ylim=c(0, 6000), lwd=2)
-title(main="Binomial sample size computation for 80 pct power")
-lines(p1, y2, lty=2, lwd=2)
-lines(p1, y3, lty=3, lwd=2)
-lines(p1, y4, lty=4, lwd=2)
-legend(x=c(.15, .2),y=c(4500, 6000),lty=c(2, 1, 3, 4), lwd=2,
-       legend=c("25 pct reduction", "33 pct reduction", "40 pct reduction",
-                "50 pct reduction"))
-\end{ExampleCode}
-\end{Examples}
-
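For readers following the deleted nBinomial/testBinomial documentation above: in the superiority case (\code{delta0=0}, \code{adj=0}) the test reduces to a familiar two-proportion Z statistic with pooled variance. A minimal Python sketch of that special case only (illustrative; \code{testBinomial()} itself uses the Miettinen and Nurminen constrained-variance statistic):

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Pooled two-proportion Z statistic for H0: p1 == p2.

    Illustrates only the delta0=0 superiority special case; the
    gsDesign testBinomial() routine instead uses the Miettinen and
    Nurminen constrained-MLE variance.
    """
    p1_hat, p2_hat = x1 / float(n1), x2 / float(n2)
    pbar = (x1 + x2) / float(n1 + n2)  # pooled event rate
    se = math.sqrt(pbar * (1 - pbar) * (1.0 / n1 + 1.0 / n2))
    return (p1_hat - p2_hat) / se

# Same counts as the first documentation example: 39/500 vs 13/500
z = two_proportion_z(39, 500, 13, 500)
```

With these counts the statistic is about 3.70, i.e. a one-sided p-value near 0.0001, in line with the example's comparison against `qnorm(.975)`.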

Added: pkg/tex/tmphelp/tex/Rd2latex.py
===================================================================
--- pkg/tex/tmphelp/tex/Rd2latex.py	                        (rev 0)
+++ pkg/tex/tmphelp/tex/Rd2latex.py	2009-05-22 21:22:29 UTC (rev 157)
@@ -0,0 +1,118 @@
+#!/usr/bin/env python
+
+# convert Rd files to tex files, so they can be included
+# as part of the user manual
+
+import sys, os
+
+
+
+# gsDesign package help file
+gsDesign_package_files = [
+    "gsDesign-package",
+    ]
+
+# second set of gsDesign files
+main_gs_design_files = [
+    "gsDesign",
+    "gsProbability",
+    "plot.gsDesign",
+    "gsCP",
+    "gsBoundCP",
+    "gsbound",
+    ]
+
+# third set of gsDesign files
+bin_trial_files = [
+    "normalGrid",
+    "binomial",
+    "nSurvival",
+    ]
+
+# spending function files
+spendfun_files = [
+    "spendingfunctions",
+    "sfHSD",
+    "sfpower",
+    "sfexp",
+    "sfLDPocock",
+    "sfpoints",
+    "sflogistic",
+    "sfTDist",
+    ]
+
+# other files
+other_files = [
+    "Wang-Tsiatis-bounds",
+    "testutils",
+    ]
+
+all_files = gsDesign_package_files + main_gs_design_files + bin_trial_files + spendfun_files + other_files
+
+
+# Make temporary directories to hold the Rd and tex files
+#os.system("mkdir ./tmphelp")
+#os.system("mkdir ./tmphelp/Rd")
+#os.system("mkdir ./tmphelp/tex")
+
+# source paths for all the help files
+gsDesign_man_path = "../man/"
+
+Rd_ext = ".Rd"
+latex_ext = ".tex"
+src_path = "./tmphelp/Rd/"
+dest_path = "./tmphelp/tex/"
+cmd = "R CMD Rdconv --type=latex "
+output_files = ["gsDesign_package", "gsDesign_main", "bin_trial", "spending_functions", "other"] 
+
+# copy all the source files from their SVN directory to the tmp .Rd dir
+for file_name in all_files:
+    cp_cmd = "cp " + gsDesign_man_path + file_name + Rd_ext + " " + src_path + file_name + Rd_ext
+    os.system(cp_cmd)    
+
+
+# convert Rd to tex
+for file_name in all_files:
+    exec_cmd = cmd + src_path + file_name + Rd_ext + " " + "-o " + dest_path + file_name + latex_ext
+    os.system(exec_cmd)
+                       
+
+# check to see all tex files are generated
+for file_name in all_files:
+    if not os.path.exists(dest_path + file_name + latex_ext):
+        print "ERROR: " + dest_path + file_name + latex_ext + " was not created"
+        sys.exit(1)
+        
+
+# construct all tex files into five main tex files based on their classification
+for out_file in output_files:
+    f = file(out_file + "_doc.tex", 'w')
+
+    if out_file=="gsDesign_package":
+        content = "\section{Function and Class Reference}\n" + "\subsection{gsDesign Package}\n"
+        for f_name in gsDesign_package_files:
+            content = content + "\input{" + dest_path + f_name + "}\n"
+    elif out_file=="gsDesign_main":
+        content = "\subsection{gsDesign main functions}"
+        for f_name in main_gs_design_files:
+            content = content + "\input{" + dest_path + f_name + "}\n"
+    elif out_file=="bin_trial":
+        content = "\subsection{Binomial trial functions}"
+        for f_name in bin_trial_files:
+            content = content + "\input{" + dest_path + f_name + "}\n"
+    elif out_file=="spending_functions":
+        content = "\subsection{Spending Functions}"
+        for f_name in spendfun_files:
+            content = content + "\input{" + dest_path + f_name + "}\n"
+    elif out_file=="other":
+        content = "\subsection{Other Files}"
+        for f_name in other_files:
+            content = content + "\input{" + dest_path + f_name + "}\n"
+    else:
+        print "ERROR: file " + out_file + " is not specified"
+        sys.exit(1)
+    
+    f.write(content)
+    f.close() # close file
+    
+
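The five-branch if/elif chain above could equally be driven by one mapping from output file to (heading, file list). A sketch of that alternative, with abbreviated stand-in lists (a hypothetical refactor, not part of the committed script):

```python
dest_path = "./tmphelp/tex/"

# Hypothetical data-driven rewrite of the classification block above;
# the file lists are abbreviated stand-ins for those in Rd2latex.py.
sections = {
    "bin_trial": ("\\subsection{Binomial trial functions}",
                  ["normalGrid", "binomial", "nSurvival"]),
    "other": ("\\subsection{Other Files}",
              ["Wang-Tsiatis-bounds", "testutils"]),
}

def build_doc(out_file):
    """Return the .tex content for one output document."""
    heading, names = sections[out_file]
    # Like the original, the heading and first \input share a line.
    body = "".join("\\input{%s%s}\n" % (dest_path, n) for n in names)
    return heading + body
```

The returned string matches the layout of the committed `bin_trial_doc.tex` (heading and first `\input` on one line, one `\input` per source file after).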

Added: pkg/tex/tmphelp/tex/bin_trial_doc.tex
===================================================================
--- pkg/tex/tmphelp/tex/bin_trial_doc.tex	                        (rev 0)
+++ pkg/tex/tmphelp/tex/bin_trial_doc.tex	2009-05-22 21:22:29 UTC (rev 157)
@@ -0,0 +1,3 @@
+\subsection{Binomial trial functions}\input{./tmphelp/tex/normalGrid}
+\input{./tmphelp/tex/binomial}
+\input{./tmphelp/tex/nSurvival}

Deleted: pkg/tex/tmphelp/tex/checkScalar.tex
===================================================================
--- pkg/tex/tmphelp/tex/checkScalar.tex	2009-05-22 21:17:25 UTC (rev 156)
+++ pkg/tex/tmphelp/tex/checkScalar.tex	2009-05-22 21:22:29 UTC (rev 157)
@@ -1,85 +0,0 @@
-\HeaderA{checkScalar}{verify variable properties}{checkScalar}
-\aliasA{checkLengths}{checkScalar}{checkLengths}
-\aliasA{checkRange}{checkScalar}{checkRange}
-\aliasA{checkVector}{checkScalar}{checkVector}
-\aliasA{isInteger}{checkScalar}{isInteger}
-\keyword{programming}{checkScalar}
-\begin{Description}\relax
-Utility functions to verify an object's properties, including whether it is a scalar or vector,
-the class, the length, and (if numeric) whether the range of values is on a specified interval. Additionally,
-the \code{checkLengths} function can be used to ensure that all the supplied inputs have equal lengths.
-\end{Description}
-\begin{Usage}
-\begin{verbatim}
-isInteger(x)
-checkScalar(x, isType = "numeric", ...)
-checkVector(x, isType = "numeric", ..., length=NULL) 
-checkRange(x, interval = 0:1, inclusion = c(TRUE, TRUE), varname = deparse(substitute(x)))
-checkLengths(...)
-\end{verbatim}
-\end{Usage}
-\begin{Arguments}
-\begin{ldescription}
-\item[\code{x}] any object.
-\item[\code{isType}] character string defining the class that the input object is expected to be.
-\item[\code{length}] integer specifying the expected length of the object in the case it is a vector. If \code{length=NULL}, the default,
-then no length check is performed.
-\item[\code{interval}] two-element numeric vector defining the interval over which the input object is expected to be contained. 
-Use the \code{inclusion} argument to define the boundary behavior.
-\item[\code{inclusion}] two-element logical vector defining the boundary behavior of the specified interval. A \code{TRUE} value
-denotes inclusion of the corresponding boundary. For example, if \code{interval=c(3,6)} and \code{inclusion=c(FALSE,TRUE)},
-then all the values of the input object are verified to be on the interval (3,6].
-\item[\code{varname}] character string defining the name of the input variable as sent into the function by the caller. 
-This is used primarily as a mechanism to specify the name of the variable being tested when \code{checkRange} is being called
-within a function.
-\item[\code{...}] For the \code{\LinkA{checkScalar}{checkScalar}} and \code{\LinkA{checkVector}{checkVector}} functions, this input represents additional 
-arguments sent directly to the \code{\LinkA{checkRange}{checkRange}} function. For the \code{\LinkA{checkLengths}{checkLengths}} function, this input
-represents the arguments to check for equal lengths.
-\end{ldescription}
-\end{Arguments}
-\begin{Details}\relax
-\code{isInteger} is similar to \code{\LinkA{is.integer}{is.integer}} except that \code{isInteger(1)} returns \code{TRUE} whereas \code{is.integer(1)} returns \code{FALSE}.
-
-\code{checkScalar} is used to verify that the input object is a scalar as well as the other properties specified above. 
-
-\code{checkVector} is used to verify that the input object is an atomic vector as well as the other properties as defined above.
-
-\code{checkRange} is used to check whether the numeric input object's values reside on the specified interval. 
-If any of the values are outside the specified interval, an error is generated.
-
-\code{checkLengths} is used to check whether all of the supplied inputs have equal lengths.
-\end{Details}
-\begin{Examples}
-\begin{ExampleCode}
-# check whether input is an integer
-isInteger(1)
-isInteger(1:5)
-try(isInteger("abc")) # expect error
-
-# check whether input is an integer scalar
-checkScalar(3, "integer")
-
-# check whether input is an integer scalar that resides 
-# on the interval [3, 6]. Then test for interval (3, 6].
-checkScalar(3, "integer", c(3,6))
-try(checkScalar(3, "integer", c(3,6), c(FALSE, TRUE))) # expect error
-
-# check whether the input is an atomic vector of class numeric,
-# of length 3, and whose values all reside on the interval [1, 10)
-x <- c(3, pi, exp(1))
-checkVector(x, "numeric", c(1, 10), c(TRUE, FALSE), length=3)
-
-# do the same but change the expected length
-try(checkVector(x, "numeric", c(1, 10), c(TRUE, FALSE), length=2)) # expect error
-
-# create faux function to check input variable
-foo <- function(moo) checkVector(moo, "character")
-foo(letters)
-try(foo(1:5)) # expect error with function and argument name in message
-
-# check for equal lengths of various inputs
-checkLengths(1:2, 2:3, 3:4)
-try(checkLengths(1,2,3,4:5)) # expect error
-\end{ExampleCode}
-\end{Examples}
-
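The checkRange semantics described above (a two-element interval plus per-boundary inclusion flags) translate directly to other languages; a minimal Python sketch under the same conventions (names and error type are illustrative, not part of the package):

```python
def check_range(x, interval=(0, 1), inclusion=(True, True), varname="x"):
    """Raise ValueError unless every value of x lies on the interval.

    inclusion[i] == True means the corresponding boundary is allowed,
    mirroring the checkRange(interval=, inclusion=) convention above.
    """
    lo, hi = interval
    lo_ok = (lambda v: v >= lo) if inclusion[0] else (lambda v: v > lo)
    hi_ok = (lambda v: v <= hi) if inclusion[1] else (lambda v: v < hi)
    values = x if hasattr(x, "__iter__") else [x]
    for v in values:
        if not (lo_ok(v) and hi_ok(v)):
            raise ValueError("%s not on interval %s%s, %s%s" % (
                varname,
                "[" if inclusion[0] else "(", lo, hi,
                "]" if inclusion[1] else ")"))
    return True
```

As in the R examples, `check_range(3, (3, 6), (True, True))` passes while `check_range(3, (3, 6), (False, True))` raises, since 3 is excluded from (3, 6].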

Added: pkg/tex/tmphelp/tex/gsDesign_main_doc.tex
===================================================================
--- pkg/tex/tmphelp/tex/gsDesign_main_doc.tex	                        (rev 0)
+++ pkg/tex/tmphelp/tex/gsDesign_main_doc.tex	2009-05-22 21:22:29 UTC (rev 157)
@@ -0,0 +1,6 @@
+\subsection{gsDesign main functions}\input{./tmphelp/tex/gsDesign}
+\input{./tmphelp/tex/gsProbability}
+\input{./tmphelp/tex/plot.gsDesign}
+\input{./tmphelp/tex/gsCP}
+\input{./tmphelp/tex/gsBoundCP}
+\input{./tmphelp/tex/gsbound}

Added: pkg/tex/tmphelp/tex/other_doc.tex
===================================================================
--- pkg/tex/tmphelp/tex/other_doc.tex	                        (rev 0)
+++ pkg/tex/tmphelp/tex/other_doc.tex	2009-05-22 21:22:29 UTC (rev 157)
@@ -0,0 +1,2 @@
+\subsection{Other Files}\input{./tmphelp/tex/Wang-Tsiatis-bounds}
+\input{./tmphelp/tex/testutils}

Deleted: pkg/tex/tmphelp/tex/sflogisitic.tex
===================================================================
--- pkg/tex/tmphelp/tex/sflogisitic.tex	2009-05-22 21:17:25 UTC (rev 156)
+++ pkg/tex/tmphelp/tex/sflogisitic.tex	2009-05-22 21:22:29 UTC (rev 157)
@@ -1,137 +0,0 @@
-\HeaderA{sfLogistic}{4.6: 2-parameter Spending Function Families}{sfLogistic}
-\aliasA{sfBetaDist}{sfLogistic}{sfBetaDist}
-\aliasA{sfCauchy}{sfLogistic}{sfCauchy}
-\aliasA{sfExtremeValue}{sfLogistic}{sfExtremeValue}
-\aliasA{sfExtremeValue2}{sfLogistic}{sfExtremeValue2}
-\aliasA{sfNormal}{sfLogistic}{sfNormal}
-\keyword{design}{sfLogistic}
-\begin{Description}\relax
-The functions \code{sfLogistic()}, \code{sfNormal()}, \code{sfExtremeValue()}, \code{sfExtremeValue2()}, \code{sfCauchy()}, 
-and \code{sfBetaDist()} are all 2-parameter spending function families.
-These provide increased flexibility in some situations where the flexibility of a one-parameter spending function 
-family is not sufficient.
-These functions all allow fitting of two points on a cumulative spending function curve; in this case, four parameters
-are specified indicating an x and a y coordinate for each of 2 points.
-Normally each of these functions will be passed to \code{gsDesign()} in the parameter 
-\code{sfu} for the upper bound or
-\code{sfl} for the lower bound to specify a spending function family for a design.
-In this case, the user does not need to know the calling sequence.
-The calling sequence is useful, however, when the user wishes to plot a spending function as demonstrated below
-in examples; note, however, that an automatic alpha- and beta-spending function plot is also available.
-\end{Description}
-\begin{Usage}
-\begin{verbatim}
-sfLogistic(alpha, t, param)
-sfNormal(alpha, t, param)
-sfExtremeValue(alpha, t, param)
-sfExtremeValue2(alpha, t, param)
-sfCauchy(alpha, t, param)
-sfBetaDist(alpha, t, param)
-\end{verbatim}
-\end{Usage}
-\begin{Arguments}
-\begin{ldescription}
-\item[\code{alpha}] Real value > 0 and no more than 1. Normally, alpha=0.025 for one-sided Type I error specification
-or 0.1 for Type II error specification. However, this could be set to 1 if for descriptive purposes
-you wish to see the proportion of spending as a function of the proportion of sample size or information.
-\item[\code{t}] A vector of points with increasing values from 0 to 1, inclusive. Values of the proportion of 
-sample size or information for which the spending function will be computed.
-\item[\code{param}] In the two-parameter specification, \code{sfBetaDist()} requires 2 positive values, while
-\code{sfLogistic()}, \code{sfNormal()}, \code{sfExtremeValue()}, \code{sfExtremeValue2()} and \code{sfCauchy()} require the first parameter 
-to be any real value and the second to be a positive value. 
-The four parameter specification is \code{c(t1,t2,u1,u2)}
-where the objective is that \code{sf(t1)=alpha*u1} and \code{sf(t2)=alpha*u2}.
-In this parameterization, all four values must be between 0 and 1 and \code{t1 < t2}, \code{u1 < u2}.
-
-\end{ldescription}
-\end{Arguments}
-\begin{Details}\relax
-\code{sfBetaDist(alpha,t,param)} is simply \code{alpha} times the incomplete beta cumulative distribution 
-function with parameters
-\eqn{a}{} and \eqn{b}{} passed in \code{param} evaluated at values passed in \code{t}. 
-
-The other spending functions take the form
-\deqn{\alpha F(a+bF^{-1}(t))}{}
-where \eqn{F()}{} is a cumulative distribution function that is strictly increasing on the real line (logistic for \code{sfLogistic()}, 
-normal for \code{sfNormal()}, extreme value for \code{sfExtremeValue()} and Cauchy for \code{sfCauchy()}) and
-\eqn{F^{-1}()}{} is its inverse.
-
-For the logistic spending function this simplifies to
-\deqn{\alpha (1-(1+e^a(t/(1-t))^b)^{-1}).}{}
-
-For the extreme value distribution with \deqn{F(x)=\exp(-\exp(-x))}{} this simplifies to 
-\deqn{\alpha \exp(-e^a (-\ln t)^b).}{} Since the extreme value distribution is not symmetric, there is also a version
-where the standard distribution is flipped about 0. This is reflected in \code{sfExtremeValue2()} where
-\deqn{F(x)=1-\exp(-\exp(x)).}{}
-\end{Details}
-\begin{Value}
-An object of type \code{spendfn}. See \code{\LinkA{Spending Functions}{Spending Functions}} for further details.
-\end{Value}
-\begin{Note}\relax
-The manual is not linked to this help file, but is available in library/gsdesign/doc/manual.pdf
-in the directory where R is installed.
-\end{Note}
-\begin{Author}\relax
-Keaven Anderson \email{keaven\_anderson at merck.com}, Jennifer Sun, John Zhang
-\end{Author}
-\begin{References}\relax
-Jennison C and Turnbull BW (2000), \emph{Group Sequential Methods with Applications to Clinical Trials}.
-Boca Raton: Chapman and Hall.
-\end{References}
-\begin{SeeAlso}\relax
-\LinkA{Spending Functions}{Spending Functions}, \code{\LinkA{gsDesign}{gsDesign}}, \LinkA{gsDesign-package}{gsDesign.Rdash.package}
-\end{SeeAlso}
-\begin{Examples}
-\begin{ExampleCode}
-# design a 4-analysis trial using a Kim-DeMets spending function 
-# for both lower and upper bounds 
-x <- gsDesign(k=4, sfu=sfPower, sfupar=3, sfl=sfPower, sflpar=1.5)
-
-# print the design
-x
-
-# plot the alpha- and beta-spending functions
-plot(x, plottype=5)
-
-# start by showing how to fit two points with sfLogistic
-# plot the spending function using many points to obtain a smooth curve
-# note that the curve fits the points x=.1,  y=.01 and x=.4,  y=.1 
-# specified in the 3rd parameter of sfLogistic
-plot(0:100/100,  sfLogistic(1, 0:100/100, c(.1, .4, .01, .1))$spend, 
-    xlab="Proportion of final sample size", 
-    ylab="Cumulative Type I error spending", 
-    main="Logistic Spending Function Examples", 
-type="l", cex.main=.9)
-lines(0:100/100, sfLogistic(1, 0:100/100, c(.01, .1, .1, .4))$spend, lty=2)
-
-# now just give a=0 and b=1 as 3rd parameters for sfLogistic 
-lines(0:100/100, sfLogistic(1, 0:100/100, c(0, 1))$spend, lty=3)
-
-# try a couple with unconventional shapes again using the xy form in the 3rd parameter
-lines(0:100/100, sfLogistic(1, 0:100/100, c(.4, .6, .1, .7))$spend, lty=4)
-lines(0:100/100, sfLogistic(1, 0:100/100, c(.1, .7, .4, .6))$spend, lty=5)
-legend(x=c(.0, .475), y=c(.76, 1.03), lty=1:5, 
-legend=c("Fit (.1, .01) and (.4, .1)", "Fit (.01, .1) and (.1, .4)", 
-    "a=0,  b=1", "Fit (.4, .1) and (.6, .7)", "Fit (.1, .4) and (.7, .6)"))
-
-# set up a function to plot comparisons of all 2-parameter spending functions
-plotsf <- function(alpha, t, param)
-{   
-    plot(t, sfCauchy(alpha, t, param)$spend, xlab="Proportion of enrollment", 
-    ylab="Cumulative spending", type="l", lty=2)
-    lines(t, sfExtremeValue(alpha, t, param)$spend, lty=5)
-    lines(t, sfLogistic(alpha, t, param)$spend, lty=1)
-    lines(t, sfNormal(alpha, t, param)$spend, lty=3)
-    lines(t, sfExtremeValue2(alpha, t, param)$spend, lty=6, col=2)
-    lines(t, sfBetaDist(alpha, t, param)$spend, lty=7, col=3)
-    legend(x=c(.05, .475), y=.025*c(.55, .9), lty=c(1, 2, 3, 5, 6, 7), col=c(1, 1, 1, 1, 2, 3), 
-        legend=c("Logistic", "Cauchy", "Normal", "Extreme value", 
-        "Extreme value 2", "Beta distribution"))
-}
-# do comparison for a design with conservative early spending
-# note that Cauchy spending function is quite different from the others
-param <- c(.25, .5, .05, .1)
-plotsf(.025, t=seq(0, 1, .01), param)
-\end{ExampleCode}
-\end{Examples}
-
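The logistic simplification quoted in the deleted Details section, alpha * (1 - (1 + e^a (t/(1-t))^b)^{-1}), can be checked numerically against the general form alpha * F(a + b * F^{-1}(t)). A short Python sketch (standard logistic CDF assumed; parameter values are arbitrary illustrations):

```python
import math

def logistic_cdf(x):
    return 1.0 / (1.0 + math.exp(-x))

def logistic_quantile(t):
    return math.log(t / (1.0 - t))

def sf_general(alpha, t, a, b):
    """General spending form alpha * F(a + b * F^{-1}(t))."""
    return alpha * logistic_cdf(a + b * logistic_quantile(t))

def sf_closed(alpha, t, a, b):
    """Closed form alpha * (1 - (1 + e^a (t/(1-t))^b)^{-1})."""
    return alpha * (1.0 - 1.0 / (1.0 + math.exp(a) * (t / (1.0 - t)) ** b))

# The two forms agree across the interior of (0, 1).
for t in (0.1, 0.3, 0.5, 0.9):
    assert abs(sf_general(0.025, t, 0.5, 1.5)
               - sf_closed(0.025, t, 0.5, 1.5)) < 1e-12
```

Setting alpha=1, as the Arguments section suggests for descriptive plots, makes the function a cumulative spending proportion on (0, 1).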

Added: pkg/tex/tmphelp/tex/spending_functions_doc.tex
===================================================================
--- pkg/tex/tmphelp/tex/spending_functions_doc.tex	                        (rev 0)
+++ pkg/tex/tmphelp/tex/spending_functions_doc.tex	2009-05-22 21:22:29 UTC (rev 157)
@@ -0,0 +1,8 @@
+\subsection{Spending Functions}\input{./tmphelp/tex/spendingfunctions}
+\input{./tmphelp/tex/sfHSD}
+\input{./tmphelp/tex/sfpower}
+\input{./tmphelp/tex/sfexp}
+\input{./tmphelp/tex/sfLDPocock}
+\input{./tmphelp/tex/sfpoints}
+\input{./tmphelp/tex/sflogistic}
+\input{./tmphelp/tex/sfTDist}


