[Vegan-commits] r219 - pkg/man

noreply at r-forge.r-project.org noreply at r-forge.r-project.org
Sat Feb 16 17:22:40 CET 2008


Author: gsimpson
Date: 2008-02-16 17:22:40 +0100 (Sat, 16 Feb 2008)
New Revision: 219

Modified:
   pkg/man/CCorA.Rd
   pkg/man/cca.Rd
   pkg/man/cca.object.Rd
   pkg/man/decorana.Rd
   pkg/man/decostand.Rd
Log:
Merging r 218 to trunk

Modified: pkg/man/CCorA.Rd
===================================================================
--- pkg/man/CCorA.Rd	2008-02-16 16:19:46 UTC (rev 218)
+++ pkg/man/CCorA.Rd	2008-02-16 16:22:40 UTC (rev 219)
@@ -2,6 +2,7 @@
 \alias{CCorA}
 \alias{print.CCorA}
 \alias{biplot.CCorA}
+\concept{ordination}
 
 \title{Canonical Correlation Analysis}
 
@@ -11,50 +12,57 @@
 
 \usage{
 CCorA(Y, X, stand.Y=FALSE, stand.X=FALSE, nperm = 0, ...)
+
 \method{biplot}{CCorA}(x, xlabs, which = 1:2, ...)
 }
 
 \arguments{
-\item{Y}{ left matrix }
-\item{X}{ right matrix }
-\item{stand.Y}{ Y will be standardized if \code{TRUE} }
-\item{stand.X}{ X1 will be standardized if \code{TRUE} }
-\item{nperm}{Number of permutations to evaluate the significance of
-  Pillai's trace}
-\item{x}{\code{CCoaR} result object}
-\item{xlabs}{Row labels. The default is to use row names, \code{NULL}
-  uses row numbers instead, and \code{NA} suppresses plotting row names
-  completely}
-\item{which}{ \code{1} plots \code{Y} reseults, and
-  \code{2} plots \code{X1} results }
-\item{\dots}{Other parameters passed to functions. \code{biplot.CCorA} passes
-  graphical parameters to \code{\link{biplot}} and
-  \code{\link{biplot.default}}, \code{CCorA} currently ignores extra parameters.}
+  \item{Y}{ left matrix. }
+  \item{X}{ right matrix. }
+  \item{stand.Y}{ logical; should \code{Y} be standardized? }
+  \item{stand.X}{ logical; should \code{X} be standardized? }
+  \item{nperm}{ numeric; number of permutations to evaluate the
+    significance of Pillai's trace. }
+  \item{x}{\code{CCorA} result object. }
+  \item{xlabs}{Row labels. The default is to use row names, \code{NULL}
+    uses row numbers instead, and \code{NA} suppresses plotting row names
+    completely. }
+  \item{which}{ \code{1} plots \code{Y} results, and
+    \code{2} plots \code{X} results. }
+  \item{\dots}{Other arguments passed to functions. \code{biplot.CCorA}
+    passes graphical arguments to \code{\link{biplot}} and
+    \code{\link{biplot.default}}, \code{CCorA} currently ignores extra 
+    arguments.} 
 }
 
-\details{ Canonical correlation analysis (Hotelling 1936) seeks linear
-combinations of the variables of Y that are maximally correlated to
-linear combinations of the variables of X. The analysis estimates the
-relationships and displays them in graphs.
+\details{
+  Canonical correlation analysis (Hotelling 1936) seeks linear
+  combinations of the variables of \code{Y} that are maximally
+  correlated to linear combinations of the variables of \code{X}. The
+  analysis estimates the relationships and displays them in graphs.
 
-Algorithmic notes:
-\enumerate{
-  \item
-  All data matrices are replaced by their PCA object scores, computed by SVD.
-  \item
-  The blunt approach would be to read the three matrices, compute the
-  covariance matrices, then the matrix
-  \code{S12 \%*\% inv(S22) \%*\% t(S12) \%*\% inv(S11)}.
+  Algorithmic notes:
+  \enumerate{
+    \item
+    All data matrices are replaced by their PCA object scores, computed
+    by SVD.
+    \item
+    The blunt approach would be to read the three matrices, compute the
+    covariance matrices, then the matrix
+    \code{S12 \%*\% inv(S22) \%*\% t(S12) \%*\% inv(S11)}.
     Its trace is Pillai's trace statistic. 
-  \item
-  This approach may fail, however, when there is heavy multicollinearity in very sparse data matrices, as it is the case in 4th-corner inflated data matrices for example. The safe approach is to replace all data matrices by their PCA object scores.
-  \item
-  Inversion by \code{\link{solve}} is avoided. Computation of inverses
-  is done by SVD  (\code{\link{svd}})in most cases.
-  \item
-  Regression by OLS is also avoided. Regression residuals are
-  computed by QR decomposition (\code{\link{qr}}).
-}
+    \item
+    This approach may fail, however, when there is heavy multicollinearity
+    in very sparse data matrices, as is the case, for example, in
+    fourth-corner inflated data matrices. The safe approach is to replace
+    all data matrices by their PCA object scores.
+    \item
+    Inversion by \code{\link{solve}} is avoided. Computation of inverses
+    is done by \acronym{SVD} (\code{\link{svd}}) in most cases.
+    \item
+    Regression by \acronym{OLS} is also avoided. Regression residuals are
+    computed by \acronym{QR} decomposition (\code{\link{qr}}).
+  }
 
 The \code{biplot} function can produce two biplots, each for the left
 matrix and right matrix solutions. The function passes all arguments to
@@ -63,17 +71,22 @@
 }
 
 \value{
-Function \pkg{CCorA} returns a list containing the following results and matrices:
-\item{ Pillai }{ Pillai's trace statistic = sum of canonical eigenvalues. }
-\item{ EigenValues }{ Canonical eigenvalues. They are the squares of the canonical correlations. }
-\item{ CanCorr }{ Canonical correlations. }
-\item{ Mat.ranks }{ Ranks of matrices Y and X1 (possibly after controlling for X2). }
-\item{ RDA.Rsquares }{ Bimultivariate redundancy coefficients (R-squares) of RDAs of Y|X1 and X1|Y. }
-\item{ RDA.adj.Rsq }{ RDA.Rsquares adjusted for n and number of explanatory variables. }
-\item{ AA }{ Scores of Y variables in Y biplot. }
-\item{ BB }{ Scores of X1 variables in X1 biplot. }
-\item{ Cy }{ Object scores in Y biplot. }
-\item{ Cx }{ Object scores in X1 biplot. }
+  Function \code{CCorA} returns a list containing the following components:
+  \item{ Pillai }{ Pillai's trace statistic = sum of canonical
+    eigenvalues. } 
+  \item{ EigenValues }{ Canonical eigenvalues. They are the squares of the
+    canonical correlations. }
+  \item{ CanCorr }{ Canonical correlations. }
+  \item{ Mat.ranks }{ Ranks of matrices \code{Y} and \code{X1} (possibly
+    after controlling for \code{X2}). }
+  \item{ RDA.Rsquares }{ Bimultivariate redundancy coefficients
+    (R-squares) of RDAs of \code{Y|X1} and \code{X1|Y}. }
+  \item{ RDA.adj.Rsq }{ \code{RDA.Rsquares} adjusted for \eqn{n} and the
+    number of explanatory variables. }
+  \item{ AA }{ Scores of \code{Y} variables in the \code{Y} biplot. }
+  \item{ BB }{ Scores of \code{X1} variables in the \code{X1} biplot. }
+  \item{ Cy }{ Object scores in the \code{Y} biplot. }
+  \item{ Cx }{ Object scores in the \code{X1} biplot. }
 }
 
 \references{ 

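The SVD-based route described in the algorithmic notes above can be illustrated outside R. The following is a minimal numpy sketch, not vegan's `CCorA` itself: the helper name and the tolerance for dropping near-null singular directions are my own, but the idea it shows is the one the notes describe, i.e. replace both matrices by orthonormal object scores and avoid explicit inversion, then read the canonical correlations off one more SVD.

```python
import numpy as np

def ccora_sketch(Y, X):
    """Canonical correlations without explicit matrix inversion.

    Hypothetical helper, not vegan's CCorA: centre both matrices,
    replace each by an orthonormal basis from SVD (the 'PCA object
    scores' trick), and take the canonical correlations as the
    singular values of the cross-product of the two bases.
    """
    Y = Y - Y.mean(axis=0)
    X = X - X.mean(axis=0)

    def ortho(A):
        # Orthonormal object scores; drop near-zero singular directions
        # to survive multicollinear or very sparse inputs.
        U, s, _ = np.linalg.svd(A, full_matrices=False)
        return U[:, s > s.max() * 1e-10]

    Qy, Qx = ortho(Y), ortho(X)
    # Singular values of Qy' Qx are the canonical correlations.
    can_corr = np.linalg.svd(Qy.T @ Qx, compute_uv=False)
    eig = can_corr ** 2          # canonical eigenvalues
    pillai = eig.sum()           # Pillai's trace
    return can_corr, eig, pillai
```

This mirrors the documented return values: the eigenvalues are the squares of the canonical correlations, and Pillai's trace is their sum.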
Modified: pkg/man/cca.Rd
===================================================================
--- pkg/man/cca.Rd	2008-02-16 16:19:46 UTC (rev 218)
+++ pkg/man/cca.Rd	2008-02-16 16:22:40 UTC (rev 219)
@@ -6,8 +6,8 @@
 \alias{rda}
 \alias{rda.default}
 \alias{rda.formula}
+\concept{ordination}
 
-
 \title{ [Partial] [Constrained] Correspondence Analysis and Redundancy
   Analysis } 
 \description{
@@ -37,16 +37,16 @@
     Can be missing. }
   \item{Z}{ Conditioning matrix, the effect of which is removed
     (`partialled out') before next step. Can be missing.}
-  \item{scale}{Scale species to unit variance (like correlations do).}
-  \item{...}{Other parameters for \code{print} or \code{plot} functions
+  \item{scale}{Scale species to unit variance (like correlations).}
+  \item{...}{Other arguments for \code{print} or \code{plot} functions
     (ignored in other functions).}
 }
 \details{
-  Since their introduction (ter Braak 1986), constrained or canonical
-  correspondence analysis, and its spin-off, redundancy analysis have
+  Since their introduction (ter Braak 1986), constrained, or canonical,
+  correspondence analysis and its spin-off, redundancy analysis, have
   been the most popular ordination methods in community ecology.
   Functions \code{cca} and \code{rda} are  similar to popular
-  proprietary software \code{Canoco}, although implementation is
+  proprietary software \code{Canoco}, although the implementation is
   completely different.  The functions are based on Legendre &
   Legendre's (1998) algorithm: in \code{cca}
   Chi-square transformed data matrix is subjected to weighted linear
@@ -55,14 +55,17 @@
   decomposition (\code{\link{svd}}). Function \code{rda} is similar, but uses
   ordinary, unweighted linear regression and unweighted SVD.
 
-  The functions can be called either with matrix entries for community
+  The functions can be called either with matrix-like entries for community
   data and constraints, or with formula interface.  In general, the
   formula interface is preferred, because it allows a better control of
   the model and allows factor constraints.
 
-  In matrix interface, the
-  community data matrix \code{X} must be given, but any other data
-  matrix can be omitted, and the corresponding stage of analysis is
+  In the following sections, \code{X}, \code{Y} and \code{Z}, although
+  referred to as matrices, are more commonly data frames.
+
+  In the matrix interface, the
+  community data matrix \code{X} must be given, but the other data
+  matrices may be omitted, and the corresponding stage of analysis is
   skipped.  If matrix \code{Z} is supplied, its effects are removed from
   the community matrix, and the residual matrix is submitted to the next
   stage.  This is called `partial' correspondence or redundancy
@@ -85,9 +88,9 @@
   \code{\link{contrasts}} are honoured in \code{\link{factor}}
   variables.  The formula can include a special term \code{Condition}
   for conditioning variables (``covariables'') ``partialled out'' before
-  analysis.  So the following commands are equivalent: \code{cca(X, y,
-    z)}, \code{cca(X ~ y + Condition(z))}, where \code{y} and \code{z}
-  refer to single variable constraints and conditions.
+  analysis.  So the following commands are equivalent: \code{cca(X, Y,
+    Z)}, \code{cca(X ~ Y + Condition(Z))}, where \code{Y} and \code{Z}
+  refer to constraints and conditions matrices respectively.
 
   Constrained correspondence analysis is indeed a constrained method:
   CCA does not try to display all variation in the
@@ -106,15 +109,14 @@
   clear and strong \emph{a priori} hypotheses on constraints and is not
   interested in the major structure in the data set.  
 
-  CCA is able to correct a common
-  curve artefact in correspondence analysis by
-  forcing the configuration into linear constraints.  However, the curve
-  artefact can be avoided only with a low number of constraints that do
-  not have a curvilinear relation with each other.  The curve can reappear
-  even with two badly chosen constraints or a single factor.  Although
-  the formula
-  interface makes easy to include polynomial or interaction terms, such
-  terms often allow curve artefact (and are difficult to interpret), and
+  CCA is able to correct the curve artefact commonly found in
+  correspondence analysis by forcing the configuration into linear
+  constraints.  However, the curve artefact can be avoided only with a
+  low number of constraints that do not have a curvilinear relation with
+  each other.  The curve can reappear even with two badly chosen
+  constraints or a single factor.  Although the formula interface makes
+  it easy to include polynomial or interaction terms, such terms often
+  produce curved artefacts (and are difficult to interpret), and
   should probably be avoided.
 
   According to folklore, \code{rda} should be used with ``short
@@ -128,9 +130,9 @@
   the effect of some
   conditioning or ``background'' or ``random'' variables or
   ``covariables'' before CCA proper.  In fact, pCCA compares models
-  \code{cca(X ~ z)} and \code{cca(X ~ y + z)} and attributes their
-  difference to the effect of \code{y} cleansed of the effect of
-  \code{z}.  Some people have used the method for extracting
+  \code{cca(X ~ Z)} and \code{cca(X ~ Y + Z)} and attributes their
+  difference to the effect of \code{Y} cleansed of the effect of
+  \code{Z}.  Some people have used the method for extracting
   ``components of variance'' in CCA.  However, if the effect of
   variables together is stronger than sum of both separately, this can
   increase total Chi-square after ``partialling out'' some
@@ -148,7 +150,7 @@
 
   Function \code{rda} returns an object of class \code{rda} which
   inherits from class \code{cca} and is described in \code{\link{cca.object}}.
-  The scaling used in \code{rda} scores is desribed in a separate
+  The scaling used in \code{rda} scores is described in a separate
   vignette with this package.
 }
 \references{ The original method was by ter Braak, but the current
@@ -158,15 +160,15 @@
   ed. Elsevier.
 
   McCune, B. (1997) Influence of noisy environmental data on canonical
-  correspondence analysis. \emph{Ecology} 78, 2617-2623.
+  correspondence analysis. \emph{Ecology} \strong{78}, 2617--2623.
   
   Palmer, M. W. (1993) Putting things in even better order: The
-  advantages of canonical correspondence analysis.  \emph{Ecology} 74,
-  2215-2230. 
+  advantages of canonical correspondence analysis.  \emph{Ecology}
+  \strong{74}, 2215--2230.
   
   Ter Braak, C. J. F. (1986) Canonical Correspondence Analysis: a new
   eigenvector technique for multivariate direct gradient
-  analysis. \emph{Ecology} 67, 1167-1179.
+  analysis. \emph{Ecology} \strong{67}, 1167--1179.
   
 }
 \author{

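The algorithmic core described above for `cca` (Chi-square transformed data matrix subjected to SVD, after Legendre & Legendre 1998) can be sketched for the unconstrained case. This is a hedged illustration, not vegan's `cca`: it skips the weighted regression on constraints and computes plain correspondence-analysis eigenvalues; the function name is hypothetical.

```python
import numpy as np

def ca_eigenvalues(X):
    """Unconstrained correspondence analysis via SVD.

    Minimal sketch of the algorithmic core (no constraints, no
    conditioning): Chi-square transform the community matrix, then the
    CA eigenvalues are the squared singular values.
    """
    X = np.asarray(X, dtype=float)
    P = X / X.sum()
    r = P.sum(axis=1)            # row (site) weights
    c = P.sum(axis=0)            # column (species) weights
    # Chi-square standardized residuals
    Qbar = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    s = np.linalg.svd(Qbar, compute_uv=False)
    return s ** 2                # CA eigenvalues
```

A useful check on the sketch: the eigenvalues sum to the total inertia (the matrix Chi-square statistic divided by the grand total), and each eigenvalue is at most one.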
Modified: pkg/man/cca.object.Rd
===================================================================
--- pkg/man/cca.object.Rd	2008-02-16 16:19:46 UTC (rev 218)
+++ pkg/man/cca.object.Rd	2008-02-16 16:22:40 UTC (rev 219)
@@ -15,7 +15,7 @@
 
 \value{
   A \code{cca} object has the following elements:
-  \item{call }{function call.}
+  \item{call }{the function call.}
   \item{colsum, rowsum }{Column and row sums in \code{cca}.  In
     \code{rda}, item \code{colsum} contains standard deviations of
     species and \code{rowsum} is \code{NA}.}
@@ -42,23 +42,23 @@
     component.
     Items \code{pCCA}, \code{CCA} and \code{CA} have similar
     structure, and contain following items:
-    \item{alias}{The names of the aliased constraints or conditions.
+    \item{\code{alias}}{The names of the aliased constraints or conditions.
       Function \code{\link{alias.cca}} does not access this item
       directly, but it finds the aliased variables and their defining
-      equations from the item \code{QR}.}
-    \item{biplot}{Biplot scores of constraints.  Only in \code{CCA}.}
+      equations from the \code{QR} item.}
+    \item{\code{biplot}}{Biplot scores of constraints.  Only in \code{CCA}.}
     \item{centroids}{(Weighted) centroids of factor levels of
       constraints. Only in \code{CCA}. Missing if the ordination was not
     called with \code{formula}.}
-    \item{eig}{Eigenvalues of axes. In \code{CCA} and \code{CA}.}
-    \item{envcentre}{(Weighted) means of the original constraining or
+    \item{\code{eig}}{Eigenvalues of axes. In \code{CCA} and \code{CA}.}
+    \item{\code{envcentre}}{(Weighted) means of the original constraining or
       conditioning variables. In \code{pCCA} and in \code{CCA}.}
-    \item{Fit}{The fitted values of standardized data matrix after
+    \item{\code{Fit}}{The fitted values of standardized data matrix after
       fitting conditions. Only in \code{pCCA}.}
-    \item{QR}{The QR decomposition of explanatory variables as produced
+    \item{\code{QR}}{The QR decomposition of explanatory variables as produced
       by \code{\link{qr}}. 
       The constrained ordination 
-      algorithm is based on \code{QR} decomposition of constraints and
+      algorithm is based on QR decomposition of constraints and
       conditions (environmental data).  The environmental data
       are first centred in \code{rda} or weighted and centred in
       \code{cca}.  The QR decomposition is used in many functions that
@@ -69,33 +69,33 @@
       \code{\link{predict.cca}}, \code{\link{predict.rda}},
       \code{\link{calibrate.cca}}.  For possible uses of this component,
       see \code{\link{qr}}. In \code{pCCA} and \code{CCA}.} 
-    \item{rank}{The rank of the component.}
-    \item{tot.chi}{Total inertia or the sum of all eigenvalues of the
+    \item{\code{rank}}{The rank of the component.}
+    \item{\code{tot.chi}}{Total inertia or the sum of all eigenvalues of the
       component.}
-    \item{u}{(Weighted) orthonormal site scores.  Please note that
+    \item{\code{u}}{(Weighted) orthonormal site scores.  Please note that
       scaled scores are not stored in the \code{cca} object, but they
       are made when the object is accessed with functions like
       \code{\link{scores.cca}}, \code{\link{summary.cca}} or
       \code{\link{plot.cca}}, or their \code{rda} variants.   Only in
-      \code{CCA} and \code{CA}.  In \code{CCA} component these are the
-      so-called linear combination scores. }
-    \item{u.eig}{\code{u} scaled by eigenvalues.  There is no guarantee
-      that any \code{.eig} variants of scores will be kept in the future
-      releases.}
-    \item{v}{(Weighted) orthonormal species scores.  If missing species
+      \code{CCA} and \code{CA}.  In the \code{CCA} component these are
+      the so-called linear combination scores. }
+    \item{\code{u.eig}}{\code{u} scaled by eigenvalues.  There is no
+      guarantee that any \code{.eig} variants of scores will be kept in
+      the future releases.}
+    \item{\code{v}}{(Weighted) orthonormal species scores.  If missing species
       were omitted from the analysis, this will contain
       attribute \code{\link{na.action}} that lists the
       omitted species. Only in \code{CCA} and \code{CA}.}
-    \item{v.eig}{\code{v} weighted by eigenvalues.}
-    \item{wa}{Site scores found as weighted averages (\code{cca}) or
+    \item{\code{v.eig}}{\code{v} weighted by eigenvalues.}
+    \item{\code{wa}}{Site scores found as weighted averages (\code{cca}) or
       weighted sums (\code{rda}) of 
       \code{v} with weights \code{Xbar}, but the multiplying effect of
       eigenvalues  removed. These often are known as WA scores in
       \code{cca}. Only in  \code{CCA}.}
-    \item{wa.eig}{The direct result of weighted avaraging or weighted
+    \item{\code{wa.eig}}{The direct result of weighted averaging or weighted
       summation  (matrix multiplication)
       with the resulting eigenvalue inflation.}
-    \item{Xbar}{The standardized data matrix after previous stages of
+    \item{\code{Xbar}}{The standardized data matrix after previous stages of
       analysis. In \code{CCA} this is after possible \code{pCCA} or
       after partialling out the effects of conditions, and in \code{CA}
       after both \code{pCCA} and \code{CCA}. In \code{\link{cca}} the

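The `QR` component described above supports the "partialling out" of conditions: regression residuals are taken by projection through the QR decomposition rather than by OLS with an explicit inverse. A minimal sketch with a hypothetical helper, assuming the conditioning matrix has full column rank:

```python
import numpy as np

def qr_residuals(Y, Z):
    """Residuals of Y after removing the linear effect of Z, via QR.

    Hypothetical sketch of the partialling-out step: project the
    centred Y onto the column space of the centred conditioning
    matrix Z (assumed full column rank) and keep the residual.
    """
    Z = Z - Z.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Q, _ = np.linalg.qr(Z)       # reduced QR: Q has orthonormal columns
    fitted = Q @ (Q.T @ Y)       # projection onto span(Z)
    return Y - fitted
```

By construction the residual matrix is orthogonal to every centred conditioning variable, which is exactly what the next stage of the analysis requires.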
Modified: pkg/man/decorana.Rd
===================================================================
--- pkg/man/decorana.Rd	2008-02-16 16:19:46 UTC (rev 218)
+++ pkg/man/decorana.Rd	2008-02-16 16:22:40 UTC (rev 219)
@@ -8,6 +8,7 @@
 \alias{scores.decorana}
 \alias{points.decorana}
 \alias{text.decorana}
+\concept{ordination}
 
 \title{Detrended Correspondence Analysis and Basic Reciprocal Averaging }
 \description{
@@ -15,24 +16,32 @@
   averaging or orthogonal correspondence analysis.
 }
 \usage{
-decorana(veg, iweigh=0, iresc=4, ira=0, mk=26, short=0, before=NULL,
-         after=NULL)
+decorana(veg, iweigh=0, iresc=4, ira=0, mk=26, short=0,
+         before=NULL, after=NULL)
+
 \method{plot}{decorana}(x, choices=c(1,2), origin=TRUE,
      display=c("both","sites","species","none"),
-     cex = 0.8, cols = c(1,2), type, xlim, ylim,...)
-\method{text}{decorana}(x, display = c("sites", "species"), labels, choices = 1:2, 
-     origin = TRUE, select,  ...)  
-\method{points}{decorana}(x, display = c("sites", "species"), choices = 1:2, 
-     origin = TRUE, select, ...)  
+     cex = 0.8, cols = c(1,2), type, xlim, ylim, ...)
+
+\method{text}{decorana}(x, display = c("sites", "species"), labels,
+     choices = 1:2, origin = TRUE, select,  ...)
+
+\method{points}{decorana}(x, display = c("sites", "species"),
+       choices=1:2, origin = TRUE, select, ...)
+
 \method{summary}{decorana}(object, digits=3, origin=TRUE,
-     display=c("both", "species","sites","none"), ...)
+        display=c("both", "species","sites","none"), ...)
+
 \method{print}{summary.decorana}(x, head = NA, tail = head, ...)
+
 downweight(veg, fraction = 5)
-\method{scores}{decorana}(x, display=c("sites","species"), choices =1:4, origin=TRUE, ...)
+
+\method{scores}{decorana}(x, display=c("sites","species"), choices=1:4,
+       origin=TRUE, ...)
 }
 
 \arguments{
-  \item{veg}{Community data matrix. }
+  \item{veg}{Community data, a matrix-like object. }
   \item{iweigh}{Downweighting of rare species (0: no). }
   \item{iresc}{Number of rescaling cycles (0: no rescaling). }
   \item{ira}{Type of analysis (0: detrended, 1: basic reciprocal averaging). }
@@ -58,40 +67,39 @@
   \item{head, tail}{Number of rows printed from the head and tail of
     species and site scores. Default \code{NA} prints all.}
   \item{fraction}{Abundance fraction where downweighting begins.}
-  \item{...}{Other parameters for \code{plot} function.}
+  \item{...}{Other arguments for \code{plot} function.}
   }
 
 \details{
   In late 1970s, correspondence analysis became the method of choice for
-  ordination in vegetation science, since it seemed to be able to better cope
-  with non-linear species responses  than principal components
-  analysis.  However, even correspondence analysis produced arc-shaped
-  configuration of a single gradient.  Mark Hill developed
-  detrended correspondence analysis to correct two assumed `faults' in
+  ordination in vegetation science, since it seemed better able to cope 
+  with non-linear species responses than principal components
+  analysis. However, even correspondence analysis can produce an arc-shaped
+  configuration of a single gradient. Mark Hill developed detrended
+  correspondence analysis to correct two assumed `faults' in 
   correspondence analysis: curvature of straight gradients and packing
   of sites at the ends of the gradient.  
 
   The curvature is removed by replacing the orthogonalization of axes
-  with detrending.  In orthogonalization the successive axes are made
+  with detrending.  In orthogonalization successive axes are made
   non-correlated, but detrending should remove all systematic dependence
-  between axes.  Detrending is made using a five-segment smoothing
-  window with weights (1,2,3,2,1) on \code{mk} segments -- which indeed
+  between axes.  Detrending is performed using a five-segment smoothing
+  window with weights (1,2,3,2,1) on \code{mk} segments --- which indeed
   is more robust than the suggested alternative of detrending by
-  polynomials. The
-  packing of sites at the ends of the gradient is undone by rescaling
-  the axes after extraction.  After rescaling, the axis is supposed to be
-  scaled by `SD' units, so that the average width of Gaussian species
-  responses is supposed to be one over whole axis.  Other innovations
-  were the piecewise linear transformation of species abundances and
-  downweighting of rare species which were regarded to have an
-  unduly high influence on ordination axes.  
+  polynomials. The packing of sites at the ends of the gradient is
+  undone by rescaling the axes after extraction.  After rescaling, the
+  axis is scaled in `SD' units, so that the average width of the
+  Gaussian species responses is supposed to be one over the whole axis.
+  Other innovations were the piecewise linear transformation of species
+  abundances and the downweighting of rare species, which were regarded
+  as having an unduly high influence on ordination axes.
 
   It seems that detrending actually works by twisting the ordination
   space, so that the results look non-curved in two-dimensional projections
-  (`lolly paper effect').  As a result, the points have usually an
-  easily recognized triangle or diamond shaped pattern, obviously as a
-  detrending artefact.  Rescaling works differently than commonly
-  presented, too.  \code{Decorana} does not use, or even evaluate, the
+  (`lolly paper effect').  As a result, the points usually have an
+  easily recognized triangular or diamond-shaped pattern, obviously an
+  artefact of detrending.  Rescaling also works differently than
+  commonly presented. \code{decorana} does not use, or even evaluate, the
   widths of species responses.  Instead, it tries to equalize the
   weighted variance of species scores on axis segments (parameter
   \code{mk} has only a small effect, since \code{decorana} finds the
@@ -100,84 +108,84 @@
   model, where all species initially have unit width responses and
   equally spaced modes.
 
-  Function \code{summary} prints the ordination scores,
+  The \code{summary} method prints the ordination scores,
   possible prior weights used in downweighting, and the marginal totals
-  after applying these weights. Function \code{plot} plots
+  after applying these weights. The \code{plot} method plots
   species and site scores.  Classical \code{decorana} scaled the axes
   so that smallest site score was 0 (and smallest species score was
   negative), but \code{summary}, \code{plot} and
-  \code{scores}  use the true origin, unless \code{origin = FALSE}.
+  \code{scores} use the true origin, unless \code{origin = FALSE}.
 
   In addition to proper eigenvalues, the function also reports `decorana
-  values' in detrended analysis. These are the values that the legacy
-  code of \code{decorana} returns as `eigenvalues'.
-  They are estimated internally during
-  iteration, and it seems that detrending interferes the estimation
-  so that these values are generally too low and have unclear
-  interpretation. Moreover, `decorana values' are estimated before
-  rescaling which will change the eigenvalues. The
-  proper eigenvalues are estimated after extraction of the axes and
-  they are the ratio of biased weighted variances of site and
-  species scores even in detrended and rescaled solutions. The
+  values' in detrended analysis. These `decorana values' are the values
+  that the legacy code of \code{decorana} returns as `eigenvalues'.
+  They are estimated internally during iteration, and it seems that
+  detrending interferes with the estimation, so that these values are
+  generally too low and have an unclear interpretation. Moreover, `decorana
+  values' are estimated before rescaling which will change the
+  eigenvalues. The proper eigenvalues are estimated after extraction of
+  the axes and they are the ratio of biased weighted variances of site
+  and species scores even in detrended and rescaled solutions. The
  `decorana values' are provided only for compatibility with
   legacy software, and they should not be used.
 }
 \value{
-  Function returns an object of class \code{decorana}, which has
+  \code{decorana} returns an object of class \code{"decorana"}, which has
   \code{print}, \code{summary} and \code{plot} methods.
 }
 \references{
   Hill, M.O. and Gauch, H.G. (1980). Detrended correspondence analysis:
-  an improved ordination technique. \emph{Vegetatio} 42, 47--58.
+  an improved ordination technique. \emph{Vegetatio} \strong{42},
+  47--58.
 
   Oksanen, J. and Minchin, P.R. (1997). Instability of ordination
   results under changes in input data order: explanations and
-  remedies. \emph{Journal of Vegetation Science} 8, 447--454.
+  remedies. \emph{Journal of Vegetation Science} \strong{8}, 447--454.
 }
-\author{Mark O. Hill wrote the original Fortran code, \R port was by Jari
-  Oksanen.  }
+\author{Mark O. Hill wrote the original Fortran code, the \R port was by
+  Jari Oksanen. }
 \note{
-  Function \code{decorana} uses the central numerical engine of the
-  original Fortran code (which is in public domain), or about 1/3 of the
-  original program.  I have tried to implement the original behaviour,
-  although a great part of preparatory steps were written in \R
-  language, and may differ somewhat from the original code. However,
+  \code{decorana} uses the central numerical engine of the
+  original Fortran code (which is in the public domain), or about 1/3 of
+  the original program.  I have tried to implement the original
+  behaviour, although a great part of the preparatory steps was written
+  in the \R language and may differ somewhat from the original code. However,
   well-known bugs are corrected and strict criteria used (Oksanen &
-  Minchin 1997).
+  Minchin 1997). 
 
-  Please
-  note that there really is no need for piecewise transformation or even
-  downweighting within \code{decorana}, since there are more powerful
-  and extensive alternatives in \R, but these options are included for
-  compliance with the original software.  If different fraction of
-  abundance is needed in downweighting, function \code{downweight} must
-  be applied before \code{decorana}.  Function \code{downweight} 
-  indeed can be applied prior to correspondence analysis, and so it can be
-  used together with \code{\link{cca}}, too.
+  Please note that there really is no need for piecewise transformation
+  or even downweighting within \code{decorana}, since there are more
+  powerful and extensive alternatives in \R, but these options are
+  included for compliance with the original software.  If a different
+  fraction of abundance is needed in downweighting, function
+  \code{downweight} must be applied before \code{decorana}.  Function
+  \code{downweight} indeed can be applied prior to correspondence
+  analysis, and so it can be used together with \code{\link{cca}}, too.
 
   The function finds only four axes: this is not easily changed.
 }
 
+\seealso{
+  For unconstrained ordination, non-metric multidimensional scaling in
+  \code{\link[MASS]{isoMDS}} may be more robust (see also
+  \code{\link{metaMDS}}).  Constrained (or
+  `canonical') correspondence analysis can be made with
+  \code{\link{cca}}.  Orthogonal correspondence analysis can be
+  made with \code{\link[multiv]{ca}}, or with \code{decorana} or
+  \code{\link{cca}}, but the scaling of the results varies (the one in
+  \code{decorana} corresponds to \code{scaling = -1} in
+  \code{\link{cca}}).
+  See \code{\link{predict.decorana}} for adding new points to an
+  ordination.
+}
 
- \seealso{
-   For unconstrained ordination, non-metric multidimensional scaling in
-   \code{\link[MASS]{isoMDS}} may be more robust (see also
-   \code{\link{metaMDS}}).  Constrained (or
-   `canonical') correspondence analysis can be made with
-   \code{\link{cca}}.  Orthogonal correspondence analysis can be
-   made with \code{\link[multiv]{ca}}, or with \code{decorana} or
-   \code{\link{cca}}, but the scaling of results vary (and the one in
-   \code{decorana} correspondes to \code{scaling = -1} in
-   \code{\link{cca}}.).
-   See \code{\link{predict.decorana}} for adding new points to ordination. 
- }
-
 \examples{
 data(varespec)
 vare.dca <- decorana(varespec)
 vare.dca
 summary(vare.dca)
 plot(vare.dca)
+
 ### the detrending rationale:
 gaussresp <- function(x,u) exp(-(x-u)^2/2)
 x <- seq(0,6,length=15) ## The gradient
@@ -189,6 +197,7 @@
 plot(scores(decorana(pack, ira=1)), asp=1, type="b", main="CA")
 plot(scores(decorana(pack)), asp=1, type="b", main="DCA")
 plot(scores(cca(pack ~ x), dis="sites"), asp=1, type="b", main="CCA")
+
 ### Let's add some noise:
 noisy <- (0.5 + runif(length(pack)))*pack
 par(mfrow=c(2,1))
@@ -200,6 +209,6 @@
 plot(scores(decorana(noisy)), type="b", main="DCA", asp=1)
 plot(scores(cca(noisy ~ x), dis="sites"), asp=1, type="b", main="CCA")
 par(opar)
-  }
+}
 \keyword{ multivariate }
 

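The detrending step described in the decorana Details above, segment means along the first axis smoothed with the five-segment (1,2,3,2,1) window, can be caricatured as follows. This is a toy sketch, not Hill's Fortran algorithm: the binning, the edge handling, and the helper name are my own simplifications.

```python
import numpy as np

def detrend_by_segments(x1, x2, mk=26):
    """Detrend second-axis scores against the first axis by segments.

    Toy sketch of the idea only: bin sites into mk segments along
    axis 1, smooth the segment means of the axis-2 scores with the
    five-segment window (1,2,3,2,1), and subtract the smoothed trend
    from each site's axis-2 score.
    """
    edges = np.linspace(x1.min(), x1.max(), mk + 1)
    seg = np.clip(np.digitize(x1, edges) - 1, 0, mk - 1)
    means = np.zeros(mk)
    for k in range(mk):
        members = x2[seg == k]
        means[k] = members.mean() if members.size else 0.0
    # five-segment smoothing window with weights (1,2,3,2,1)
    w = np.array([1.0, 2.0, 3.0, 2.0, 1.0])
    padded = np.pad(means, 2, mode="edge")
    smooth = np.convolve(padded, w / w.sum(), mode="valid")
    return x2 - smooth[seg]
```

Run on an arch (the classic CA artefact, axis-2 scores a quadratic function of axis-1 scores), the sketch removes most of the systematic trend, which is the point of detrending.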
Modified: pkg/man/decostand.Rd
===================================================================
--- pkg/man/decostand.Rd	2008-02-16 16:19:46 UTC (rev 218)
+++ pkg/man/decostand.Rd	2008-02-16 16:22:40 UTC (rev 219)
@@ -8,14 +8,16 @@
 methods for community ecologists.
 }
 \usage{
-decostand(x, method, MARGIN, range.global, na.rm = FALSE)
+decostand(x, method, MARGIN, range.global, na.rm=FALSE)
+
 wisconsin(x)
 }
 
 \arguments{
-  \item{x}{Community data matrix.}
-  \item{method}{Standardization method.}
-  \item{MARGIN}{Margin, if default is not acceptable.}
+  \item{x}{Community data, a matrix-like object.}
+  \item{method}{Standardization method. See Details for available options.}
+  \item{MARGIN}{Margin, if default is not acceptable. \code{1} = rows,
+    and \code{2} = columns of \code{x}.}
   \item{range.global}{Matrix from which the range is found in
    \code{method = "range"}.  This allows using the same ranges across
     subsets of data.  The dimensions of \code{MARGIN} must match with
@@ -28,21 +30,21 @@
   \itemize{
     \item \code{total}: divide by margin total (default \code{MARGIN = 1}).
     \item \code{max}: divide by margin maximum (default \code{MARGIN = 2}).
-    \item \code{freq}: divide by margin maximum and multiply by number of
-    non-zero items, so that the average of non-zero entries is one
-    (Oksanen 1983; default \code{MARGIN = 2}).
+    \item \code{freq}: divide by margin total and multiply by the
+    number of non-zero items, so that the average of non-zero entries is
+    one (Oksanen 1983; default \code{MARGIN = 2}).
     \item \code{normalize}: make margin sum of squares equal to one (default
     \code{MARGIN = 1}).
     \item \code{range}: standardize values into range 0 \dots 1 (default
     \code{MARGIN = 2}).  If all values are constant, they will be
     transformed to 0.
-    \item \code{standardize}: scale into zero mean and unit variance
+    \item \code{standardize}: scale \code{x} to zero mean and unit variance
     (default \code{MARGIN = 2}).
-    \item \code{pa}: scale into presence/absence scale (0/1).
+    \item \code{pa}: scale \code{x} to presence/absence scale (0/1).
     \item \code{chi.square}: divide by row sums and square root of
     column sums, and adjust for square root of matrix total
-    (Legendre & Gallagher 2001). When used with Euclidean
-    distance, the matrix should be similar to the  the
+    (Legendre & Gallagher 2001). When used with the Euclidean
+    distance, the distances should be similar to the
     Chi-square distance used in correspondence analysis. However, the
     results from \code{\link{cmdscale}} would still differ, since
     CA is a weighted ordination method (default \code{MARGIN =
@@ -53,19 +55,18 @@
   Standardization, as contrasted to transformation, means that the
   entries are transformed relative to other entries.
 
-  All methods have a default margin.  \code{MARGIN=1} means rows (sites
-  in a
-  normal data set) and \code{MARGIN=2} means columns (species in a normal
-  data set).
+  All methods have a default margin. \code{MARGIN=1} means rows (sites
+  in a normal data set) and \code{MARGIN=2} means columns (species in a
+  normal data set).
 
   Command \code{wisconsin} is a shortcut to common Wisconsin double
   standardization where species (\code{MARGIN=2}) are first standardized
   by maxima (\code{max}) and then sites (\code{MARGIN=1}) by
   site totals (\code{tot}).
 
-  Most  standardization methods will give non-sense results with
+  Most standardization methods will give nonsense results with
   negative data entries that normally should not occur in the community
-  data.  If there are empty sites  or species (or constant with
+  data. If there are empty sites or species (or constant ones with
  \code{method = "range"}), many standardizations will change these into
   \code{NaN}.  
 }
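
The margin-wise standardizations described in the Details above reduce to simple arithmetic. The sketch below re-implements a few of them in plain Python to make the formulas concrete; it is an illustration only, not vegan's \code{decostand} itself. (Note that \code{freq} here divides by the margin total, which is what makes the average of the non-zero entries equal one.)

```python
from math import sqrt

def by_margin(x, margin, f):
    """Apply f to each row (margin = 1) or each column (margin = 2) of x."""
    if margin == 1:
        return [f(row) for row in x]
    cols = [f(list(col)) for col in zip(*x)]
    return [list(row) for row in zip(*cols)]

def total(vec):
    s = sum(vec)
    return [v / s for v in vec]            # margin now sums to one

def maximum(vec):
    m = max(vec)
    return [v / m for v in vec]            # margin maximum becomes one

def freq(vec):
    # divide by the margin total and multiply by the number of non-zero
    # items, so that the average of the non-zero entries is one
    s = sum(vec)
    nz = sum(1 for v in vec if v != 0)
    return [v / s * nz for v in vec]

def wisconsin(x):
    # double standardization: species (columns) by maxima,
    # then sites (rows) by totals
    return by_margin(by_margin(x, 2, maximum), 1, total)

def chi_square(x):
    # divide by row sums and the square root of column sums, then adjust
    # by the square root of the matrix total (Legendre & Gallagher 2001)
    tot = sum(sum(row) for row in x)
    rs = [sum(row) for row in x]
    cs = [sum(col) for col in zip(*x)]
    return [[sqrt(tot) * v / (rs[i] * sqrt(cs[j]))
             for j, v in enumerate(row)] for i, row in enumerate(x)]

x = [[1.0, 2.0, 0.0],
     [3.0, 4.0, 5.0]]
print([[round(v, 3) for v in row] for row in wisconsin(x)])
# -> [[0.4, 0.6, 0.0], [0.333, 0.333, 0.333]]
```

After the Wisconsin double standardization each row sums to one, which is the property the shortcut is meant to deliver.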
@@ -79,12 +80,12 @@
 
 \references{
   Legendre, P. & Gallagher, E.D. (2001) Ecologically meaningful
-  transformations for ordination of species data. \emph{Oecologia} 129:
-  271--280.
+  transformations for ordination of species data. \emph{Oecologia}
+  \strong{129}, 271--280.
 
   Oksanen, J. (1983) Ordination of boreal heath-like vegetation with
   principal component analysis, correspondence analysis and
-  multidimensional scaling. \emph{Vegetatio} 52, 181--189.
+  multidimensional scaling. \emph{Vegetatio} \strong{52}, 181--189.
   }
 
 \examples{
@@ -92,10 +93,10 @@
 sptrans <- decostand(varespec, "max")
 apply(sptrans, 2, max)
 sptrans <- wisconsin(varespec)
+
 # Chi-square: Similar but not identical to Correspondence Analysis.
 sptrans <- decostand(varespec, "chi.square")
 plot(procrustes(rda(sptrans), cca(varespec)))
 }
 \keyword{ multivariate}%-- one or more ...
 \keyword{ manip }
-


