[Rcpp-commits] r3316 - pkg/RcppEigen/vignettes
noreply at r-forge.r-project.org
Wed Nov 9 22:14:09 CET 2011
Author: dmbates
Date: 2011-11-09 22:14:08 +0100 (Wed, 09 Nov 2011)
New Revision: 3316
Modified:
pkg/RcppEigen/vignettes/RcppEigen-intro-jss.tex
Log:
Clean up code fragments and some phrasings.
Modified: pkg/RcppEigen/vignettes/RcppEigen-intro-jss.tex
===================================================================
--- pkg/RcppEigen/vignettes/RcppEigen-intro-jss.tex 2011-11-09 02:31:17 UTC (rev 3315)
+++ pkg/RcppEigen/vignettes/RcppEigen-intro-jss.tex 2011-11-09 21:14:08 UTC (rev 3316)
@@ -164,7 +164,7 @@
allowing for both high performance and well structured code
representing high-level operations. \proglang{C++} code based on Eigen
is often more like \proglang{R} code, working on the ``whole object'',
-than compiled code in other languages where operations often must be
+than like compiled code in other languages where operations often must be
coded in loops.
As in many \proglang{C++} template libraries using template meta-programming
@@ -305,8 +305,8 @@
one generally incorporates the \proglang{C++} source code files in a package
and includes the line \code{LinkingTo: Rcpp, RcppEigen} in the package's
\code{DESCRIPTION} file. The \code{RcppEigen.package.skeleton} function
-provides a quick way of generating the skeleton of a package using
-\pkg{RcppEigen} facilities.
+provides a quick way of generating the skeleton of a package that will use
+\pkg{RcppEigen}.
The \code{cxxfunction} with the \code{"Rcpp"} or \code{"RcppEigen"}
plugins has the \code{as} and \code{wrap} functions already defined as
@@ -331,7 +331,6 @@
[3,] 3 6
> str(A)
int [1:3, 1:2] 1 2 3 4 5 6
->
\end{verbatim}
and, in Figure~\ref{trans}, the \code{transpose()} method for the
\code{Eigen::MatrixXi} class is used to return the transpose of the supplied matrix. The \proglang{R}
@@ -356,15 +355,15 @@
\label{trans}
\end{figure}
-The next \proglang{R} snippet below compiles and links the \proglang{C++} code
+The \proglang{R} snippet below compiles and links the \proglang{C++} code
segment. The actual work is done by the function \code{cxxfunction} from the \pkg{inline}
-package which compiled, links and loads code written in \proglang{C++} and
-supplie as text variable. This frees the user from having to know about
+package which compiles, links and loads code written in \proglang{C++} and
+supplied as a character variable. This frees the user from having to know about
compiler and linker details and options, which makes ``exploratory
programming'' much easier. Here the program piece to be compiled is stored
-as text in a variable named \code{transCpp}), and \code{cxxfunction} createan
+as a character variable named \code{transCpp}, and \code{cxxfunction} creates
an executable function which is assigned to \code{ftrans}. This new function
-is used on the matrix $A$ used above, and one can check that it works as intended
+is used on the matrix $\bm A$ shown above, and one can check that it works as intended
by comparing the output to an explicit transpose of the matrix argument.
\begin{verbatim}
> ftrans <- cxxfunction(signature(AA="matrix"), transCpp, plugin="RcppEigen")
@@ -376,10 +375,10 @@
\end{verbatim}
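
For reference, a minimal sketch of the kind of fragment stored in
transCpp, assuming the "RcppEigen" plugin so that as<> and wrap are
available (names are illustrative, not the vignette's exact text):

    using Eigen::Map;
    using Eigen::MatrixXi;
    // Map the R matrix AA into an Eigen object without copying the data
    const Map<MatrixXi>  A(as<Map<MatrixXi> >(AA));
    const MatrixXi      At(A.transpose());
    return wrap(At);               // convert the result back to an R matrix
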
For numeric or integer matrices the \code{adjoint()} method is
-equivalent to the \code{transpose()} method. For complex matrices, the
-adjoint is the conjugate of the transpose. In keeping with the
-conventions in the \pkg{Eigen} documentation, in what follows,
-the \code{adjoint()} method is used to create the transpose of numeric or
+equivalent to the \code{transpose()} method. For complex matrices,
+the adjoint is the conjugate of the transpose. In keeping with the
+conventions in the \pkg{Eigen} documentation the \code{adjoint()}
+method is used in what follows to create the transpose of numeric or
integer matrices.
@@ -419,14 +418,10 @@
$ B %*% C : int [1:2, 1:3] 21 32 13 20 5 8
$ crossprod(B, C): int [1:2, 1:3] 16 38 10 24 4 10
> stopifnot(all.equal(fp[[1]], B %*% C), all.equal(fp[[2]], crossprod(B, C)))
->
\end{verbatim}
+Note that the \code{create} class method for \code{Rcpp::List}
+implicitly applies \code{Rcpp::wrap} to its arguments.
-Notice that the \code{create} method for the \pkg{Rcpp} class
-\code{List} implicitly applies \code{Rcpp::wrap} to its arguments.
-
-
-
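
A sketch of that implicit wrapping at work (BB and CC are hypothetical
mapped arguments; the usual Map/MatrixXi declarations are assumed):

    using Eigen::Map;
    using Eigen::MatrixXi;
    const Map<MatrixXi>  B(as<Map<MatrixXi> >(BB));
    const Map<MatrixXi>  C(as<Map<MatrixXi> >(CC));
    // each Eigen expression below is passed through Rcpp::wrap
    // by List::create before being returned to R
    return List::create(_["B %*% C"]         = B * C,
                        _["crossprod(B, C)"] = B.adjoint() * C);
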
\subsection{Crossproduct of a single matrix}
\label{sec:crossproduct}
@@ -443,10 +438,11 @@
The function \code{tcrossprod} evaluates \code{crossprod(t(X))}
without actually forming the transpose.
-To express these calculations in Eigen, a \code{SelfAdjointView} is created,
+To express these calculations in Eigen, one creates a \code{SelfAdjointView},
which is a dense matrix of which only one triangle is used, the other
-triangle being inferred from the symmetry. (``Self-adjoint'' is
-equivalent to symmetric for non-complex matrices.)
+triangle being inferred from the symmetry. (The
+characterization ``self-adjoint'' is equivalent to symmetric for
+non-complex matrices.)
The \pkg{Eigen} class name is \code{SelfAdjointView}. The method for
general matrices that produces such a view is called
@@ -493,7 +489,6 @@
$ tcrossprod(A): int [1:3, 1:3] 17 22 27 22 29 36 27 36 45
> stopifnot(all.equal(crp[[1]], crossprod(A)),
+ all.equal(crp[[2]], tcrossprod(A)))
->
\end{verbatim}
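
The rankUpdate approach described above, sketched for an integer
matrix (only one triangle is computed; copying into a plain matrix
fills in the other by symmetry):

    using Eigen::Lower;
    using Eigen::Map;
    using Eigen::MatrixXi;
    const Map<MatrixXi> A(as<Map<MatrixXi> >(AA));
    const int           m(A.rows()), n(A.cols());
    // A'A accumulated in the lower triangle by a rank update
    const MatrixXi    AtA(MatrixXi(n, n).setZero().
                          selfadjointView<Lower>().rankUpdate(A.adjoint()));
    // AA' computed the same way
    const MatrixXi    AAt(MatrixXi(m, m).setZero().
                          selfadjointView<Lower>().rankUpdate(A));
    return List::create(_["crossprod(A)"]  = AtA,
                        _["tcrossprod(A)"] = AAt);
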
To some, the expressions in Figure~\ref{crossprod} to construct
@@ -512,12 +507,12 @@
symmetrized before it is returned.
To cut down on repetition of \code{using} statements we gather them in
-a character string that will be given as the \code{includes} argument
+a character variable, \code{incl}, that will be given as the \code{includes} argument
in the calls to \code{cxxfunction}. We also define a utility
-function, \code{AtA}, that returns the crossproduct matrix.
+function, \code{AtA}, that returns the crossproduct matrix, as shown in Figure~\ref{fig:incl}.
-
-\begin{quote}
+\begin{figure}[htb]
+ %\begin{quote}
\noindent
\ttfamily
\hlstd{}\hlkwa{using}\hlstd{\ \ \ }\hlkwa{}\hlstd{Eigen}\hlopt{::}\hlstd{LLT}\hlopt{;}\hspace*{\fill}\\
@@ -538,9 +533,11 @@
\mbox{}
\normalfont
\normalsize
-\end{quote}
+%\end{quote}
+ \caption{The contents of the character vector, \code{incl}, that will preface \proglang{C++} code segments that follow.}
+ \label{fig:incl}
+\end{figure}
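
The AtA utility itself is short; a sketch consistent with the
declarations in incl (MapMatd is assumed to be a typedef there for
Map<MatrixXd>):

    inline MatrixXd AtA(const MapMatd& A) {
        int n(A.cols());
        // form A'A in the lower triangle via a rank update, then let
        // the dense MatrixXd return value symmetrize the result
        return MatrixXd(n, n).setZero().selfadjointView<Lower>()
                             .rankUpdate(A.adjoint());
    }
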
-
\subsection{Cholesky decomposition of the crossprod}
\label{sec:chol}
@@ -555,18 +552,11 @@
R$ is simply the transpose of $\bm L$ from the ``LLt'' form.
The templated \pkg{Eigen} classes for the LLt and LDLt forms are
-called \code{LLT} and \code{LDLT}. In general, one would preserve the
-objects from these classes in order to re-use them for solutions of
-linear systems. For a simple illustration, the matrix $\bm L$
-from the ``LLt'' form is returned.
-
-Because the Cholesky decomposition involves taking square roots, the internal
-representation is switched to numeric matrices
-\begin{verbatim}
+called \code{LLT} and \code{LDLT} and the corresponding methods are
+\code{.llt()} and \code{.ldlt()}.
+Because the Cholesky decomposition involves taking square roots, we
+pass a numeric matrix, $\bm A$, not an integer matrix.
-storage.mode(A) <- "double"
-\end{verbatim}
-before applying the code in Figure~\ref{chol}.
-
\begin{figure}[htb]
%\begin{quote}
\noindent
@@ -583,6 +573,7 @@
\end{figure}
\begin{verbatim}
+> storage.mode(A) <- "double"
> fchol <- cxxfunction(signature(AA = "matrix"), cholCpp, "RcppEigen", incl)
> (ll <- fchol(A))
$L
@@ -596,7 +587,6 @@
[2,] 0.00000 1.96396
> stopifnot(all.equal(ll[[2]], chol(crossprod(A))))
->
\end{verbatim}
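
A sketch of what the cholCpp fragment contains, given the LLT class
and the AtA utility from incl (the list element names are assumptions
based on the output shown above):

    const LLT<MatrixXd> llt(AtA(as<MapMatd>(AA)));
    // matrixL() is the lower triangular Cholesky factor L; matrixU()
    // is its transpose, matching R's chol(crossprod(A))
    return List::create(_["L"] = MatrixXd(llt.matrixL()),
                        _["R"] = MatrixXd(llt.matrixU()));
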
@@ -618,32 +608,24 @@
Alternatively, if using the ``LDLt'' decomposition, $\bm L\bm D\bm
L^\prime=\bm X^\prime\bm X$ where $\bm L$ is unit lower triangular and
$\bm D$ is diagonal then $|\bm X^\prime\bm X|$ is the product of the
-diagonal elements of $\bm D$. Because it is known that the diagonals of
-$\bm D$ must be non-negative, one often evaluates the logarithm of the
-determinant as the sum of the logarithms of the diagonal elements of
-$\bm D$. Several options are shown in Figure~\ref{cholDet}.
+diagonal elements of $\bm D$. Because it is known that the diagonal
+elements of $\bm D$ must be non-negative, one often evaluates the
+logarithm of the determinant as the sum of the logarithms of the
+diagonal elements of $\bm D$. Several options are shown in
+Figure~\ref{cholDet}.
-
\begin{figure}[htb]
%\begin{quote}
\noindent
\ttfamily
- \hlstd{}\hlkwa{using\ }\hlstd{Eigen}\hlopt{::}\hlstd{Lower}\hlopt{;}\hspace*{\fill}\\
- \hlstd{}\hlkwa{using\ }\hlstd{Eigen}\hlopt{::}\hlstd{Map}\hlopt{;}\hspace*{\fill}\\
- \hlstd{}\hlkwa{using\ }\hlstd{Eigen}\hlopt{::}\hlstd{MatrixXd}\hlopt{;}\hspace*{\fill}\\
- \hlstd{}\hlkwa{using\ }\hlstd{Eigen}\hlopt{::}\hlstd{VectorXd}\hlopt{;}\hspace*{\fill}\\
- \hlstd{}\hspace*{\fill}\\
- \hlkwb{const\ }\hlstd{Map}\hlopt{$<$}\hlstd{MatrixXd}\hlopt{$>$}\hlstd{\ \ \ }\hlopt{}\hlstd{}\hlkwd{A}\hlstd{}\hlopt{(}\hlstd{as}\hlopt{$<$}\hlstd{Map}\hlopt{$<$}\hlstd{MatrixXd}\hlopt{$>$\ $>$(}\hlstd{AA}\hlopt{));}\hspace*{\fill}\\
- \hlstd{}\hlkwb{const\ int}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ }\hlkwb{}\hlstd{}\hlkwd{n}\hlstd{}\hlopt{(}\hlstd{A}\hlopt{.}\hlstd{}\hlkwd{cols}\hlstd{}\hlopt{());}\hspace*{\fill}\\
- \hlstd{}\hlkwb{const\ }\hlstd{MatrixXd}\hlstd{\ \ \ \ \ \ }\hlstd{}\hlkwd{AtA}\hlstd{}\hlopt{(}\hlstd{}\hlkwd{MatrixXd}\hlstd{}\hlopt{(}\hlstd{n}\hlopt{,\ }\hlstd{n}\hlopt{).}\hlstd{}\hlkwd{setZero}\hlstd{}\hlopt{().}\hspace*{\fill}\\
- \hlstd{}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\hlstd{selfadjointView}\hlopt{$<$}\hlstd{Lower}\hlopt{$>$().}\hlstd{}\hlkwd{rankUpdate}\hlstd{}\hlopt{(}\hlstd{A}\hlopt{.}\hlstd{}\hlkwd{adjoint}\hlstd{}\hlopt{()));}\hspace*{\fill}\\
- \hlstd{}\hlkwb{const\ }\hlstd{MatrixXd}\hlstd{\ \ \ \ \ }\hlstd{}\hlkwd{Lmat}\hlstd{}\hlopt{(}\hlstd{AtA}\hlopt{.}\hlstd{}\hlkwd{llt}\hlstd{}\hlopt{().}\hlstd{}\hlkwd{matrixL}\hlstd{}\hlopt{());}\hspace*{\fill}\\
+ \hlstd{}\hlkwb{const\ }\hlstd{MatrixXd}\hlstd{\ \ \ \ \ \ }\hlstd{}\hlkwd{ata}\hlstd{}\hlopt{(}\hlstd{}\hlkwd{AtA}\hlstd{}\hlopt{(}\hlstd{as}\hlopt{$<$}\hlstd{MapMatd}\hlopt{$>$(}\hlstd{AA}\hlopt{)));}\hspace*{\fill}\\
+ \hlstd{}\hlkwb{const\ }\hlstd{MatrixXd}\hlstd{\ \ \ \ \ }\hlstd{}\hlkwd{Lmat}\hlstd{}\hlopt{(}\hlstd{ata}\hlopt{.}\hlstd{}\hlkwd{llt}\hlstd{}\hlopt{().}\hlstd{}\hlkwd{matrixL}\hlstd{}\hlopt{());}\hspace*{\fill}\\
\hlstd{}\hlkwb{const\ double}\hlstd{\ \ \ \ \ \ \ }\hlkwb{}\hlstd{}\hlkwd{detL}\hlstd{}\hlopt{(}\hlstd{Lmat}\hlopt{.}\hlstd{}\hlkwd{diagonal}\hlstd{}\hlopt{().}\hlstd{}\hlkwd{prod}\hlstd{}\hlopt{());}\hspace*{\fill}\\
- \hlstd{}\hlkwb{const\ }\hlstd{VectorXd}\hlstd{\ \ \ \ \ }\hlstd{}\hlkwd{Dvec}\hlstd{}\hlopt{(}\hlstd{AtA}\hlopt{.}\hlstd{}\hlkwd{ldlt}\hlstd{}\hlopt{().}\hlstd{}\hlkwd{vectorD}\hlstd{}\hlopt{());}\hspace*{\fill}\\
- \hlstd{}\hspace*{\fill}\\
- \hlkwa{return\ }\hlstd{List}\hlopt{::}\hlstd{}\hlkwd{create}\hlstd{}\hlopt{(}\hlstd{\textunderscore }\hlopt{{[}}\hlstd{}\hlstr{"d1"}\hlstd{}\hlopt{{]}\ =\ }\hlstd{detL\ }\hlopt{{*}\ }\hlstd{detL}\hlopt{,}\hspace*{\fill}\\
+ \hlstd{}\hlkwb{const\ }\hlstd{VectorXd}\hlstd{\ \ \ \ \ }\hlstd{}\hlkwd{Dvec}\hlstd{}\hlopt{(}\hlstd{ata}\hlopt{.}\hlstd{}\hlkwd{ldlt}\hlstd{}\hlopt{().}\hlstd{}\hlkwd{vectorD}\hlstd{}\hlopt{());}\hspace*{\fill}\\
+ \hlstd{}\hlkwa{return\ }\hlstd{List}\hlopt{::}\hlstd{}\hlkwd{create}\hlstd{}\hlopt{(}\hlstd{\textunderscore }\hlopt{{[}}\hlstd{}\hlstr{"d1"}\hlstd{}\hlopt{{]}\ =\ }\hlstd{detL\ }\hlopt{{*}\ }\hlstd{detL}\hlopt{,}\hspace*{\fill}\\
\hlstd{}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\hlstd{\textunderscore }\hlopt{{[}}\hlstd{}\hlstr{"d2"}\hlstd{}\hlopt{{]}\ =\ }\hlstd{Dvec}\hlopt{.}\hlstd{}\hlkwd{prod}\hlstd{}\hlopt{(),}\hspace*{\fill}\\
- \hlstd{}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\hlstd{\textunderscore }\hlopt{{[}}\hlstd{}\hlstr{"ld"}\hlstd{}\hlopt{{]}\ =\ }\hlstd{Dvec}\hlopt{.}\hlstd{}\hlkwd{array}\hlstd{}\hlopt{().}\hlstd{}\hlkwd{log}\hlstd{}\hlopt{().}\hlstd{}\hlkwd{sum}\hlstd{}\hlopt{());}\hlstd{}\hspace*{\fill}
+ \hlstd{}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\hlstd{\textunderscore }\hlopt{{[}}\hlstd{}\hlstr{"ld"}\hlstd{}\hlopt{{]}\ =\ }\hlstd{Dvec}\hlopt{.}\hlstd{}\hlkwd{array}\hlstd{}\hlopt{().}\hlstd{}\hlkwd{log}\hlstd{}\hlopt{().}\hlstd{}\hlkwd{sum}\hlstd{}\hlopt{());}\hlstd{}\hspace*{\fill}\\
+ \mbox{}
\normalfont
\normalsize
%\end{quote}
@@ -661,7 +643,7 @@
Note the use of the \code{array()} method in the calculation of the
log-determinant. Because the \code{log()} method applies to arrays,
-not to vectors or matrices, an array from \code{Dvec} has to be created
+not to vectors or matrices, an array must be created from \code{Dvec}
before applying the \code{log()} method.
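
In code, the distinction is a single method call switching from the
vector to the array "world" (ata as in the figure above):

    const VectorXd Dvec(ata.ldlt().vectorD());
    // log() is defined for arrays, not vectors, hence .array()
    const double     ld(Dvec.array().log().sum());
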
\section{Least squares solutions}
@@ -761,8 +743,7 @@
> for (nm in c("coefficients", "residuals", "fitted.values", "rank"))
+ stopifnot(all.equal(lltFit[[nm]], unname(lmFit[[nm]])))
> stopifnot(all.equal(lltFit[["Std. Error"]],
-+ unname(coef(summary(lm(log(Volume) ~ log(Girth), trees)))[,2])))
->
++ unname(coef(summary(lm(log(Volume) ~ log(Girth), trees)))[,2])))
\end{verbatim}
@@ -772,7 +753,7 @@
X\right)^{-1}\bm X^\prime\bm y$ but without actually evaluating the
inverse. The calculation of the residuals, $\bm y-\widehat{\bm y}$,
can be written, as in \proglang{R}, as \code{y - fitted}. (But note
-that \pkg{Eigen} classes do not have a ``recycling rule as in
+that \pkg{Eigen} classes do not have a ``recycling rule'' as in
\proglang{R}. That is, the two vector operands must have the same
length.) The \code{norm()} method evaluates the square root of the
sum of squares of the elements of a vector. Although one does not
@@ -818,13 +799,11 @@
% using Eigen::HouseholderQR;
% const HouseholderQR<MatrixXd> QR(X);
-% const VectorXd betahat(QR.solve(y));
-% const VectorXd fitted(X * betahat);
-% const int df(n - p);
-% const VectorXd se(QR.matrixQR().topRows(p).
-% triangularView<Upper>().
-% solve(MatrixXd::Identity(p,p)).
-% rowwise().norm());
+% const VectorXd betahat(QR.solve(y));
+% const VectorXd fitted(X * betahat);
+% const int df(n - p);
+% const VectorXd se(QR.matrixQR().topRows(p).triangularView<Upper>()
+% .solve(MatrixXd::Identity(p,p)).rowwise().norm());
\begin{figure}[htb]
%\begin{quote}
\noindent
@@ -832,13 +811,12 @@
\hlstd{}\hlkwa{using\ }\hlstd{Eigen}\hlopt{::}\hlstd{HouseholderQR}\hlopt{;}\hspace*{\fill}\\
\hlstd{}\hspace*{\fill}\\
\hlkwb{const\ }\hlstd{HouseholderQR}\hlopt{$<$}\hlstd{MatrixXd}\hlopt{$>$\ }\hlstd{}\hlkwd{QR}\hlstd{}\hlopt{(}\hlstd{X}\hlopt{);}\hspace*{\fill}\\
- \hlstd{}\hlkwb{const\ }\hlstd{VectorXd}\hlstd{\ \ \ \ \ \ \ \ \ \ \ }\hlstd{}\hlkwd{betahat}\hlstd{}\hlopt{(}\hlstd{QR}\hlopt{.}\hlstd{}\hlkwd{solve}\hlstd{}\hlopt{(}\hlstd{y}\hlopt{));}\hspace*{\fill}\\
- \hlstd{}\hlkwb{const\ }\hlstd{VectorXd}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ }\hlstd{}\hlkwd{fitted}\hlstd{}\hlopt{(}\hlstd{X\ }\hlopt{{*}\ }\hlstd{betahat}\hlopt{);}\hspace*{\fill}\\
- \hlstd{}\hlkwb{const\ int}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\hlkwb{}\hlstd{}\hlkwd{df}\hlstd{}\hlopt{(}\hlstd{n\ }\hlopt{{-}\ }\hlstd{p}\hlopt{);}\hspace*{\fill}\\
- \hlstd{}\hlkwb{const\ }\hlstd{VectorXd}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\hlstd{}\hlkwd{se}\hlstd{}\hlopt{(}\hlstd{QR}\hlopt{.}\hlstd{}\hlkwd{matrixQR}\hlstd{}\hlopt{().}\hlstd{}\hlkwd{topRows}\hlstd{}\hlopt{(}\hlstd{p}\hlopt{).}\hspace*{\fill}\\
- \hlstd{}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\hlstd{triangularView}\hlopt{$<$}\hlstd{Upper}\hlopt{$>$().}\hspace*{\fill}\\
- \hlstd{}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\hlstd{}\hlkwd{solve}\hlstd{}\hlopt{(}\hlstd{MatrixXd}\hlopt{::}\hlstd{}\hlkwd{Identity}\hlstd{}\hlopt{(}\hlstd{p}\hlopt{,}\hlstd{p}\hlopt{)).}\hspace*{\fill}\\
- \hlstd{}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\hlstd{}\hlkwd{rowwise}\hlstd{}\hlopt{().}\hlstd{}\hlkwd{norm}\hlstd{}\hlopt{());}\hlstd{}\hspace*{\fill}
+ \hlstd{}\hlkwb{const\ }\hlstd{VectorXd\ }\hlkwd{betahat}\hlstd{}\hlopt{(}\hlstd{QR}\hlopt{.}\hlstd{}\hlkwd{solve}\hlstd{}\hlopt{(}\hlstd{y}\hlopt{));}\hspace*{\fill}\\
+ \hlstd{}\hlkwb{const\ }\hlstd{VectorXd}\hlstd{\ \ }\hlstd{}\hlkwd{fitted}\hlstd{}\hlopt{(}\hlstd{X\ }\hlopt{{*}\ }\hlstd{betahat}\hlopt{);}\hspace*{\fill}\\
+ \hlstd{}\hlkwb{const\ int}\hlstd{\ \ \ \ \ \ \ \ \ \ \ }\hlkwb{}\hlstd{}\hlkwd{df}\hlstd{}\hlopt{(}\hlstd{n\ }\hlopt{{-}\ }\hlstd{p}\hlopt{);}\hspace*{\fill}\\
+ \hlstd{}\hlkwb{const\ }\hlstd{VectorXd}\hlstd{\ \ \ \ \ \ }\hlstd{}\hlkwd{se}\hlstd{}\hlopt{(}\hlstd{QR}\hlopt{.}\hlstd{}\hlkwd{matrixQR}\hlstd{}\hlopt{().}\hlstd{}\hlkwd{topRows}\hlstd{}\hlopt{(}\hlstd{p}\hlopt{).}\hlstd{triangularView}\hlopt{$<$}\hlstd{Upper}\hlopt{$>$()}\hspace*{\fill}\\
+ \hlstd{}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\hlstd{}\hlopt{.}\hlstd{}\hlkwd{solve}\hlstd{}\hlopt{(}\hlstd{MatrixXd}\hlopt{::}\hlstd{}\hlkwd{Identity}\hlstd{}\hlopt{(}\hlstd{p}\hlopt{,}\hlstd{p}\hlopt{)).}\hlstd{}\hlkwd{rowwise}\hlstd{}\hlopt{().}\hlstd{}\hlkwd{norm}\hlstd{}\hlopt{());}\hlstd{}\hspace*{\fill}\\
+ \mbox{}
\normalfont
\normalsize
\caption{\textbf{QRLSCpp}: Least squares using the unpivoted QR decomposition}
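
In plain form (cf. the commented lines above; Upper is assumed
declared in incl) the unpivoted QR fragment reads:

    using Eigen::HouseholderQR;

    const HouseholderQR<MatrixXd> QR(X);
    const VectorXd           betahat(QR.solve(y));   // minimizes ||y - X b||
    const VectorXd            fitted(X * betahat);
    const int                     df(n - p);
    // row norms of the inverse of the triangular factor R:
    // the unscaled coefficient standard errors
    const VectorXd                se(QR.matrixQR().topRows(p).triangularView<Upper>()
                                       .solve(MatrixXd::Identity(p, p)).rowwise().norm());
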
@@ -886,9 +864,9 @@
b 2 2 2 2
c 2 2 2 2
> mm <- model.matrix(~ f1 * f2, dd)
-> kappa(mm) # large condition number, indicating rank deficiency
+> kappa(mm) # large condition number, indicating rank deficiency
[1] 4.30923e+16
-> rcond(mm) # alternative evaluation, the reciprocal condition number
+> rcond(mm) # alternative evaluation, the reciprocal condition number
[1] 2.3206e-17
> (c(rank=qr(mm)$rank, p=ncol(mm))) # rank as computed in R's qr function
rank p
@@ -911,7 +889,6 @@
f1B:f2c NA NA NA NA
f1C:f2c 10.9613 0.1163 94.2 < 2e-16
f1D:f2c 12.0411 0.1163 103.5 < 2e-16
->
\end{verbatim}
The \code{lm} function for fitting linear models in \proglang{R} uses
@@ -933,13 +910,13 @@
where $\bm U$ is an orthogonal $n\times n$ matrix and $\bm U_1$ is its
leftmost $p$ columns, $\bm D$ is $n\times p$ and zero off the main
diagonal so that $\bm D_1$ is a $p\times p$ diagonal matrix with
-non-decreasing non-negative diagonal elements, and $\bm V$ is a $p\times
+non-increasing, non-negative diagonal elements, and $\bm V$ is a $p\times
p$ orthogonal matrix. The pseudo-inverse of $\bm D_1$, written $\bm
D_1^+$, is a $p\times p$ diagonal matrix whose first $r=\rank(\bm X)$
diagonal elements are the inverses of the corresponding diagonal
elements of $\bm D_1$ and whose last $p-r$ diagonal elements are zero.
-The tolerance for determining if an element of the diagonal of $\bm D$
+The tolerance for determining if an element of the diagonal of $\bm D_1$
is considered to be (effectively) zero is a multiple of the largest
singular value (i.e. the $(1,1)$ element of $\bm D$).
@@ -950,7 +927,7 @@
In Figure~\ref{Dplus} a utility function, \code{Dplus}, is defined to
return the diagonal of the pseudo-inverse, $\bm D_1^+$, as an array,
-given the singular values (the diagonal of $\bm D$) as an array.
+given the singular values (the diagonal of $\bm D_1$) as an array.
\begin{figure}[htb]
%\begin{quote}
@@ -966,7 +943,7 @@
\normalfont
\normalsize
%\end{quote}
- \caption{\textbf{plusCpp}: Create the diagonal $\bm di$ of the pseudo-inverse, $\bm D_1^+$, from the array of singular values, $\bm d$}
+ \caption{\textbf{DplusCpp}: Create the diagonal $\bm d^+$ of the pseudo-inverse, $\bm D_1^+$, from the array of singular values, $\bm d$}
\label{Dplus}
\end{figure}
% inline ArrayXd Dplus(const ArrayXd& d) {
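
A sketch of Dplus as described (d holds the singular values;
threshold() is the numerical-zero helper mentioned in the text):

    inline ArrayXd Dplus(const ArrayXd& d) {
        ArrayXd   di(d.size());
        double  comp(d.maxCoeff() * threshold());
        // zero out the "effectively zero" values, invert the rest
        for (int j = 0; j < d.size(); ++j)
            di[j] = (d[j] < comp) ? 0. : 1./d[j];
        return di;
    }
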
@@ -978,7 +955,7 @@
Calculation of the maximum element of $\bm d$ (the method is called
\code{.maxCoeff()}) and the use of a \code{threshold()} function
-provides greater generality of the function. It can be used on the
+provide greater generality for the function. It can be used on the
eigenvalues of $\bm X^\prime\bm X$, as shown in
Sec.~\ref{sec:eigendecomp}, even though these are returned in
increasing order, as opposed to the decreasing order of the singular
@@ -993,16 +970,15 @@
%\begin{quote}
\noindent
\ttfamily
- \hlstd{}\hlkwa{using\ }\hlstd{Eigen}\hlopt{::}\hlstd{JacobiSVD}\hlopt{;}\hspace*{\fill}\\
- \hlstd{}\hspace*{\fill}\\
- \hlkwb{const\ }\hlstd{JacobiSVD}\hlopt{$<$}\hlstd{MatrixXd}\hlopt{$>$}\hspace*{\fill}\\
- \hlstd{}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\hlstd{}\hlkwd{UDV}\hlstd{}\hlopt{(}\hlstd{X}\hlopt{.}\hlstd{}\hlkwd{jacobiSvd}\hlstd{}\hlopt{(}\hlstd{Eigen}\hlopt{::}\hlstd{ComputeThinU}\hlopt{\textbar }\hlstd{Eigen}\hlopt{::}\hlstd{ComputeThinV}\hlopt{));}\hspace*{\fill}\\
- \hlstd{}\hlkwb{const\ }\hlstd{ArrayXd}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\hlstd{}\hlkwd{D}\hlstd{}\hlopt{(}\hlstd{UDV}\hlopt{.}\hlstd{}\hlkwd{singularValues}\hlstd{}\hlopt{());}\hspace*{\fill}\\
- \hlstd{}\hlkwb{const\ int}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\hlkwb{}\hlstd{}\hlkwd{r}\hlstd{}\hlopt{((}\hlstd{D\ }\hlopt{$>$\ }\hlstd{D}\hlopt{{[}}\hlstd{}\hlnum{0}\hlstd{}\hlopt{{]}\ {*}\ }\hlstd{}\hlkwd{threshold}\hlstd{}\hlopt{()).}\hlstd{}\hlkwd{count}\hlstd{}\hlopt{());}\hspace*{\fill}\\
- \hlstd{}\hlkwb{const\ }\hlstd{MatrixXd}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ }\hlstd{}\hlkwd{VDp}\hlstd{}\hlopt{(}\hlstd{UDV}\hlopt{.}\hlstd{}\hlkwd{matrixV}\hlstd{}\hlopt{()\ {*}\ }\hlstd{}\hlkwd{Dplus}\hlstd{}\hlopt{(}\hlstd{D}\hlopt{,\ }\hlstd{r}\hlopt{));}\hspace*{\fill}\\
- \hlstd{}\hlkwb{const\ }\hlstd{VectorXd}\hlstd{\ \ \ \ \ \ \ \ }\hlstd{}\hlkwd{betahat}\hlstd{}\hlopt{(}\hlstd{VDp\ }\hlopt{{*}\ }\hlstd{UDV}\hlopt{.}\hlstd{}\hlkwd{matrixU}\hlstd{}\hlopt{().}\hlstd{}\hlkwd{adjoint}\hlstd{}\hlopt{()\ {*}\ }\hlstd{y}\hlopt{);}\hspace*{\fill}\\
- \hlstd{}\hlkwb{const\ int}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\hlkwb{}\hlstd{}\hlkwd{df}\hlstd{}\hlopt{(}\hlstd{n\ }\hlopt{{-}\ }\hlstd{r}\hlopt{);}\hspace*{\fill}\\
- \hlstd{}\hlkwb{const\ }\hlstd{VectorXd}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ }\hlstd{}\hlkwd{se}\hlstd{}\hlopt{(}\hlstd{s\ }\hlopt{{*}\ }\hlstd{VDp}\hlopt{.}\hlstd{}\hlkwd{rowwise}\hlstd{}\hlopt{().}\hlstd{}\hlkwd{norm}\hlstd{}\hlopt{());}\hlstd{}\hspace*{\fill}
+ \hlkwb{const\ }\hlstd{Eigen}\hlopt{::}\hlstd{JacobiSVD}\hlopt{$<$}\hlstd{MatrixXd}\hlopt{$>$}\hspace*{\fill}\\
+ \hlstd{}\hlstd{\ \ \ \ \ \ \ \ \ \ \ }\hlstd{}\hlkwd{UDV}\hlstd{}\hlopt{(}\hlstd{X}\hlopt{.}\hlstd{}\hlkwd{jacobiSvd}\hlstd{}\hlopt{(}\hlstd{Eigen}\hlopt{::}\hlstd{ComputeThinU}\hlopt{\textbar }\hlstd{Eigen}\hlopt{::}\hlstd{ComputeThinV}\hlopt{));}\hspace*{\fill}\\
+ \hlstd{}\hlkwb{const\ }\hlstd{ArrayXd}\hlstd{\ \ \ \ \ \ \ }\hlstd{}\hlkwd{Dp}\hlstd{}\hlopt{(}\hlstd{}\hlkwd{Dplus}\hlstd{}\hlopt{(}\hlstd{UDV}\hlopt{.}\hlstd{}\hlkwd{singularValues}\hlstd{}\hlopt{()));}\hspace*{\fill}\\
+ \hlstd{}\hlkwb{const\ int}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ }\hlkwb{}\hlstd{}\hlkwd{r}\hlstd{}\hlopt{((}\hlstd{Dp\ }\hlopt{$>$\ }\hlstd{}\hlnum{0}\hlstd{}\hlopt{).}\hlstd{}\hlkwd{count}\hlstd{}\hlopt{());}\hspace*{\fill}\\
+ \hlstd{}\hlkwb{const\ }\hlstd{MatrixXd}\hlstd{\ \ \ \ \ }\hlstd{}\hlkwd{VDp}\hlstd{}\hlopt{(}\hlstd{UDV}\hlopt{.}\hlstd{}\hlkwd{matrixV}\hlstd{}\hlopt{()\ {*}\ }\hlstd{Dp}\hlopt{.}\hlstd{}\hlkwd{matrix}\hlstd{}\hlopt{().}\hlstd{}\hlkwd{asDiagonal}\hlstd{}\hlopt{());}\hspace*{\fill}\\
+ \hlstd{}\hlkwb{const\ }\hlstd{VectorXd\ }\hlkwd{betahat}\hlstd{}\hlopt{(}\hlstd{VDp\ }\hlopt{{*}\ }\hlstd{UDV}\hlopt{.}\hlstd{}\hlkwd{matrixU}\hlstd{}\hlopt{().}\hlstd{}\hlkwd{adjoint}\hlstd{}\hlopt{()\ {*}\ }\hlstd{y}\hlopt{);}\hspace*{\fill}\\
+ \hlstd{}\hlkwb{const\ int}\hlstd{\ \ \ \ \ \ \ \ \ \ \ }\hlkwb{}\hlstd{}\hlkwd{df}\hlstd{}\hlopt{(}\hlstd{n\ }\hlopt{{-}\ }\hlstd{r}\hlopt{);}\hspace*{\fill}\\
+ \hlstd{}\hlkwb{const\ }\hlstd{VectorXd}\hlstd{\ \ \ \ \ \ }\hlstd{}\hlkwd{se}\hlstd{}\hlopt{(}\hlstd{s\ }\hlopt{{*}\ }\hlstd{VDp}\hlopt{.}\hlstd{}\hlkwd{rowwise}\hlstd{}\hlopt{().}\hlstd{}\hlkwd{norm}\hlstd{}\hlopt{());}\hlstd{}\hspace*{\fill}\\
+ \mbox{}
\normalfont
\normalsize
%\end{quote}
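
The same fragment in plain form (s is the residual standard error
computed earlier in the fragment):

    const Eigen::JacobiSVD<MatrixXd>
                     UDV(X.jacobiSvd(Eigen::ComputeThinU|Eigen::ComputeThinV));
    const ArrayXd     Dp(Dplus(UDV.singularValues()));
    const int          r((Dp > 0).count());        // the computed rank of X
    const MatrixXd   VDp(UDV.matrixV() * Dp.matrix().asDiagonal());
    const VectorXd betahat(VDp * UDV.matrixU().adjoint() * y);
    const int         df(n - r);
    const VectorXd    se(s * VDp.rowwise().norm());
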
@@ -1019,12 +995,13 @@
% const VectorXd betahat(VDp * UDV.matrixU().adjoint() * y);
% const int df(n - r);
% const VectorXd se(s * VDp.rowwise().norm());
-In the rank-deficient case this code will produce a complete set of
-coefficients and their standard errors. It is up to the user to note
-that the rank is less than $p$, the number of columns in $\bm X$, and
-hence that the estimated coefficients are just one of an infinite
-number of coefficient vectors that produce the same fitted values. It
-happens that this solution is the minimum norm solution.
+Even in the rank-deficient case this code will produce a complete set
+of coefficients and their standard errors. The user must be careful
+to check if the computed rank is less than $p$, the number of columns
+in $\bm X$, in which case the estimated coefficients are just one of
+an infinite number of coefficient vectors that produce the same fitted
+values. It happens that the solution from this pseudo-inverse is the
+minimum norm solution.
The standard errors of the coefficient estimates in the rank-deficient
case must be interpreted carefully. The solution with one or more missing
@@ -1048,10 +1025,11 @@
\end{displaymath}
where $\bm V$, the matrix of eigenvectors, is a $p\times p$ orthogonal
matrix and $\bm\Lambda$ is a $p\times p$ diagonal matrix with
-non-increasing, non-negative diagonal elements, called the eigenvalues
+non-decreasing, non-negative diagonal elements, called the eigenvalues
of $\bm X^\prime\bm X$. When the eigenvalues are distinct, this $\bm
-V$ is the same as that in the SVD and the eigenvalues of $\bm
-X^\prime\bm X$ are the squares of the singular values of $\bm X$.
+V$ is the same (up to reordering of the columns) as that in the SVD
+and the eigenvalues of $\bm X^\prime\bm X$ are the (reordered) squares
+of the singular values of $\bm X$.
With these definitions one can adapt much of the code from the SVD
method for the eigendecomposition, as shown in Figure~\ref{SymmEigLS}.
@@ -1059,17 +1037,13 @@
%\begin{quote}
\noindent
\ttfamily
- \hlstd{}\hlkwa{using\ }\hlstd{Eigen}\hlopt{::}\hlstd{SelfAdjointEigenSolver}\hlopt{;}\hspace*{\fill}\\
- \hlstd{}\hspace*{\fill}\\
- \hlkwb{const\ }\hlstd{SelfAdjointEigenSolver}\hlopt{$<$}\hlstd{MatrixXd}\hlopt{$>$}\hspace*{\fill}\\
- \hlstd{}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\hlstd{}\hlkwd{VLV}\hlstd{}\hlopt{(}\hlstd{}\hlkwd{MatrixXd}\hlstd{}\hlopt{(}\hlstd{p}\hlopt{,\ }\hlstd{p}\hlopt{).}\hlstd{}\hlkwd{setZero}\hlstd{}\hlopt{()}\hspace*{\fill}\\
- \hlstd{}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\hlstd{}\hlopt{.}\hlstd{selfadjointView}\hlopt{$<$}\hlstd{Lower}\hlopt{$>$}\hspace*{\fill}\\
- \hlstd{}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\hlstd{}\hlopt{.}\hlstd{}\hlkwd{rankUpdate}\hlstd{}\hlopt{(}\hlstd{X}\hlopt{.}\hlstd{}\hlkwd{adjoint}\hlstd{}\hlopt{()));}\hspace*{\fill}\\
- \hlstd{}\hlkwb{const\ }\hlstd{ArrayXd}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\hlstd{}\hlkwd{D}\hlstd{}\hlopt{(}\hlstd{eig}\hlopt{.}\hlstd{}\hlkwd{eigenvalues}\hlstd{}\hlopt{());}\hspace*{\fill}\\
- \hlstd{}\hlkwb{const\ int}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\hlkwb{}\hlstd{}\hlkwd{r}\hlstd{}\hlopt{((}\hlstd{D\ }\hlopt{$>$\ }\hlstd{D}\hlopt{{[}}\hlstd{p\ }\hlopt{{-}\ }\hlstd{}\hlnum{1}\hlstd{}\hlopt{{]}\ {*}\ }\hlstd{}\hlkwd{threshold}\hlstd{}\hlopt{()).}\hlstd{}\hlkwd{count}\hlstd{}\hlopt{());}\hspace*{\fill}\\
- \hlstd{}\hlkwb{const\ }\hlstd{MatrixXd}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ }\hlstd{}\hlkwd{VDp}\hlstd{}\hlopt{(}\hlstd{VLV}\hlopt{.}\hlstd{}\hlkwd{eigenvectors}\hlstd{}\hlopt{()\ {*}\ }\hlstd{}\hlkwd{Dplus}\hlstd{}\hlopt{(}\hlstd{D}\hlopt{.}\hlstd{}\hlkwd{sqrt}\hlstd{}\hlopt{(),}\hlstd{r}\hlopt{,}\hlstd{}\hlkwa{true}\hlstd{}\hlopt{));}\hspace*{\fill}\\
- \hlstd{}\hlkwb{const\ }\hlstd{VectorXd}\hlstd{\ \ \ \ \ \ \ \ }\hlstd{}\hlkwd{betahat}\hlstd{}\hlopt{(}\hlstd{VDp\ }\hlopt{{*}\ }\hlstd{VDp}\hlopt{.}\hlstd{}\hlkwd{adjoint}\hlstd{}\hlopt{()\ {*}\ }\hlstd{X}\hlopt{.}\hlstd{}\hlkwd{adjoint}\hlstd{}\hlopt{()\ {*}\ }\hlstd{y}\hlopt{);}\hspace*{\fill}\\
- \hlstd{}\hlkwb{const\ }\hlstd{VectorXd}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ }\hlstd{}\hlkwd{se}\hlstd{}\hlopt{(}\hlstd{s\ }\hlopt{{*}\ }\hlstd{VDp}\hlopt{.}\hlstd{}\hlkwd{rowwise}\hlstd{}\hlopt{().}\hlstd{}\hlkwd{norm}\hlstd{}\hlopt{());}\hlstd{}\hspace*{\fill}
+ \hlkwb{const\ }\hlstd{Eigen}\hlopt{::}\hlstd{SelfAdjointEigenSolver}\hlopt{$<$}\hlstd{MatrixXd}\hlopt{$>$\ }\hlstd{}\hlkwd{VLV}\hlstd{}\hlopt{(}\hlstd{}\hlkwd{AtA}\hlstd{}\hlopt{(}\hlstd{X}\hlopt{));}\hspace*{\fill}\\
+ \hlstd{}\hlkwb{const\ }\hlstd{ArrayXd}\hlstd{\ \ \ \ \ \ \ }\hlstd{}\hlkwd{Dp}\hlstd{}\hlopt{(}\hlstd{}\hlkwd{Dplus}\hlstd{}\hlopt{(}\hlstd{VLV}\hlopt{.}\hlstd{}\hlkwd{eigenvalues}\hlstd{}\hlopt{()).}\hlstd{}\hlkwd{sqrt}\hlstd{}\hlopt{());}\hspace*{\fill}\\
+ \hlstd{}\hlkwb{const\ int}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ }\hlkwb{}\hlstd{}\hlkwd{r}\hlstd{}\hlopt{((}\hlstd{Dp\ }\hlopt{$>$\ }\hlstd{}\hlnum{0}\hlstd{}\hlopt{).}\hlstd{}\hlkwd{count}\hlstd{}\hlopt{());}\hspace*{\fill}\\
+ \hlstd{}\hlkwb{const\ }\hlstd{MatrixXd}\hlstd{\ \ \ \ \ }\hlstd{}\hlkwd{VDp}\hlstd{}\hlopt{(}\hlstd{VLV}\hlopt{.}\hlstd{}\hlkwd{eigenvectors}\hlstd{}\hlopt{()\ {*}\ }\hlstd{Dp}\hlopt{.}\hlstd{}\hlkwd{matrix}\hlstd{}\hlopt{().}\hlstd{}\hlkwd{asDiagonal}\hlstd{}\hlopt{());}\hspace*{\fill}\\
+ \hlstd{}\hlkwb{const\ }\hlstd{VectorXd\ }\hlkwd{betahat}\hlstd{}\hlopt{(}\hlstd{VDp\ }\hlopt{{*}\ }\hlstd{VDp}\hlopt{.}\hlstd{}\hlkwd{adjoint}\hlstd{}\hlopt{()\ {*}\ }\hlstd{X}\hlopt{.}\hlstd{}\hlkwd{adjoint}\hlstd{}\hlopt{()\ {*}\ }\hlstd{y}\hlopt{);}\hspace*{\fill}\\
+ \hlstd{}\hlkwb{const\ }\hlstd{VectorXd}\hlstd{\ \ \ \ \ \ }\hlstd{}\hlkwd{se}\hlstd{}\hlopt{(}\hlstd{s\ }\hlopt{{*}\ }\hlstd{VDp}\hlopt{.}\hlstd{}\hlkwd{rowwise}\hlstd{}\hlopt{().}\hlstd{}\hlkwd{norm}\hlstd{}\hlopt{());}\hlstd{}\hspace*{\fill}\\
+ \mbox{}
\normalfont
\normalsize
%\end{quote}
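
And the corresponding plain form of the eigendecomposition fragment
(the solver object is VLV, as in the figure; s as before):

    const Eigen::SelfAdjointEigenSolver<MatrixXd> VLV(AtA(X));
    // the eigenvalues of X'X are the squared singular values of X,
    // hence the sqrt() after forming the pseudo-inverse diagonal
    const ArrayXd     Dp(Dplus(VLV.eigenvalues()).sqrt());
    const int          r((Dp > 0).count());
    const MatrixXd   VDp(VLV.eigenvectors() * Dp.matrix().asDiagonal());
    const VectorXd betahat(VDp * VDp.adjoint() * X.adjoint() * y);
    const VectorXd    se(s * VDp.rowwise().norm());
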
@@ -1112,10 +1086,12 @@
X\bm P$ only.
In the rank-deficient case the straightforward calculation of the
-fitted values, as $\bm X\widehat{\bm\beta}$, cannot be used. One
-could do some complicated rearrangement of the columns of X and the
-coefficient estimates but it is conceptually (and computationally)
-easier to employ the relationship
+fitted values, as $\bm X\widehat{\bm\beta}$, cannot be used because
+some of the estimated coefficients, $\widehat{\bm\beta}$, are \code{NA},
+so the product will be a vector of \code{NA}'s. One could do some
+complicated rearrangement of the columns of $\bm X$ and the coefficient
+estimates but it is conceptually (and computationally) easier to
+employ the relationship
\begin{displaymath}
\widehat{\bm y} = \bm Q_1\bm Q_1^\prime\bm y=\bm Q
\begin{bmatrix}
@@ -1129,73 +1105,70 @@
%\begin{quote}
\noindent
\ttfamily
- \hlstd{}\hlkwa{using\ }\hlstd{Eigen}\hlopt{::}\hlstd{ColPivHouseholderQR}\hlopt{;}\hspace*{\fill}\\
- \hlstd{}\hlkwc{typedef\ }\hlstd{ColPivHouseholderQR}\hlopt{$<$}\hlstd{MatrixXd}\hlopt{$>$::}\hlstd{PermutationType}\hlstd{\ \ }\hlstd{Permutation}\hlopt{;}\hspace*{\fill}\\
+ \hlstd{}\hlkwc{typedef\ }\hlstd{Eigen}\hlopt{::}\hlstd{ColPivHouseholderQR}\hlopt{$<$}\hlstd{MatrixXd}\hlopt{$>$}\hlstd{\ \ }\hlopt{}\hlstd{CPivQR}\hlopt{;}\hspace*{\fill}\\
+ \hlstd{}\hlkwc{typedef\ }\hlstd{CPivQR}\hlopt{::}\hlstd{PermutationType}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\hlstd{Permutation}\hlopt{;}\hspace*{\fill}\\
\hlstd{}\hspace*{\fill}\\
- \hlkwb{const\ }\hlstd{ColPivHouseholderQR}\hlopt{$<$}\hlstd{MatrixXd}\hlopt{$>$\ }\hlstd{}\hlkwd{PQR}\hlstd{}\hlopt{(}\hlstd{X}\hlopt{);}\hspace*{\fill}\\
- \hlstd{}\hlkwb{const\ }\hlstd{Permutation}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\hlstd{}\hlkwd{Pmat}\hlstd{}\hlopt{(}\hlstd{PQR}\hlopt{.}\hlstd{}\hlkwd{colsPermutation}\hlstd{}\hlopt{());}\hspace*{\fill}\\
- \hlstd{}\hlkwb{const\ int}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\hlkwb{}\hlstd{}\hlkwd{r}\hlstd{}\hlopt{(}\hlstd{PQR}\hlopt{.}\hlstd{}\hlkwd{rank}\hlstd{}\hlopt{());}\hspace*{\fill}\\
- \hlstd{VectorXd}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\hlstd{betahat}\hlopt{,\ }\hlstd{fitted}\hlopt{,\ }\hlstd{se}\hlopt{;}\hspace*{\fill}\\
+ \hlkwb{const\ }\hlstd{CPivQR}\hlstd{\ \ \ \ \ \ \ }\hlstd{}\hlkwd{PQR}\hlstd{}\hlopt{(}\hlstd{X}\hlopt{);}\hspace*{\fill}\\
+ \hlstd{}\hlkwb{const\ }\hlstd{Permutation\ }\hlkwd{Pmat}\hlstd{}\hlopt{(}\hlstd{PQR}\hlopt{.}\hlstd{}\hlkwd{colsPermutation}\hlstd{}\hlopt{());}\hspace*{\fill}\\
[TRUNCATED]
To get the complete diff run:
svnlook diff /svnroot/rcpp -r 3316