[Rcpp-commits] r3290 - pkg/RcppEigen/vignettes

noreply at r-forge.r-project.org noreply at r-forge.r-project.org
Sun Nov 6 17:03:52 CET 2011


Author: edd
Date: 2011-11-06 17:03:51 +0100 (Sun, 06 Nov 2011)
New Revision: 3290

Modified:
   pkg/RcppEigen/vignettes/RcppEigen-intro-jss.tex
   pkg/RcppEigen/vignettes/code.R
Log:
now converted to JSS style with "poor man's sweave" and a pure latex document;
we need to fix the float placement though


Modified: pkg/RcppEigen/vignettes/RcppEigen-intro-jss.tex
===================================================================
--- pkg/RcppEigen/vignettes/RcppEigen-intro-jss.tex	2011-11-06 16:00:16 UTC (rev 3289)
+++ pkg/RcppEigen/vignettes/RcppEigen-intro-jss.tex	2011-11-06 16:03:51 UTC (rev 3290)
@@ -4,7 +4,7 @@
 %\VignetteKeywords{linear algebra, template programming, C++, R, Rcpp}
 %\VignettePackage{RcppEigen}
 
-\usepackage{booktabs,flafter,bm}
+\usepackage{booktabs,flafter,bm,amsmath}
 
 \newcommand{\R}{\proglang{R}\ } % NB forces a space so not good before % fullstop etc.
 \newcommand{\Cpp}{\proglang{C++}\ }
@@ -554,6 +554,931 @@
 >
 \end{verbatim}
 
+
+
+\subsection{Determinant of the cross-product matrix}
+\label{sec:determinant}
+
+The ``D-optimal'' criterion for experimental design chooses the design
+that maximizes the determinant, $|\bm X^\prime\bm X|$, for the
+$n\times p$ model matrix (or Jacobian matrix), $\bm X$.  The
+determinant, $|\bm L|$, of the $p\times p$ lower Cholesky factor
+$\bm L$, defined so that $\bm L\bm L^\prime=\bm X^\prime\bm X$, is
+the product of its diagonal elements, as is the case for any
+triangular matrix.  By the properties of determinants,
+\begin{displaymath}
+  |\bm X^\prime\bm X|=|\bm L\bm L^\prime|=|\bm L|\,|\bm L^\prime|=|\bm L|^2
+\end{displaymath}
+
+Alternatively, one can use the ``LDLt'' decomposition, $\bm L\bm D\bm
+L^\prime=\bm X^\prime\bm X$, where $\bm L$ is unit lower triangular and
+$\bm D$ is diagonal, in which case $|\bm X^\prime\bm X|$ is the product
+of the diagonal elements of $\bm D$.  Because these diagonal elements
+must be non-negative, one often evaluates the logarithm of the
+determinant as the sum of the logarithms of the diagonal elements of $\bm D$.
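+In symbols, for the ``LDLt'' case,
+\begin{displaymath}
+  |\bm X^\prime\bm X|=|\bm L\bm D\bm L^\prime|=\prod_{i=1}^p d_{ii}
+  \quad\text{and}\quad
+  \log|\bm X^\prime\bm X|=\sum_{i=1}^p\log d_{ii},
+\end{displaymath}
+where $d_{ii}$ denotes the $i$th diagonal element of $\bm D$.  Several
+options are shown in Figure~\ref{cholDet}.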
+
+
+\begin{figure}[htb]
+  \begin{quote}
+    \noindent
+    \ttfamily
+    \hlstd{}\hlkwa{using\ }\hlstd{Eigen}\hlopt{::}\hlstd{Lower}\hlopt{;}\hspace*{\fill}\\
+    \hlstd{}\hlkwa{using\ }\hlstd{Eigen}\hlopt{::}\hlstd{Map}\hlopt{;}\hspace*{\fill}\\
+    \hlstd{}\hlkwa{using\ }\hlstd{Eigen}\hlopt{::}\hlstd{MatrixXd}\hlopt{;}\hspace*{\fill}\\
+    \hlstd{}\hlkwa{using\ }\hlstd{Eigen}\hlopt{::}\hlstd{VectorXd}\hlopt{;}\hspace*{\fill}\\
+    \hlstd{}\hspace*{\fill}\\
+    \hlkwb{const\ }\hlstd{Map}\hlopt{$<$}\hlstd{MatrixXd}\hlopt{$>$}\hlstd{\ \ \ }\hlopt{}\hlstd{}\hlkwd{A}\hlstd{}\hlopt{(}\hlstd{as}\hlopt{$<$}\hlstd{Map}\hlopt{$<$}\hlstd{MatrixXd}\hlopt{$>$\ $>$(}\hlstd{AA}\hlopt{));}\hspace*{\fill}\\
+    \hlstd{}\hlkwb{const\ int}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ }\hlkwb{}\hlstd{}\hlkwd{n}\hlstd{}\hlopt{(}\hlstd{A}\hlopt{.}\hlstd{}\hlkwd{cols}\hlstd{}\hlopt{());}\hspace*{\fill}\\
+    \hlstd{}\hlkwb{const\ }\hlstd{MatrixXd}\hlstd{\ \ \ \ \ \ }\hlstd{}\hlkwd{AtA}\hlstd{}\hlopt{(}\hlstd{}\hlkwd{MatrixXd}\hlstd{}\hlopt{(}\hlstd{n}\hlopt{,\ }\hlstd{n}\hlopt{).}\hlstd{}\hlkwd{setZero}\hlstd{}\hlopt{().}\hspace*{\fill}\\
+    \hlstd{}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\hlstd{selfadjointView}\hlopt{$<$}\hlstd{Lower}\hlopt{$>$().}\hlstd{}\hlkwd{rankUpdate}\hlstd{}\hlopt{(}\hlstd{A}\hlopt{.}\hlstd{}\hlkwd{adjoint}\hlstd{}\hlopt{()));}\hspace*{\fill}\\
+    \hlstd{}\hlkwb{const\ }\hlstd{MatrixXd}\hlstd{\ \ \ \ \ }\hlstd{}\hlkwd{Lmat}\hlstd{}\hlopt{(}\hlstd{AtA}\hlopt{.}\hlstd{}\hlkwd{llt}\hlstd{}\hlopt{().}\hlstd{}\hlkwd{matrixL}\hlstd{}\hlopt{());}\hspace*{\fill}\\
+    \hlstd{}\hlkwb{const\ double}\hlstd{\ \ \ \ \ \ \ }\hlkwb{}\hlstd{}\hlkwd{detL}\hlstd{}\hlopt{(}\hlstd{Lmat}\hlopt{.}\hlstd{}\hlkwd{diagonal}\hlstd{}\hlopt{().}\hlstd{}\hlkwd{prod}\hlstd{}\hlopt{());}\hspace*{\fill}\\
+    \hlstd{}\hlkwb{const\ }\hlstd{VectorXd}\hlstd{\ \ \ \ \ }\hlstd{}\hlkwd{Dvec}\hlstd{}\hlopt{(}\hlstd{AtA}\hlopt{.}\hlstd{}\hlkwd{ldlt}\hlstd{}\hlopt{().}\hlstd{}\hlkwd{vectorD}\hlstd{}\hlopt{());}\hspace*{\fill}\\
+    \hlstd{}\hspace*{\fill}\\
+    \hlkwa{return\ }\hlstd{List}\hlopt{::}\hlstd{}\hlkwd{create}\hlstd{}\hlopt{(}\hlstd{\textunderscore }\hlopt{{[}}\hlstd{}\hlstr{"d1"}\hlstd{}\hlopt{{]}\ =\ }\hlstd{detL\ }\hlopt{{*}\ }\hlstd{detL}\hlopt{,}\hspace*{\fill}\\
+    \hlstd{}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\hlstd{\textunderscore }\hlopt{{[}}\hlstd{}\hlstr{"d2"}\hlstd{}\hlopt{{]}\ =\ }\hlstd{Dvec}\hlopt{.}\hlstd{}\hlkwd{prod}\hlstd{}\hlopt{(),}\hspace*{\fill}\\
+    \hlstd{}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\hlstd{\textunderscore }\hlopt{{[}}\hlstd{}\hlstr{"ld"}\hlstd{}\hlopt{{]}\ =\ }\hlstd{Dvec}\hlopt{.}\hlstd{}\hlkwd{array}\hlstd{}\hlopt{().}\hlstd{}\hlkwd{log}\hlstd{}\hlopt{().}\hlstd{}\hlkwd{sum}\hlstd{}\hlopt{());}\hlstd{}\hspace*{\fill}
+    \normalfont
+    \normalsize
+  \end{quote}
+  \caption{\textbf{cholDetCpp}: Determinant of a cross-product using the Cholesky decomposition}
+  \label{cholDet}
+\end{figure}
+
+
+
+Note the use of the \code{array()} method in the calculation of the
+log-determinant.  Because the elementwise \code{log()} method is
+defined for arrays, not for vectors or matrices, \code{Dvec} must first
+be viewed as an array before \code{log()} can be applied.
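+For example, the following fragment (a minimal sketch, not part of the
+compiled functions shown in the figures) evaluates a log-determinant
+from a vector of positive diagonal elements by first viewing it as an
+array:
+\begin{verbatim}
+using Eigen::VectorXd;
+
+VectorXd Dvec(3);
+Dvec << 1., 2., 4.;
+// log() is defined for array expressions, hence the array() view
+const double ld(Dvec.array().log().sum());  // log(1) + log(2) + log(4)
+\end{verbatim}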
+
+
+\section{Least squares solutions}
+\label{sec:leastSquares}
+
+A common operation in statistical computing is calculating a least
+squares solution, $\widehat{\bm\beta}$, defined as
+\begin{displaymath}
+  \widehat{\bm\beta}=\arg\min_{\bm\beta}\|\bm y-\bm X\bm\beta\|^2
+\end{displaymath}
+where the model matrix, $\bm X$, is $n\times p$ ($n\ge p$) and $\bm y$
+is an $n$-dimensional response vector.  There are several ways, based
+on matrix decompositions, to determine such a solution.  Earlier, two forms
+of the Cholesky decomposition were discussed: ``LLt'' and
+``LDLt'', which can both be used to solve for $\widehat{\bm\beta}$.  Other
+decompositions that can be used are the QR decomposition, with or
+without column pivoting, the singular value decomposition and the
+eigendecomposition of a symmetric matrix.
+
+Determining a least squares solution is relatively straightforward.
+However, statistical computing often requires additional information,
+such as the standard errors of the coefficient estimates.  Calculating
+these involves evaluating the diagonal elements of $\left(\bm
+  X^\prime\bm X\right)^{-1}$ and the residual sum of squares, $\|\bm
+y-\bm X\widehat{\bm\beta}\|^2$.
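+Specifically, under the usual linear model assumptions the estimated
+covariance matrix of $\widehat{\bm\beta}$ is
+\begin{displaymath}
+  \widehat{\mathrm{Var}}\left(\widehat{\bm\beta}\right)=
+  s^2\left(\bm X^\prime\bm X\right)^{-1},\qquad
+  s^2=\frac{\|\bm y-\bm X\widehat{\bm\beta}\|^2}{n-p},
+\end{displaymath}
+so the standard errors are $s$ times the square roots of the diagonal
+elements of $\left(\bm X^\prime\bm X\right)^{-1}$.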
+
+\subsection{Least squares using the ``LLt'' Cholesky}
+\label{sec:LLtLeastSquares}
+
+\begin{figure}[tbh]
+  \begin{quote}
+    \noindent
+    \ttfamily
+    \hlstd{}\hlkwa{using\ }\hlstd{Eigen}\hlopt{::}\hlstd{LLT}\hlopt{;}\hspace*{\fill}\\
+    \hlstd{}\hlkwa{using\ }\hlstd{Eigen}\hlopt{::}\hlstd{Lower}\hlopt{;}\hspace*{\fill}\\
+    \hlstd{}\hlkwa{using\ }\hlstd{Eigen}\hlopt{::}\hlstd{Map}\hlopt{;}\hspace*{\fill}\\
+    \hlstd{}\hlkwa{using\ }\hlstd{Eigen}\hlopt{::}\hlstd{MatrixXd}\hlopt{;}\hspace*{\fill}\\
+    \hlstd{}\hlkwa{using\ }\hlstd{Eigen}\hlopt{::}\hlstd{VectorXd}\hlopt{;}\hspace*{\fill}\\
+    \hlstd{}\hspace*{\fill}\\
+    \hlkwb{const\ }\hlstd{Map}\hlopt{$<$}\hlstd{MatrixXd}\hlopt{$>$}\hlstd{\ \ \ }\hlopt{}\hlstd{}\hlkwd{X}\hlstd{}\hlopt{(}\hlstd{as}\hlopt{$<$}\hlstd{Map}\hlopt{$<$}\hlstd{MatrixXd}\hlopt{$>$\ $>$(}\hlstd{XX}\hlopt{));}\hspace*{\fill}\\
+    \hlstd{}\hlkwb{const\ }\hlstd{Map}\hlopt{$<$}\hlstd{VectorXd}\hlopt{$>$}\hlstd{\ \ \ }\hlopt{}\hlstd{}\hlkwd{y}\hlstd{}\hlopt{(}\hlstd{as}\hlopt{$<$}\hlstd{Map}\hlopt{$<$}\hlstd{VectorXd}\hlopt{$>$\ $>$(}\hlstd{yy}\hlopt{));}\hspace*{\fill}\\
+    \hlstd{}\hlkwb{const\ int}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ }\hlkwb{}\hlstd{}\hlkwd{n}\hlstd{}\hlopt{(}\hlstd{X}\hlopt{.}\hlstd{}\hlkwd{rows}\hlstd{}\hlopt{()),\ }\hlstd{}\hlkwd{p}\hlstd{}\hlopt{(}\hlstd{X}\hlopt{.}\hlstd{}\hlkwd{cols}\hlstd{}\hlopt{());}\hspace*{\fill}\\
+    \hlstd{}\hlkwb{const\ }\hlstd{LLT}\hlopt{$<$}\hlstd{MatrixXd}\hlopt{$>$\ }\hlstd{}\hlkwd{llt}\hlstd{}\hlopt{(}\hlstd{}\hlkwd{MatrixXd}\hlstd{}\hlopt{(}\hlstd{p}\hlopt{,\ }\hlstd{p}\hlopt{).}\hlstd{}\hlkwd{setZero}\hlstd{}\hlopt{().}\hspace*{\fill}\\
+    \hlstd{}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\hlstd{selfadjointView}\hlopt{$<$}\hlstd{Lower}\hlopt{$>$().}\hlstd{}\hlkwd{rankUpdate}\hlstd{}\hlopt{(}\hlstd{X}\hlopt{.}\hlstd{}\hlkwd{adjoint}\hlstd{}\hlopt{()));}\hspace*{\fill}\\
+    \hlstd{}\hlkwb{const\ }\hlstd{VectorXd}\hlstd{\ \ }\hlstd{}\hlkwd{betahat}\hlstd{}\hlopt{(}\hlstd{llt}\hlopt{.}\hlstd{}\hlkwd{solve}\hlstd{}\hlopt{(}\hlstd{X}\hlopt{.}\hlstd{}\hlkwd{adjoint}\hlstd{}\hlopt{()\ {*}\ }\hlstd{y}\hlopt{));}\hspace*{\fill}\\
+    \hlstd{}\hlkwb{const\ }\hlstd{VectorXd}\hlstd{\ \ \ }\hlstd{}\hlkwd{fitted}\hlstd{}\hlopt{(}\hlstd{X\ }\hlopt{{*}\ }\hlstd{betahat}\hlopt{);}\hspace*{\fill}\\
+    \hlstd{}\hlkwb{const\ }\hlstd{VectorXd}\hlstd{\ \ \ \ }\hlstd{}\hlkwd{resid}\hlstd{}\hlopt{(}\hlstd{y\ }\hlopt{{-}\ }\hlstd{fitted}\hlopt{);}\hspace*{\fill}\\
+    \hlstd{}\hlkwb{const\ int}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ }\hlkwb{}\hlstd{}\hlkwd{df}\hlstd{}\hlopt{(}\hlstd{n\ }\hlopt{{-}\ }\hlstd{p}\hlopt{);}\hspace*{\fill}\\
+    \hlstd{}\hlkwb{const\ double}\hlstd{\ \ \ \ \ \ \ \ \ \ }\hlkwb{}\hlstd{}\hlkwd{s}\hlstd{}\hlopt{(}\hlstd{resid}\hlopt{.}\hlstd{}\hlkwd{norm}\hlstd{}\hlopt{()\ /\ }\hlstd{std}\hlopt{::}\hlstd{}\hlkwd{sqrt}\hlstd{}\hlopt{(}\hlstd{}\hlkwb{double}\hlstd{}\hlopt{(}\hlstd{df}\hlopt{)));}\hspace*{\fill}\\
+    \hlstd{}\hlkwb{const\ }\hlstd{VectorXd}\hlstd{\ \ \ \ \ \ \ }\hlstd{}\hlkwd{se}\hlstd{}\hlopt{(}\hlstd{s\ }\hlopt{{*}\ }\hlstd{llt}\hlopt{.}\hlstd{}\hlkwd{matrixL}\hlstd{}\hlopt{().}\hlstd{}\hlkwd{solve}\hlstd{}\hlopt{(}\hlstd{MatrixXd}\hlopt{::}\hlstd{}\hlkwd{Identity}\hlstd{}\hlopt{(}\hlstd{p}\hlopt{,\ }\hlstd{p}\hlopt{)).}\hspace*{\fill}\\
+    \hlstd{}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\hlstd{}\hlkwd{colwise}\hlstd{}\hlopt{().}\hlstd{}\hlkwd{norm}\hlstd{}\hlopt{());}\hspace*{\fill}\\
+    \hlstd{}\hlkwa{return}\hlstd{\ \ \ \ \ }\hlkwa{}\hlstd{List}\hlopt{::}\hlstd{}\hlkwd{create}\hlstd{}\hlopt{(}\hlstd{\textunderscore }\hlopt{{[}}\hlstd{}\hlstr{"coefficients"}\hlstd{}\hlopt{{]}}\hlstd{\ \ \ }\hlopt{=\ }\hlstd{betahat}\hlopt{,}\hspace*{\fill}\\
+    \hlstd{}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\hlstd{\textunderscore }\hlopt{{[}}\hlstd{}\hlstr{"fitted.values"}\hlstd{}\hlopt{{]}}\hlstd{\ \ }\hlopt{=\ }\hlstd{fitted}\hlopt{,}\hspace*{\fill}\\
+    \hlstd{}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\hlstd{\textunderscore }\hlopt{{[}}\hlstd{}\hlstr{"residuals"}\hlstd{}\hlopt{{]}}\hlstd{\ \ \ \ \ \ }\hlopt{=\ }\hlstd{resid}\hlopt{,}\hspace*{\fill}\\
+    \hlstd{}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\hlstd{\textunderscore }\hlopt{{[}}\hlstd{}\hlstr{"s"}\hlstd{}\hlopt{{]}}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ \ }\hlopt{=\ }\hlstd{s}\hlopt{,}\hspace*{\fill}\\
+    \hlstd{}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\hlstd{\textunderscore }\hlopt{{[}}\hlstd{}\hlstr{"df.residual"}\hlstd{}\hlopt{{]}}\hlstd{\ \ \ \ }\hlopt{=\ }\hlstd{df}\hlopt{,}\hspace*{\fill}\\
+    \hlstd{}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\hlstd{\textunderscore }\hlopt{{[}}\hlstd{}\hlstr{"rank"}\hlstd{}\hlopt{{]}}\hlstd{\ \ \ \ \ \ \ \ \ \ \ }\hlopt{=\ }\hlstd{p}\hlopt{,}\hspace*{\fill}\\
+    \hlstd{}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\hlstd{\textunderscore }\hlopt{{[}}\hlstd{}\hlstr{"Std.\ Error"}\hlstd{}\hlopt{{]}}\hlstd{\ \ \ \ \ }\hlopt{=\ }\hlstd{se}\hlopt{);}\hlstd{}\hspace*{\fill}
+    \normalfont
+    \normalsize
+  \end{quote}
+  \caption{\textbf{lltLSCpp}: Least squares using the Cholesky decomposition}
+  \label{lltLS}
+\end{figure}
+Figure~\ref{lltLS} shows a calculation of the least squares coefficient estimates
+(\code{betahat}) and the standard errors (\code{se}) through an
+``LLt'' Cholesky decomposition of the cross-product of the model
+matrix, $\bm X$.  Next, the results from this calculation are compared
+to those from the \code{lm.fit} function in \proglang{R}
+(\code{lm.fit} is the workhorse function called by \code{lm} once the
+model matrix and response have been evaluated).
+\begin{verbatim}
+> lltLS <- cxxfunction(signature(XX = "matrix", yy = "numeric"), lltLSCpp, "RcppEigen")
+> data(trees, package="datasets")
+> str(lltFit <- with(trees, lltLS(cbind(1, log(Girth)), log(Volume))))
+List of 7
+ $ coefficients : num [1:2] -2.35 2.2
+ $ fitted.values: num [1:31] 2.3 2.38 2.43 2.82 2.86 ...
+ $ residuals    : num [1:31] 0.0298 -0.0483 -0.1087 -0.0223 0.0727 ...
+ $ s            : num 0.115
+ $ df.residual  : int 29
+ $ rank         : int 2
+ $ Std. Error   : num [1:2] 0.2307 0.0898
+> str(lmFit <- with(trees, lm.fit(cbind(1, log(Girth)), log(Volume))))
+List of 8
+ $ coefficients : Named num [1:2] -2.35 2.2
+  ..- attr(*, "names")= chr [1:2] "x1" "x2"
+ $ residuals    : num [1:31] 0.0298 -0.0483 -0.1087 -0.0223 0.0727 ...
+ $ effects      : Named num [1:31] -18.2218 2.8152 -0.1029 -0.0223 0.0721 ...
+  ..- attr(*, "names")= chr [1:31] "x1" "x2" "" "" ...
+ $ rank         : int 2
+ $ fitted.values: num [1:31] 2.3 2.38 2.43 2.82 2.86 ...
+ $ assign       : NULL
+ $ qr           :List of 5
+  ..$ qr   : num [1:31, 1:2] -5.57 0.18 0.18 0.18 0.18 ...
+  ..$ qraux: num [1:2] 1.18 1.26
+  ..$ pivot: int [1:2] 1 2
+  ..$ tol  : num 1e-07
+  ..$ rank : int 2
+  ..- attr(*, "class")= chr "qr"
+ $ df.residual  : int 29
+> for (nm in c("coefficients", "residuals", "fitted.values", "rank"))
++     stopifnot(all.equal(lltFit[[nm]], unname(lmFit[[nm]])))
+> stopifnot(all.equal(lltFit[["Std. Error"]],
++                     unname(coef(summary(lm(log(Volume) ~ log(Girth), trees)))[,2])))
+> 
+\end{verbatim}
+
+
+There are several aspects of the \proglang{C++} code in
+Figure~\ref{lltLS} worth mentioning.  The \code{solve} method for the
+\code{LLT} object evaluates, in this case, $\left(\bm X^\prime\bm
+  X\right)^{-1}\bm X^\prime\bm y$ but without actually evaluating the
+inverse.  The calculation of the residuals, $\bm y-\widehat{\bm y}$,
+can be written, as in \proglang{R}, as \code{y - fitted}. (But note
+that \pkg{Eigen} classes do not have a ``recycling rule'' as in
+\proglang{R}.  That is, the two vector operands must have the same
+length.)  The \code{norm()} method evaluates the square root of the
+sum of squares of the elements of a vector.  Although one does not
+explicitly evaluate $\left(\bm X^\prime\bm X\right)^{-1}$ one does
+evaluate $\bm L^{-1}$ to obtain the standard errors.  Note also the
+use of the \code{colwise()} method in the evaluation of the standard
+errors.  It applies a method to the columns of a matrix, returning a
+vector.  The \pkg{Eigen} \code{colwise()} and \code{rowwise()} methods
+are similar in effect to the \code{apply} function in \proglang{R}.
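+The use of column norms can be justified as follows: because
+$\left(\bm X^\prime\bm X\right)^{-1}=\left(\bm L\bm
+  L^\prime\right)^{-1}=\left(\bm L^\prime\right)^{-1}\bm L^{-1}$, the
+$j$th diagonal element of $\left(\bm X^\prime\bm X\right)^{-1}$ is the
+squared norm of the $j$th column of $\bm L^{-1}$, so that
+\begin{displaymath}
+  \mathrm{se}\left(\widehat\beta_j\right)=
+  s\,\left\|\bm L^{-1}\bm e_j\right\|,\qquad j=1,\dots,p,
+\end{displaymath}
+where $\bm e_j$ is the $j$th unit vector.  This is exactly what
+\code{colwise().norm()} evaluates for the columns of $\bm L^{-1}$,
+computed here as \code{llt.matrixL().solve(MatrixXd::Identity(p, p))}.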
+
+In the descriptions of other methods for solving least squares
+problems, much of the code parallels that shown in
+Figure~\ref{lltLS}.  The redundant parts are omitted, and only
+the evaluation of the coefficients, the rank and the standard errors is shown.
+In these code fragments the standard errors are actually calculated
+only up to the scalar factor $s$, the residual standard error.
+The calculation of the residuals and of $s$, and the scaling of the
+coefficient standard errors, are the same for all methods.  (See the
+files \code{fastLm.h} and \code{fastLm.cpp} in the \pkg{RcppEigen}
+source package for details.)
+
+
+\subsection{Least squares using the unpivoted QR decomposition}
+\label{sec:QR}
+
+A QR decomposition has the form
+\begin{displaymath}
+  \bm X=\bm Q\bm R=\bm Q_1\bm R_1
+\end{displaymath}
+where $\bm Q$ is an $n\times n$ orthogonal matrix, which means that
+$\bm Q^\prime\bm Q=\bm Q\bm Q^\prime=\bm I_n$, and the $n\times p$
+matrix $\bm R$ is zero below the main diagonal.  The $n\times p$
+matrix $\bm Q_1$ is the first $p$ columns of $\bm Q$ and the $p\times
+p$ upper triangular matrix $\bm R_1$ is the top $p$ rows of $\bm R$.
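+Because $\bm Q$ is orthogonal,
+$\|\bm y-\bm X\bm\beta\|^2=\|\bm Q^\prime\bm y-\bm R\bm\beta\|^2$, so
+when $\bm X$ has full column rank the least squares solution satisfies
+the triangular system
+\begin{displaymath}
+  \bm R_1\widehat{\bm\beta}=\bm Q_1^\prime\bm y,
+\end{displaymath}
+which is solved without forming $\bm X^\prime\bm X$.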
+There are three \pkg{Eigen} classes for the QR decomposition:
+\code{HouseholderQR} provides the basic QR decomposition using
+Householder transformations, \code{ColPivHouseholderQR} incorporates
+column pivots and \code{FullPivHouseholderQR} incorporates both row
+and column pivots.
+
+Figure~\ref{QRLS} shows a least squares solution using the unpivoted
+QR decomposition.
+% using Eigen::HouseholderQR;
+
+% const HouseholderQR<MatrixXd> QR(X);
+% const VectorXd           betahat(QR.solve(y));
+% const VectorXd            fitted(X * betahat);
+% const int                     df(n - p);
+% const VectorXd                se(QR.matrixQR().topRows(p).
+%                                  triangularView<Upper>().
+%                                  solve(MatrixXd::Identity(p,p)).
+%                                  rowwise().norm());
+\begin{figure}[htb]
+  \begin{quote}
+    \noindent
+    \ttfamily
+    \hlstd{}\hlkwa{using\ }\hlstd{Eigen}\hlopt{::}\hlstd{HouseholderQR}\hlopt{;}\hspace*{\fill}\\
+    \hlstd{}\hspace*{\fill}\\
+    \hlkwb{const\ }\hlstd{HouseholderQR}\hlopt{$<$}\hlstd{MatrixXd}\hlopt{$>$\ }\hlstd{}\hlkwd{QR}\hlstd{}\hlopt{(}\hlstd{X}\hlopt{);}\hspace*{\fill}\\
+    \hlstd{}\hlkwb{const\ }\hlstd{VectorXd}\hlstd{\ \ \ \ \ \ \ \ \ \ \ }\hlstd{}\hlkwd{betahat}\hlstd{}\hlopt{(}\hlstd{QR}\hlopt{.}\hlstd{}\hlkwd{solve}\hlstd{}\hlopt{(}\hlstd{y}\hlopt{));}\hspace*{\fill}\\
+    \hlstd{}\hlkwb{const\ }\hlstd{VectorXd}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ }\hlstd{}\hlkwd{fitted}\hlstd{}\hlopt{(}\hlstd{X\ }\hlopt{{*}\ }\hlstd{betahat}\hlopt{);}\hspace*{\fill}\\
+    \hlstd{}\hlkwb{const\ int}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\hlkwb{}\hlstd{}\hlkwd{df}\hlstd{}\hlopt{(}\hlstd{n\ }\hlopt{{-}\ }\hlstd{p}\hlopt{);}\hspace*{\fill}\\
+    \hlstd{}\hlkwb{const\ }\hlstd{VectorXd}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\hlstd{}\hlkwd{se}\hlstd{}\hlopt{(}\hlstd{QR}\hlopt{.}\hlstd{}\hlkwd{matrixQR}\hlstd{}\hlopt{().}\hlstd{}\hlkwd{topRows}\hlstd{}\hlopt{(}\hlstd{p}\hlopt{).}\hspace*{\fill}\\
+    \hlstd{}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\hlstd{triangularView}\hlopt{$<$}\hlstd{Upper}\hlopt{$>$().}\hspace*{\fill}\\
+    \hlstd{}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\hlstd{}\hlkwd{solve}\hlstd{}\hlopt{(}\hlstd{MatrixXd}\hlopt{::}\hlstd{}\hlkwd{Identity}\hlstd{}\hlopt{(}\hlstd{p}\hlopt{,}\hlstd{p}\hlopt{)).}\hspace*{\fill}\\
+    \hlstd{}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\hlstd{}\hlkwd{rowwise}\hlstd{}\hlopt{().}\hlstd{}\hlkwd{norm}\hlstd{}\hlopt{());}\hlstd{}\hspace*{\fill}
+    \normalfont
+    \normalsize
+    \caption{\textbf{QRLSCpp}: Least squares using the unpivoted QR decomposition}
+    \label{QRLS}
+  \end{quote}
+\end{figure}
+The calculations in Figure~\ref{QRLS} are quite similar to those in
+Figure~\ref{lltLS}.  In fact, if one had extracted the upper
+triangular factor (the \code{matrixU()} method) from the \code{LLT}
+object in Figure~\ref{lltLS}, the rest of the code would be nearly
+identical.
+
+
+\subsection{Handling the rank-deficient case}
+\label{sec:rankdeficient}
+
+One important consideration when determining least squares solutions
+is whether $\rank(\bm X)$ is $p$, a situation described by saying
+that $\bm X$ has ``full column rank''.   When $\bm X$ does not have
+full column rank it is said to be ``rank deficient''.
+
+Although the theoretical rank of a matrix is well-defined, its
+evaluation in practice is not.  At best one can compute an effective
+rank according to some tolerance.  Decompositions that allow the rank
+of the matrix to be estimated in this way are said to be
+``rank-revealing''.
+
+Because the \code{model.matrix} function in \proglang{R} does a
+considerable amount of symbolic analysis behind the scenes, one usually
+ends up with full-rank model matrices.  The common cases of
+rank-deficiency, such as incorporating both a constant term and a full
+set of indicator columns for the levels of a factor, are eliminated.
+Other, more subtle, situations will not be detected at this stage,
+however.  A simple example occurs when there is a ``missing cell'' in a
+two-way layout and the interaction of the two factors is included in
+the model.
+
+\begin{verbatim}
+dd <- data.frame(f1 = gl(4, 6, labels = LETTERS[1:4]),
+                 f2 = gl(3, 2, labels = letters[1:3]))[-(7:8), ]
+xtabs(~ f2 + f1, dd)                    # one missing cell
+mm <- model.matrix(~ f1 * f2, dd)
+kappa(mm)         # large condition number, indicating rank deficiency
+rcond(mm)         # alternative evaluation, the reciprocal condition number
+(c(rank=qr(mm)$rank, p=ncol(mm))) # rank as computed in R's qr function
+set.seed(1)
+dd$y <- mm %*% seq_len(ncol(mm)) + rnorm(nrow(mm), sd = 0.1)
+                         # lm detects the rank deficiency
+fm1 <- lm(y ~ f1 * f2, dd)
+writeLines(capture.output(print(summary(fm1), signif.stars=FALSE))[9:22])
+\end{verbatim}
+
+The \code{lm} function for fitting linear models in \proglang{R} uses
+a rank-revealing form of the QR decomposition.  When the model matrix
+is determined to be rank deficient, according to the threshold used in
+\proglang{R}'s QR decomposition, the model matrix is reduced to
+$\rank(\bm X)$ columns by pivoting selected columns (those that are
+apparently linearly dependent on columns to their left) to the right
+hand side of the matrix.  A solution for this reduced model matrix is
+determined and the coefficients and standard errors for the redundant
+columns are flagged as missing.
+
+An alternative approach is to evaluate the ``pseudo-inverse'' of $\bm
+X$ from the singular value decomposition (SVD) of $\bm X$ or the
+eigendecomposition of $\bm X^\prime\bm X$.  The SVD is of the form
+\begin{displaymath}
+  \bm X=\bm U\bm D\bm V^\prime=\bm U_1\bm D_1\bm V^\prime
+\end{displaymath}
+where $\bm U$ is an orthogonal $n\times n$ matrix and $\bm U_1$ is its
+leftmost $p$ columns, $\bm D$ is $n\times p$ and zero off the main
+diagonal so that $\bm D_1$ is a $p\times p$ diagonal matrix with
+non-increasing, non-negative diagonal elements, and $\bm V$ is a $p\times
+p$ orthogonal matrix.  The pseudo-inverse of $\bm D_1$, written $\bm
+D_1^+$ is a $p\times p$ diagonal matrix whose first $r=\rank(\bm X)$
+diagonal elements are the inverses of the corresponding diagonal
+elements of $\bm D_1$ and whose last $p-r$ diagonal elements are zero.
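+In this notation the pseudo-inverse of $\bm X$ is $\bm X^+=\bm V\bm
+D_1^+\bm U_1^\prime$ and the minimum norm least squares solution is
+\begin{displaymath}
+  \widehat{\bm\beta}=\bm X^+\bm y=\bm V\bm D_1^+\bm U_1^\prime\bm y.
+\end{displaymath}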
+
+The tolerance for deciding whether a diagonal element of $\bm D$ is
+(effectively) zero is a multiple of the largest singular value
+(i.e.\ the $(1,1)$ element of $\bm D$).
+
+In Figure~\ref{Dplus} a utility function, \code{Dplus}, is defined to
+return the pseudo-inverse as a diagonal matrix, given the singular
+values (the diagonal of $\bm D$) and the apparent rank.  To be able to
+use this function with the eigendecomposition where the eigenvalues
+are in increasing order, a Boolean argument \code{rev} is included
+indicating whether the order is reversed.
+
+\begin{figure}[htb]
+  \begin{quote}
+    \noindent
+    \ttfamily
+    \hlstd{}\hlkwa{using\ }\hlstd{Eigen}\hlopt{::}\hlstd{DiagonalMatrix}\hlopt{;}\hspace*{\fill}\\
+    \hlstd{}\hlkwa{using\ }\hlstd{Eigen}\hlopt{::}\hlstd{Dynamic}\hlopt{;}\hspace*{\fill}\\
+    \hlstd{}\hspace*{\fill}\\
+    \hlkwc{inline\ }\hlstd{DiagonalMatrix}\hlopt{$<$}\hlstd{}\hlkwb{double}\hlstd{}\hlopt{,\ }\hlstd{Dynamic}\hlopt{$>$\ }\hlstd{}\hlkwd{Dplus}\hlstd{}\hlopt{(}\hlstd{}\hlkwb{const\ }\hlstd{ArrayXd}\hlopt{\&\ }\hlstd{D}\hlopt{,}\hspace*{\fill}\\
+    \hlstd{}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\hlstd{}\hlkwb{int\ }\hlstd{r}\hlopt{,\ }\hlstd{}\hlkwb{bool\ }\hlstd{rev}\hlopt{=}\hlstd{}\hlkwa{false}\hlstd{}\hlopt{)\ \{}\hspace*{\fill}\\
+    \hlstd{}\hlstd{\ \ \ \ }\hlstd{VectorXd}\hlstd{\ \ \ }\hlstd{}\hlkwd{Di}\hlstd{}\hlopt{(}\hlstd{VectorXd}\hlopt{::}\hlstd{}\hlkwd{Constant}\hlstd{}\hlopt{(}\hlstd{D}\hlopt{.}\hlstd{}\hlkwd{size}\hlstd{}\hlopt{(),\ }\hlstd{}\hlnum{0}\hlstd{}\hlopt{.));}\hspace*{\fill}\\
+    \hlstd{}\hlstd{\ \ \ \ }\hlstd{}\hlkwa{if\ }\hlstd{}\hlopt{(}\hlstd{rev}\hlopt{)\ }\hlstd{Di}\hlopt{.}\hlstd{}\hlkwd{tail}\hlstd{}\hlopt{(}\hlstd{r}\hlopt{)}\hlstd{\ \ }\hlopt{=\ }\hlstd{D}\hlopt{.}\hlstd{}\hlkwd{tail}\hlstd{}\hlopt{(}\hlstd{r}\hlopt{).}\hlstd{}\hlkwd{inverse}\hlstd{}\hlopt{();}\hspace*{\fill}\\
+    \hlstd{}\hlstd{\ \ \ \ }\hlstd{}\hlkwa{else\ }\hlstd{Di}\hlopt{.}\hlstd{}\hlkwd{head}\hlstd{}\hlopt{(}\hlstd{r}\hlopt{)}\hlstd{\ \ \ \ \ \ }\hlopt{=\ }\hlstd{D}\hlopt{.}\hlstd{}\hlkwd{head}\hlstd{}\hlopt{(}\hlstd{r}\hlopt{).}\hlstd{}\hlkwd{inverse}\hlstd{}\hlopt{();}\hspace*{\fill}\\
+    \hlstd{}\hlstd{\ \ \ \ }\hlstd{}\hlkwa{return\ }\hlstd{DiagonalMatrix}\hlopt{$<$}\hlstd{}\hlkwb{double}\hlstd{}\hlopt{,\ }\hlstd{Dynamic}\hlopt{$>$(}\hlstd{Di}\hlopt{);}\hspace*{\fill}\\
+    \hlstd{}\hlopt{\}}\hlstd{}\hspace*{\fill}
+    \normalfont
+    \normalsize
+  \end{quote}
+  \caption{\textbf{DplusCpp}: Create the diagonal matrix $\bm D^+$ from the array of singular values $\bm d$}
+  \label{Dplus}
+\end{figure}
+% using Eigen::DiagonalMatrix;
+% using Eigen::Dynamic;
+
+% inline DiagonalMatrix<double, Dynamic> Dplus(const ArrayXd& D,
+%                                              int r, bool rev=false) {
+%     VectorXd   Di(VectorXd::Constant(D.size(), 0.));
+%     if (rev) Di.tail(r)  = D.tail(r).inverse();
+%     else Di.head(r)      = D.head(r).inverse();
+%     return DiagonalMatrix<double, Dynamic>(Di);
+% }
+
+\subsection{Least squares using the SVD}
+\label{sec:SVDls}
+
+With these definitions the code for least squares using the singular
+value decomposition can be written as in Figure~\ref{SVDLS}.
+\begin{figure}[htb]
+  \begin{quote}
+    \noindent
+    \ttfamily
+    \hlstd{}\hlkwa{using\ }\hlstd{Eigen}\hlopt{::}\hlstd{JacobiSVD}\hlopt{;}\hspace*{\fill}\\
+    \hlstd{}\hspace*{\fill}\\
+    \hlkwb{const\ }\hlstd{JacobiSVD}\hlopt{$<$}\hlstd{MatrixXd}\hlopt{$>$}\hspace*{\fill}\\
+    \hlstd{}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\hlstd{}\hlkwd{UDV}\hlstd{}\hlopt{(}\hlstd{X}\hlopt{.}\hlstd{}\hlkwd{jacobiSvd}\hlstd{}\hlopt{(}\hlstd{Eigen}\hlopt{::}\hlstd{ComputeThinU}\hlopt{\textbar }\hlstd{Eigen}\hlopt{::}\hlstd{ComputeThinV}\hlopt{));}\hspace*{\fill}\\
+    \hlstd{}\hlkwb{const\ }\hlstd{ArrayXd}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\hlstd{}\hlkwd{D}\hlstd{}\hlopt{(}\hlstd{UDV}\hlopt{.}\hlstd{}\hlkwd{singularValues}\hlstd{}\hlopt{());}\hspace*{\fill}\\
+    \hlstd{}\hlkwb{const\ int}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\hlkwb{}\hlstd{}\hlkwd{r}\hlstd{}\hlopt{((}\hlstd{D\ }\hlopt{$>$\ }\hlstd{D}\hlopt{{[}}\hlstd{}\hlnum{0}\hlstd{}\hlopt{{]}\ {*}\ }\hlstd{}\hlkwd{threshold}\hlstd{}\hlopt{()).}\hlstd{}\hlkwd{count}\hlstd{}\hlopt{());}\hspace*{\fill}\\
+    \hlstd{}\hlkwb{const\ }\hlstd{MatrixXd}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ }\hlstd{}\hlkwd{VDp}\hlstd{}\hlopt{(}\hlstd{UDV}\hlopt{.}\hlstd{}\hlkwd{matrixV}\hlstd{}\hlopt{()\ {*}\ }\hlstd{}\hlkwd{Dplus}\hlstd{}\hlopt{(}\hlstd{D}\hlopt{,\ }\hlstd{r}\hlopt{));}\hspace*{\fill}\\
+    \hlstd{}\hlkwb{const\ }\hlstd{VectorXd}\hlstd{\ \ \ \ \ \ \ \ }\hlstd{}\hlkwd{betahat}\hlstd{}\hlopt{(}\hlstd{VDp\ }\hlopt{{*}\ }\hlstd{UDV}\hlopt{.}\hlstd{}\hlkwd{matrixU}\hlstd{}\hlopt{().}\hlstd{}\hlkwd{adjoint}\hlstd{}\hlopt{()\ {*}\ }\hlstd{y}\hlopt{);}\hspace*{\fill}\\
+    \hlstd{}\hlkwb{const\ int}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\hlkwb{}\hlstd{}\hlkwd{df}\hlstd{}\hlopt{(}\hlstd{n\ }\hlopt{{-}\ }\hlstd{r}\hlopt{);}\hspace*{\fill}\\
+    \hlstd{}\hlkwb{const\ }\hlstd{VectorXd}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ }\hlstd{}\hlkwd{se}\hlstd{}\hlopt{(}\hlstd{s\ }\hlopt{{*}\ }\hlstd{VDp}\hlopt{.}\hlstd{}\hlkwd{rowwise}\hlstd{}\hlopt{().}\hlstd{}\hlkwd{norm}\hlstd{}\hlopt{());}\hlstd{}\hspace*{\fill}
+    \normalfont
+    \normalsize
+  \end{quote}
+  \caption{\textbf{SVDLSCpp}: Least squares using the SVD}
+  \label{SVDLS}
+\end{figure}
+% using Eigen::JacobiSVD;
+
+% const JacobiSVD<MatrixXd>
+%                 UDV(X.jacobiSvd(Eigen::ComputeThinU|Eigen::ComputeThinV));
+% const ArrayXd               D(UDV.singularValues());
+% const int                   r((D > D[0] * threshold()).count());
+% const MatrixXd            VDp(UDV.matrixV() * Dplus(D, r));
+% const VectorXd        betahat(VDp * UDV.matrixU().adjoint() * y);
+% const int                  df(n - r);
+% const VectorXd             se(s * VDp.rowwise().norm());
+In the rank-deficient case this code will produce a complete set of
+coefficients and their standard errors.  It is up to the user to note
+that the rank is less than $p$, the number of columns in $\bm X$, and
+hence that the estimated coefficients are just one of an infinite
+number of coefficient vectors that produce the same fitted values.  It
+happens that this solution is the minimum norm solution.
+
+The standard errors of the coefficient estimates in the rank-deficient
+case must be interpreted carefully.  The solution with one or more missing
+coefficients, as returned by the \code{lm.fit} function in
+\proglang{R} and by the column-pivoted QR decomposition described in
+Section~\ref{sec:colPivQR}, does not provide standard errors for the
+missing coefficients.  That is, both the coefficient and its standard
+error are returned as \code{NA} because the least squares solution is
+performed on a reduced model matrix.  The solution returned by the SVD
+method is also based on a reduced model matrix, but the $p$ coefficient
+estimates and their $p$ standard errors do not reflect this.  They are,
+in fact, linear combinations of $r$ underlying coefficient estimates
+and their standard errors.
+
+\subsection{Least squares using the eigendecomposition}
+\label{sec:eigendecomp}
+
+The eigendecomposition of $\bm X^\prime\bm X$ is defined as
+\begin{displaymath}
+  \bm X^\prime\bm X=\bm V\bm\Lambda\bm V^\prime
+\end{displaymath}
+where $\bm V$, the matrix of eigenvectors, is a $p\times p$ orthogonal
+matrix and $\bm\Lambda$ is a $p\times p$ diagonal matrix with
+non-increasing, non-negative diagonal elements, called the eigenvalues
+of $\bm X^\prime\bm X$.  When the eigenvalues are distinct this $\bm
+V$ is the same as that in the SVD.  Also, the eigenvalues of $\bm
+X^\prime\bm X$ are the squares of the singular values of $\bm X$.
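+In particular, writing $\bm\Lambda^+$ for the pseudo-inverse of
+$\bm\Lambda$ and $\bm\Lambda^{+1/2}$ for the pseudo-inverse of the
+diagonal matrix of square roots of the eigenvalues, the minimum norm
+least squares solution can be written
+\begin{displaymath}
+  \widehat{\bm\beta}=\bm V\bm\Lambda^+\bm V^\prime\bm X^\prime\bm y
+  =\left(\bm V\bm\Lambda^{+1/2}\right)
+   \left(\bm V\bm\Lambda^{+1/2}\right)^\prime\bm X^\prime\bm y,
+\end{displaymath}
+which, in exact arithmetic, is the same solution as that obtained from
+the SVD.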
+
+With these definitions one can adapt much of the code from the SVD
+method for the eigendecomposition, as shown in Figure~\ref{SymmEigLS}.
+\begin{figure}[htb]
+  \begin{quote}
+    \noindent
+    \ttfamily
+    \hlstd{}\hlkwa{using\ }\hlstd{Eigen}\hlopt{::}\hlstd{SelfAdjointEigenSolver}\hlopt{;}\hspace*{\fill}\\
+    \hlstd{}\hspace*{\fill}\\
+    \hlkwb{const\ }\hlstd{SelfAdjointEigenSolver}\hlopt{$<$}\hlstd{MatrixXd}\hlopt{$>$}\hspace*{\fill}\\
+    \hlstd{}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\hlstd{}\hlkwd{VLV}\hlstd{}\hlopt{(}\hlstd{}\hlkwd{MatrixXd}\hlstd{}\hlopt{(}\hlstd{p}\hlopt{,\ }\hlstd{p}\hlopt{).}\hlstd{}\hlkwd{setZero}\hlstd{}\hlopt{()}\hspace*{\fill}\\
+    \hlstd{}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\hlstd{}\hlopt{.}\hlstd{selfadjointView}\hlopt{$<$}\hlstd{Lower}\hlopt{$>$()}\hspace*{\fill}\\
+    \hlstd{}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\hlstd{}\hlopt{.}\hlstd{}\hlkwd{rankUpdate}\hlstd{}\hlopt{(}\hlstd{X}\hlopt{.}\hlstd{}\hlkwd{adjoint}\hlstd{}\hlopt{()));}\hspace*{\fill}\\
+    \hlstd{}\hlkwb{const\ }\hlstd{ArrayXd}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\hlstd{}\hlkwd{D}\hlstd{}\hlopt{(}\hlstd{VLV}\hlopt{.}\hlstd{}\hlkwd{eigenvalues}\hlstd{}\hlopt{());}\hspace*{\fill}\\
+    \hlstd{}\hlkwb{const\ int}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }\hlkwb{}\hlstd{}\hlkwd{r}\hlstd{}\hlopt{((}\hlstd{D\ }\hlopt{$>$\ }\hlstd{D}\hlopt{{[}}\hlstd{p\ }\hlopt{{-}\ }\hlstd{}\hlnum{1}\hlstd{}\hlopt{{]}\ {*}\ }\hlstd{}\hlkwd{threshold}\hlstd{}\hlopt{()).}\hlstd{}\hlkwd{count}\hlstd{}\hlopt{());}\hspace*{\fill}\\
+    \hlstd{}\hlkwb{const\ }\hlstd{MatrixXd}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ }\hlstd{}\hlkwd{VDp}\hlstd{}\hlopt{(}\hlstd{VLV}\hlopt{.}\hlstd{}\hlkwd{eigenvectors}\hlstd{}\hlopt{()\ {*}\ }\hlstd{}\hlkwd{Dplus}\hlstd{}\hlopt{(}\hlstd{D}\hlopt{.}\hlstd{}\hlkwd{sqrt}\hlstd{}\hlopt{(),}\hlstd{r}\hlopt{,}\hlstd{}\hlkwa{true}\hlstd{}\hlopt{));}\hspace*{\fill}\\
+    \hlstd{}\hlkwb{const\ }\hlstd{VectorXd}\hlstd{\ \ \ \ \ \ \ \ }\hlstd{}\hlkwd{betahat}\hlstd{}\hlopt{(}\hlstd{VDp\ }\hlopt{{*}\ }\hlstd{VDp}\hlopt{.}\hlstd{}\hlkwd{adjoint}\hlstd{}\hlopt{()\ {*}\ }\hlstd{X}\hlopt{.}\hlstd{}\hlkwd{adjoint}\hlstd{}\hlopt{()\ {*}\ }\hlstd{y}\hlopt{);}\hspace*{\fill}\\
+    \hlstd{}\hlkwb{const\ }\hlstd{VectorXd}\hlstd{\ \ \ \ \ \ \ \ \ \ \ \ \ }\hlstd{}\hlkwd{se}\hlstd{}\hlopt{(}\hlstd{s\ }\hlopt{{*}\ }\hlstd{VDp}\hlopt{.}\hlstd{}\hlkwd{rowwise}\hlstd{}\hlopt{().}\hlstd{}\hlkwd{norm}\hlstd{}\hlopt{());}\hlstd{}\hspace*{\fill}
+    \normalfont
+    \normalsize
+  \end{quote}
+  \caption{\textbf{SymmEigLSCpp}: Least squares using the eigendecomposition}
+  \label{SymmEigLS}
+\end{figure}
+% using Eigen::SelfAdjointEigenSolver;
+
+% const SelfAdjointEigenSolver<MatrixXd>
+%                           VLV(MatrixXd(p, p).setZero()
+%                                             .selfadjointView<Lower>
+%                                             .rankUpdate(X.adjoint()));
[TRUNCATED]

To get the complete diff run:
    svnlook diff /svnroot/rcpp -r 3290

