[Gmm-commits] r182 - in pkg/gmm: . vignettes
noreply at r-forge.r-project.org
Thu Feb 11 17:31:05 CET 2021
Author: chaussep
Date: 2021-02-11 17:31:04 +0100 (Thu, 11 Feb 2021)
New Revision: 182
Modified:
pkg/gmm/DESCRIPTION
pkg/gmm/vignettes/gmm_with_R.pdf
pkg/gmm/vignettes/gmm_with_R.rnw
Log:
changed the vignette builder and removed the dependence on jss.cls
Modified: pkg/gmm/DESCRIPTION
===================================================================
--- pkg/gmm/DESCRIPTION 2021-02-09 22:33:58 UTC (rev 181)
+++ pkg/gmm/DESCRIPTION 2021-02-11 16:31:04 UTC (rev 182)
@@ -8,7 +8,7 @@
Description: It is a complete suite for estimating models based on moment conditions. It includes the two-step Generalized method of moments (Hansen 1982; <doi:10.2307/1912775>), the iterated GMM and continuous updated estimator (Hansen, Eaton and Yaron 1996; <doi:10.2307/1392442>), and several methods that belong to the Generalized Empirical Likelihood family of estimators (Smith 1997; <doi:10.1111/j.0013-0133.1997.174.x>, Kitamura 1997; <doi:10.1214/aos/1069362388>, Newey and Smith 2004; <doi:10.1111/j.1468-0262.2004.00482.x>, and Anatolyev 2005 <doi:10.1111/j.1468-0262.2005.00601.x>).
Depends: R (>= 2.10.0), sandwich
NeedsCompilation: yes
-Suggests: mvtnorm, car, stabledist, MASS, timeDate, timeSeries
+Suggests: knitr, mvtnorm, car, stabledist, MASS, timeDate, timeSeries
Imports: stats, methods, grDevices, graphics
License: GPL (>= 2)
-
+VignetteBuilder: knitr
Modified: pkg/gmm/vignettes/gmm_with_R.pdf
===================================================================
(Binary files differ)
Modified: pkg/gmm/vignettes/gmm_with_R.rnw
===================================================================
--- pkg/gmm/vignettes/gmm_with_R.rnw 2021-02-09 22:33:58 UTC (rev 181)
+++ pkg/gmm/vignettes/gmm_with_R.rnw 2021-02-11 16:31:04 UTC (rev 182)
@@ -1,24 +1,59 @@
-\documentclass[article, nojss]{jss}
+\documentclass[11pt,letterpaper]{article}
+\usepackage{amsthm}
+\usepackage[hmargin=2cm,vmargin=2.5cm]{geometry}
\newtheorem{theorem}{Theorem}
\newtheorem{col}{Corollary}
\newtheorem{lem}{Lemma}
+\usepackage[utf8]{inputenc}
\newtheorem{ass}{Assumption}
-
+\usepackage{amsmath}
+\usepackage{verbatim}
+\usepackage[round]{natbib}
+\usepackage{amsfonts}
\usepackage{amssymb}
-\usepackage[utf8x]{inputenc}
-%% need no \usepackage{Sweave.sty}
-\SweaveOpts{keep.source=TRUE}
+\usepackage{graphicx}
+\usepackage{hyperref}
+\hypersetup{
+ colorlinks,
+ citecolor=black,
+ filecolor=black,
+ linkcolor=black,
+ urlcolor=black
+}
+\bibliographystyle{plainnat}
+\newcommand{\E}{\mathrm{E}}
+\newcommand{\diag}{\mathrm{diag}}
+\newcommand{\Prob}{\mathrm{Pr}}
+\newcommand{\Var}{\mathrm{Var}}
+\let\proglang=\textsf
+\newcommand{\pkg}[1]{{\fontseries{b}\selectfont #1}}
+\newcommand{\Vect}{\mathrm{Vec}}
+\newcommand{\Cov}{\mathrm{Cov}}
+\newcommand{\conP}{\overset{p}{\to}}
+\newcommand{\conD}{\overset{d}{\to}}
+\newcommand\Real{ \mathbb{R} }
+\newcommand\Complex{ \mathbb{C} }
+\newcommand\Natural{ \mathbb{N} }
+\newcommand\rv{{\cal R}}
+\newcommand\Q{\mathbb{Q}}
+\newcommand\PR{{\cal R}}
+\newcommand\T{{\cal T}}
+\newcommand\Hi{{\cal H}}
+\newcommand\La{{\cal L}}
+\newcommand\plim{\mathrm{plim}}
+\renewcommand{\epsilon}{\varepsilon}
+
+\begin{document}
+
\author{Pierre Chauss\'e}
\title{Computing Generalized Method of Moments and Generalized Empirical Likelihood with \proglang{R}}
-\Plainauthor{Pierre Chauss\'e}
-\Plaintitle{Computing Generalized Method of Moments and Generalized Empirical Likelihood with R}
-\Shorttitle{GMM and GEL with R}
+\maketitle
-\Abstract{This paper shows how to estimate models by the generalized
+\begin{abstract}This paper shows how to estimate models by the generalized
method of moments and the generalized empirical likelihood using the
- \proglang{R} package \pkg{gmm}. A brief discussion is offered on the
+  \proglang{R} package \pkg{gmm}. A brief discussion is offered on the
theoretical aspects of both methods and the functionality of the
package is presented through several examples in economics and
finance. It is a modified version of \cite{chausse10} published in
@@ -25,34 +60,18 @@
the Journal of Statistical Software. It has been adapted to the
version 1.4-0. \textbf{Notice that the maintenance of the package is
converging to zero. The new \pkg{momentfit} package, available on
- CRAN, will soon replace the \pkg{gmm} package.} }
+   CRAN, will soon replace the \pkg{gmm} package.}
+\end{abstract}
-\Keywords{generalized empirical likelihood,
- generalized method of moments, empirical likelihood, continuous
- updated estimator, exponential tilting, exponentially tilted
- empirical likelihood, \proglang{R}} \Plainkeywords{generalized
- empirical likelihood, generalized method of moments, empirical
- likelihood, continuous updated estimator, exponential tilting,
- exponentially tilted empirical likelihood, R}
-
-\Address{
- Pierre Chaus\'e\\
- Department of Economics\\
- University of Waterloo\\
- Waterloo (Ontario), Canada\\
- E-mail: \email{pchausse at uwaterloo.ca}
-}
-\begin{document}
-
-\SweaveOpts{engine=R,eps=FALSE}
%\VignetteIndexEntry{Computing Generalized Empirical Likelihood and Generalized Method of Moments with R}
%\VignetteDepends{gmm,mvtnorm,stabledist, car, MASS, timeDate, timeSeries}
%\VignetteKeywords{generalized empirical likelihood, generalized method of moments, empirical likelihood, continuous updated estimator, exponential tilting, exponentially tilted empirical likelihood}
%\VignettePackage{gmm}
+%\VignetteEngine{knitr::knitr}
+<<echo=FALSE>>=
+library(knitr)
+opts_chunk$set(size='footnotesize', fig.height=5, out.width='70%')
+@
-\newcommand\Real{ \mathbb{R} }
-\newcommand\Complex{ \mathbb{C} }
-\newcommand\rv{{\cal R}}
\section{Introduction}
@@ -62,7 +81,7 @@
Asymptotic properties of GMM and generalized empirical likelihood (GEL) are now well established in the econometric literature. \cite{newey-smith04} and \cite{anatolyev05} have compared their second-order asymptotic properties. In particular, they show that the second-order bias of the empirical likelihood (EL) estimator, which is a special case of GEL, is smaller than the bias of the estimators from the three GMM methods. Furthermore, as opposed to GMM, the bias does not increase with the number of moment conditions. Since the efficiency improves when the number of conditions goes up, this is a valuable property. However, these are only asymptotic results which do not necessarily hold in small samples, as shown by \cite{guggenberger08}. In order to analyze small-sample properties, we have to rely on Monte Carlo simulations. However, Monte Carlo studies on methods such as GMM or GEL depend on complicated algorithms which are often homemade. Because of that, results from such studies are not easy to reproduce. The solution should be to use a common tool which can be tested and improved upon by the users. Because it is open source, \proglang{R} offers a perfect platform for such a tool.
-The \pkg{gmm} package allows to estimate models using the three GMM methods, the empirical likelihood and the exponential tilting, which belong to the family of GEL methods, and the exponentially tilted empirical likelihood which was proposed by \cite{schennach07}, Also it offers several options to estimate the covariance matrix of the moment conditions. Users can also choose between \code{optim}, if no restrictions are required on the coefficients of the model to be estimated, and either \code{nlminb} or \code{constrOptim} for constrained optimizations. The results are presented in such a way that \proglang{R} users who are familiar with \code{lm} objects, find it natural. In fact, the same methods are available for \code{gmm} and \code{gel} objects produced by the estimation procedures.
+The \pkg{gmm} package allows one to estimate models using the three GMM methods, the empirical likelihood and the exponential tilting, which belong to the family of GEL methods, and the exponentially tilted empirical likelihood, which was proposed by \cite{schennach07}. It also offers several options to estimate the covariance matrix of the moment conditions. Users can also choose between \textit{optim}, if no restrictions are required on the coefficients of the model to be estimated, and either \textit{nlminb} or \textit{constrOptim} for constrained optimizations. The results are presented in such a way that \proglang{R} users who are familiar with \textit{lm} objects find them natural. In fact, the same methods are available for \textit{gmm} and \textit{gel} objects produced by the estimation procedures.
The paper is organized as follows. Section 2 presents the theoretical aspects of the GMM method along with several examples in economics and finance. Through these examples, the functionality of the \pkg{gmm} package is presented in detail. Section 3 presents the GEL method with some of the examples used in Section 2. Section 4 concludes and Section 5 gives the computational details of the package.
@@ -168,11 +187,11 @@
library(gmm)
@
-The main function is \code{gmm()} which creates an object of class \code{gmm}. Many options are available but in many cases they can be set to their default values. They are explained in details below through examples. The main arguments are \code{g} and \code{x}. For a linear model, \code{g} is a formula like \code{y~z1+z2} and \code{x} the matrix of instruments. In the nonlinear case, they are respectively the function $g(\theta,x_i)$ and its argument. The available methods are \code{coef}, \code{vcov}, \code{summary}, \code{residuals}, \code{fitted.values}, \code{plot}, \code{confint}. The model and data in a \code{data.frame} format can be extracted by the generic function \code{model.frame}.
+The main function is \textit{gmm()}, which creates an object of class \textit{gmm}. Many options are available, but in many cases they can be set to their default values. They are explained in detail below through examples. The main arguments are \textit{g} and \textit{x}. For a linear model, \textit{g} is a formula like \textit{y~z1+z2} and \textit{x} is the matrix of instruments. In the nonlinear case, they are respectively the function $g(\theta,x_i)$ and its argument. The available methods are \textit{coef}, \textit{vcov}, \textit{summary}, \textit{residuals}, \textit{fitted.values}, \textit{plot} and \textit{confint}. The model and data in a \textit{data.frame} format can be extracted by the generic function \textit{model.frame}.
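+
+As a minimal sketch of this interface (not evaluated here; the response \textit{y}, the regressors \textit{z1} and \textit{z2}, and the matrix of instruments \textit{h} are assumed to be defined already), a linear model could be estimated and inspected as follows:
+<<eval=FALSE>>=
+## hypothetical linear example: y regressed on z1 and z2,
+## with the columns of h used as instruments
+fit <- gmm(y ~ z1 + z2, x = h)
+coef(fit)          # point estimates
+vcov(fit)          # estimated covariance matrix
+summary(fit)       # detailed output, including the J-test
+model.frame(fit)   # model and data in a data.frame format
+@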
\subsection{Estimating the parameters of a normal distribution}
- This example\footnote{Thanks to Dieter Rozenich for his suggestion.}, is not something we want to do in practice, but its simplicity allows us to understand how to implement the \code{gmm()} procedure by providing the gradient of $g(\theta,x_i)$. It is also a good example of the weakness of GMM when the moment conditions are not sufficiently informative. In fact, the ML estimators of the mean and the variance of a normal distribution are more efficient because the likelihood carries more information than few moment conditions.
+ This example\footnote{Thanks to Dieter Rozenich for his suggestion.} is not something we want to do in practice, but its simplicity allows us to understand how to implement the \textit{gmm()} procedure by providing the gradient of $g(\theta,x_i)$. It is also a good example of the weakness of GMM when the moment conditions are not sufficiently informative. In fact, the ML estimators of the mean and the variance of a normal distribution are more efficient because the likelihood carries more information than a few moment conditions.
For the two parameters of a normal distribution $(\mu,\sigma)$ we have the following vector of moment conditions:
\[
@@ -221,17 +240,17 @@
n <- 200
x1 <- rnorm(n, mean = 4, sd = 2)
@
-We then run \code{gmm} using the starting values $(\mu_0,\sigma^2_0)=(0,0)$
+We then run \textit{gmm} using the starting values $(\mu_0,\sigma^2_0)=(0,0)$
<<>>=
print(res <- gmm(g1,x1,c(mu = 0, sig = 0), grad = Dg))
@
-The \code{summary} method prints more results from the estimation:
+The \textit{summary} method prints more results from the estimation:
<<>>=
summary(res)
@
The section "Initial values of the coefficients" shows the first step estimates used to either compute the weighting matrix in the 2-step GMM or the fixed bandwidth in CUE or iterative GMM.
-The J-test of over-identifying restrictions can also be extracted by using the method \code{specTest}:
+The J-test of over-identifying restrictions can also be extracted by using the method \textit{specTest}:
<<>>=
specTest(res)
@
@@ -257,7 +276,7 @@
return(list(bias=bias,Variance=Var,MSE=MSE))
}
@
-The following results can be reproduced with $n=50$, $iter=2000$ and by setting \code{set.seed(345)}:
+The following results can be reproduced with $n=50$, $iter=2000$ and by setting \textit{set.seed(345)}:
\begin{center}
\begin{tabular}{|c|c|c|c||c|c|c|} \hline
&\multicolumn{3}{c||}{$\mu$}&\multicolumn{3}{c}{$\sigma$} \\ \hline
@@ -285,7 +304,7 @@
\end{array}
\right. ,
\]
-The function \code{charStable} included in the package computes the characteristic function and can be used to construct $g(\theta,x_i)$. To avoid dealing with complex numbers, it returns the imaginary and real parts in separate columns because both should have zero expectation. The function is:
+The function \textit{charStable} included in the package computes the characteristic function and can be used to construct $g(\theta,x_i)$. To avoid dealing with complex numbers, it returns the imaginary and real parts in separate columns because both should have zero expectation. The function is:
<<>>=
g2 <- function(theta,x)
{
@@ -301,7 +320,7 @@
return(gt)
}
@
-The parameters of a simulated random vector can be estimated as follows (by default, $\gamma$ and $\delta$ are set to $1$ and $0$ respectively in \code{rstable}). For the example, the starting values are the ones of a normal distribution with mean 0 and variance equals to \code{var(x)}:
+The parameters of a simulated random vector can be estimated as follows (by default, $\gamma$ and $\delta$ are set to $1$ and $0$ respectively in \textit{rstable}). For the example, the starting values are the ones of a normal distribution with mean 0 and variance equal to \textit{var(x)}:
<<>>=
library(stabledist)
set.seed(345)
@@ -309,11 +328,11 @@
t0 <- c(alpha = 2, beta = 0, gamma = sd(x2)/sqrt(2), delta = 0)
print(res <- gmm(g2,x2,t0))
@
-The result is not very close to the true parameters. But we can see why by looking at the J-test that is provided by the \code{summary} method:
+The result is not very close to the true parameters. But we can see why by looking at the J-test that is provided by the \textit{summary} method:
<<>>=
summary(res)
@
-The null hypothesis that the moment conditions are satisfied is rejected. For nonlinear models, a significant J-test may indicate that we have not reached the global minimum. Furthermore, the standard deviation of the coefficient of $\delta$ indicates that the covariance matrix is nearly singular. Notice also that the convergence code is equal to 1, which indicates that the algorithm did not converge. We could try different starting values, increase the number of iterations in the control option of \code{optim} or use \code{nlminb} which allows to put restrictions on the parameter space. The former would work but the latter will allow us to see how to select another optimizer. The option \code{optfct} can be modified to use this algorithm instead of \code{optim}. In that case, we can specify the upper and lower bounds of $\theta$.
+The null hypothesis that the moment conditions are satisfied is rejected. For nonlinear models, a significant J-test may indicate that we have not reached the global minimum. Furthermore, the standard deviation of the coefficient of $\delta$ indicates that the covariance matrix is nearly singular. Notice also that the convergence code is equal to 1, which indicates that the algorithm did not converge. We could try different starting values, increase the number of iterations in the control option of \textit{optim}, or use \textit{nlminb}, which allows one to put restrictions on the parameter space. The former would work, but the latter allows us to see how to select another optimizer. The option \textit{optfct} can be modified to use this algorithm instead of \textit{optim}. In that case, we can specify the upper and lower bounds of $\theta$.
<<>>=
res2 <- gmm(g2,x2,t0,optfct="nlminb",lower=c(0,-1,0,-Inf),upper=c(2,1,Inf,Inf))
summary(res2)
@@ -320,8 +339,8 @@
@
Although the above modification solved the convergence problem, there is another issue that we need to address. The first-step estimate used to compute the weighting matrix is almost identical to the starting values. There is therefore a convergence problem in the first step. In fact, choosing the initial $\alpha$ to be on the boundary was not a wise choice. Also, it seems that an initial value of $\beta$ equal to zero makes the objective function harder to minimize. Having a global minimum for the first-step estimate is important if we care about efficiency: a wrong estimate prevents the weighting matrix from being a consistent estimate of the optimal matrix. The information about convergence is included in the 'initialAlgoInfo' component of the gmm object, as the sketch below illustrates.
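+
+For instance, the first-step convergence information of \textit{res2} can be inspected directly (a sketch, not evaluated here; the exact content of the component depends on the optimizer used):
+<<eval=FALSE>>=
+## convergence information of the first-step estimation
+res2$initialAlgoInfo
+@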
-We conclude this example by estimating the parameters for a vector of stock returns from the data set \code{Finance} that comes with the \pkg{gmm} package.
-<<>>=
+We conclude this example by estimating the parameters for a vector of stock returns from the data set \textit{Finance} that comes with the \pkg{gmm} package.
+<<warning=FALSE>>=
data(Finance)
x3 <- Finance[1:1500,"WMK"]
t0<-c(alpha = 1.8, beta = 0.1, gamma = sd(x3)/sqrt(2),delta = 0)
@@ -328,7 +347,7 @@
res3 <- gmm(g2,x3,t0,optfct="nlminb")
summary(res3)
@
-For this sub-sample, the hypothesis that the return follows a stable distribution is rejected. The normality assumption can be analyzed by testing $H_0:\alpha=2,\beta=0$ using \code{linearHypothesis} from the \pkg{car} package:
+For this sub-sample, the hypothesis that the return follows a stable distribution is rejected. The normality assumption can be analyzed by testing $H_0:\alpha=2,\beta=0$ using \textit{linearHypothesis} from the \pkg{car} package:
<<>>=
library(car)
linearHypothesis(res3,cbind(diag(2),c(0,0),c(0,0)),c(2,0))
@@ -365,7 +384,7 @@
w <- exp(-x4^2) + e[,1]
y <- 0.1*w + e[,2]
@
-where \code{rmvnorm} is a multivariate normal distribution random generator which is included in the package \pkg{mvtnorm} (\cite{mvtnorm}). For a linear model, the $g$ argument is a formula that specifies the right- and left-hand sides as for \code{lm} and $x$ is the matrix of instruments:
+where \textit{rmvnorm} is a random number generator for the multivariate normal distribution, included in the package \pkg{mvtnorm} (\cite{mvtnorm}). For a linear model, the $g$ argument is a formula that specifies the right- and left-hand sides as for \textit{lm}, and $x$ is the matrix of instruments:
<<>>=
h <- cbind(x4, x4^2, x4^3)
g3 <- y~w
@@ -382,11 +401,11 @@
\]
In order to remove the intercept, -1 has to be added to the formula. In that case, no column of ones is added to the matrix of instruments. To keep the condition that the expected value of the error terms is zero, the column of ones needs to be included manually, as in the sketch below.
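+
+As a sketch of this point (not evaluated), using the variables defined above, the intercept is removed from the formula while a column of ones is added to the instruments manually (\textit{h1} and \textit{res\_noint} are illustrative names):
+<<eval=FALSE>>=
+## remove the intercept with -1 and keep E(e) = 0 by adding
+## a column of ones to the instruments manually
+h1 <- cbind(1, x4, x4^2, x4^3)
+res_noint <- gmm(y ~ w - 1, x = h1)
+@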
-We know that the moment conditions of this example are iid. Therefore, we can add the option \code{vcov="iid"}. This option tells \code{gmm} to estimate the covariance matrix of $\sqrt{n}\bar{g}(\theta^*)$ as follows:
+We know that the moment conditions of this example are iid. Therefore, we can add the option \textit{vcov="iid"}. This option tells \textit{gmm} to estimate the covariance matrix of $\sqrt{n}\bar{g}(\theta^*)$ as follows:
\[
\hat{\Omega}(\theta^*) = \frac{1}{n}\sum_{i=1}^n g(\theta^*,x_i)g(\theta^*,x_i)'
\]
-However, it is recommended not to set this option to ``iid" in practice with real data because one of the reasons we want to use GMM is to avoid such restrictions. Finally, it is not necessary to provide the gradient when the model is linear since it is already included in \code{gmm}. The first results are:
+However, it is recommended not to set this option to ``iid'' in practice with real data because one of the reasons we want to use GMM is to avoid such restrictions. Finally, it is not necessary to provide the gradient when the model is linear since it is already included in \textit{gmm}. The first results are:
<<>>=
summary(res <- gmm(g3,x=h))
@
@@ -395,22 +414,23 @@
res2 <- gmm(g3,x=h,type='iterative',crit=1e-8,itermax=200)
coef(res2)
@
-The procedure iterates until the difference between the estimates of two successive iterations reaches a certain tolerance level, defined by the option \code{crit} (default is $10^{-7}$), or if the number of iterations reaches \code{itermax} (default is 100). In the latter case, a message is printed to indicate that the procedure did not converge.
+The procedure iterates until the difference between the estimates of two successive iterations falls below a certain tolerance level, defined by the option \textit{crit} (default is $10^{-7}$), or until the number of iterations reaches \textit{itermax} (default is 100). In the latter case, a message is printed to indicate that the procedure did not converge.
-The third method is CUE. As you can see, the estimates from ITGMM is used as starting values. However, the starting values are required only when \code{g} is a function. When \code{g} is a formula, the default starting values are the ones obtained by setting the matrix of weights equal to the identity matrix.
+The third method is CUE. As can be seen below, the estimates from ITGMM are used as starting values. However, the starting values are required only when \textit{g} is a function. When \textit{g} is a formula, the default starting values are the ones obtained by setting the matrix of weights equal to the identity matrix.
<<>>=
res3 <- gmm(g3,x=h,res2$coef,type='cue')
coef(res3)
@
-It is possible to produce confidence intervals by using the method \code{confint}:
+It is possible to produce confidence intervals by using the method \textit{confint}:
<<>>=
confint(res3,level=.90)
@
-Whether \code{optim} or \code{nlminb} is used to compute the solution, it is possible to modify their default options by adding \code{control=list()}. For example, you can keep track of the convergence with \code{control=list(trace=TRUE)} or increase the number of iterations with \code{control=list(maxit=1000)}. You can also choose the \code{BFGS} algorithm with \code{method="BFGS"} (see \code{help(optim)} for more details).
+Whether \textit{optim} or \textit{nlminb} is used to compute the solution, it is possible to modify their default options by adding \textit{control=list()}. For example, you can keep track of the convergence with \textit{control=list(trace=TRUE)} or increase the number of iterations with \textit{control=list(maxit=1000)}. You can also choose the \textit{BFGS} algorithm with \textit{method="BFGS"} (see \textit{help(optim)} for more details).
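+
+For instance, reusing the CUE estimation above, the optimization can be traced and the iteration limit raised (a sketch, not evaluated here; \textit{res3b} is an illustrative name):
+<<eval=FALSE>>=
+## trace the CUE optimization and raise the iteration limit
+res3b <- gmm(g3, x = h, res2$coef, type = 'cue',
+             control = list(trace = TRUE, maxit = 1000))
+@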
-The methods \code{fitted} and \code{residuals} are also available for linear models. We can compare the fitted values of \code{lm} with the ones from \code{gmm} to see why this model cannot be estimated by LS.
+The methods \textit{fitted} and \textit{residuals} are also available for linear models. We can compare the fitted values of \textit{lm} with the ones from \textit{gmm} to see why this model cannot be estimated by LS.
-<<fig=true>>=
+\begin{center}
+<<>>=
plot(w,y,main="LS vs GMM estimation")
lines(w,fitted(res),col=2)
lines(w,fitted(lm(y~w)),col=3,lty=2)
@@ -417,10 +437,11 @@
lines(w,.1*w,col=4,lty=3)
legend("topleft",c("Data","Fitted GMM","Fitted LS","True line"),pch=c(1,NA,NA,NA),col=1:3,lty=c(NA,1,2,3))
@
+\end{center}
LS seems to fit the model better. But the graph hides the endogeneity problem. LS overestimates the relationship between $y$ and $w$ because it ignores that part of the correlation comes from $y_i$ and $w_i$ being positively correlated with the error term $\epsilon_i$.
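+
+This can be verified directly from the simulated errors (a sketch, not evaluated here), since \textit{w} and the error term of the $y$ equation share the correlated components of \textit{e}:
+<<eval=FALSE>>=
+## w is endogenous: it is correlated with the error term of the
+## y equation through the correlated columns of e
+cor(w, e[,2])
+@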
-Finally, the \code{plot} method produces some graphics to analyze the properties of the residuals. It can only be applied to \code{gmm} objects when \code{g} is a formula because when \code{g} is a function, residuals are not defined.
+Finally, the \textit{plot} method produces some graphics to analyze the properties of the residuals. It can only be applied to \textit{gmm} objects when \textit{g} is a formula because when \textit{g} is a function, residuals are not defined.
\subsection{Estimating the AR coefficients of an ARMA process} \label{ar}
@@ -430,7 +451,7 @@
\[
X_t = 1.4 X_{t-1} - 0.6X_{t-2} + u_t
\]
-where $u_t = 0.6\epsilon_{t-1} -0.3 \epsilon_{t-2} + \epsilon_t$ and $\epsilon_t\sim iidN(0,1)$. This model can be estimated by GMM using any $X_{t-s}$ for $s>2$, because they are uncorrelated with $u_t$ and correlated with $X_{t-1}$ and $X_{t-2}$. However, as $s$ increases the quality of the instruments decreases since the stationarity of the process implies that the auto-correlation goes to zero. For this example, the selected instruments are $(X_{t-3},X_{t-4},X_{t-5},X_{t-6})$ and the sample size equals 400. The ARMA(2,2) process is generated by the function \code{arima.sim}:
+where $u_t = 0.6\epsilon_{t-1} -0.3 \epsilon_{t-2} + \epsilon_t$ and $\epsilon_t\sim \mathrm{iid}\;N(0,1)$. This model can be estimated by GMM using any $X_{t-s}$ for $s>2$, because they are uncorrelated with $u_t$ and correlated with $X_{t-1}$ and $X_{t-2}$. However, as $s$ increases, the quality of the instruments decreases since the stationarity of the process implies that the auto-correlation goes to zero. For this example, the selected instruments are $(X_{t-3},X_{t-4},X_{t-5},X_{t-6})$ and the sample size equals 400. The ARMA(2,2) process is generated by the function \textit{arima.sim}:
<<>>=
t <- 400
set.seed(345)
@@ -444,7 +465,7 @@
@
The optimal matrix, when moment conditions are based on time series, is an HAC matrix which is defined by equation (\ref{optw_hat}). Several estimators of this matrix have been proposed in the literature. Given some regularity conditions, they are asymptotically equivalent. However, their impacts on the finite sample properties of GMM estimators may differ. The \pkg{gmm} package uses the \pkg{sandwich} package to compute these estimators which are well explained by \cite{zeileis06} and \cite{zeileis04}. We will therefore briefly summarize the available options.
-The option \code{kernel} allows to choose between five kernels: Truncated, Bartlett, Parzen, Tukey-Hanning and Quadratic spectral\footnote{The first three have been proposed by \cite{white84}, \cite{newey-west87a} and \cite{gallant87} respectively and the last two, applied to HAC estimation, by \cite{andrews91}. But the latter gives a good review of all five.}. By default, the Quadratic Spectral kernel is used as it was shown to be optimal by \cite{andrews91} with respect to some mean squared error criterion. In most statistical packages, the Bartlett kernel is used for its simplicity. It makes the estimation of large models less computationally intensive. It may also make the \code{gmm} algorithm more stable numerically when dealing with highly nonlinear models, especially with CUE. We can compare the results with different choices of kernel:
+The option \textit{kernel} allows one to choose among five kernels: Truncated, Bartlett, Parzen, Tukey-Hanning and Quadratic Spectral\footnote{The first three have been proposed by \cite{white84}, \cite{newey-west87a} and \cite{gallant87} respectively and the last two, applied to HAC estimation, by \cite{andrews91}. But the latter gives a good review of all five.}. By default, the Quadratic Spectral kernel is used as it was shown to be optimal by \cite{andrews91} with respect to some mean squared error criterion. In most statistical packages, the Bartlett kernel is used for its simplicity. It makes the estimation of large models less computationally intensive. It may also make the \textit{gmm} algorithm more stable numerically when dealing with highly nonlinear models, especially with CUE. We can compare the results with different choices of kernel:
<<>>=
res2 <- gmm(g4,x=x5t[,4:7],kernel="Truncated")
coef(res2)
@@ -455,7 +476,7 @@
res5<- gmm(g4,x=x5t[,4:7],kernel="Tukey-Hanning")
coef(res5)
@
-The similarity of the results is not surprising since the matrix of weights should only affect the efficiency of the estimator. We can compare the estimated standard deviations using the method \code{vcov}:
+The similarity of the results is not surprising since the matrix of weights should only affect the efficiency of the estimator. We can compare the estimated standard deviations using the method \textit{vcov}:
<<>>=
diag(vcov(res2))^.5
diag(vcov(res3))^.5
@@ -464,19 +485,20 @@
@
which shows, for this example, that the Bartlett kernel generates the estimates with the smallest variances. However, that does not mean it is better: we would have to run simulations and compute the true variance in order to compare them. In fact, we do not know which kernel produces the most accurate estimate of the variance.
-The second options is for the bandwidth selection. By default it is the automatic selection proposed by \cite{andrews91}. It is also possible to choose the automatic selection of \cite{newey-west94} by adding \code{bw=bwNeweyWest} (without quotes because \code{bwNeweyWest} is a function). A prewhitened kernel estimator can also be computed using the option \code{prewhite=p}, where $p$ is the order of the vector auto-regressive (VAR) used to compute it. By default, it is set to \code{FALSE}. \cite{andrews-monahan92} show that a prewhitened kernel estimator improves the properties of hypothesis tests on parameters.
+The second option is the bandwidth selection. By default, it is the automatic selection proposed by \cite{andrews91}. It is also possible to choose the automatic selection of \cite{newey-west94} by adding \textit{bw=bwNeweyWest} (without quotes because \textit{bwNeweyWest} is a function). A prewhitened kernel estimator can also be computed using the option \textit{prewhite=p}, where $p$ is the order of the vector auto-regression (VAR) used to compute it. By default, it is set to \textit{FALSE}. \cite{andrews-monahan92} show that a prewhitened kernel estimator improves the properties of hypothesis tests on parameters.
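+
+As a sketch (not evaluated here), reusing the ARMA example above, both options can be combined (\textit{res\_nw} is an illustrative name):
+<<eval=FALSE>>=
+## Newey-West automatic bandwidth with a VAR(1) prewhitening
+res_nw <- gmm(g4, x = x5t[,4:7], bw = bwNeweyWest, prewhite = 1)
+@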
-Finally, the \code{plot} method can be applied to \code{gmm} objects to do a Q-Q plot of the residuals:
-
-<<fig=true,height=7, width=10>>=
+Finally, the \textit{plot} method can be applied to \textit{gmm} objects to produce a Q-Q plot of the residuals:
+\begin{center}
+<<>>=
plot(res,which=2)
@
-
+\end{center}
or to plot the observations with the fitted values:
-
-<<fig=true,height=7, width=10>>=
+\begin{center}
+<<>>=
plot(res,which=3)
@
+\end{center}
\subsection{Estimating a system of equations: CAPM}
@@ -501,7 +523,7 @@
linearHypothesis(res,R,c,test = "Chisq")
@
where the asymptotic chi-square is used since the default distribution requires a normality assumption. The same test could have been performed using the names of the coefficients:
-<<eval=false>>=
+<<eval=FALSE>>=
test <- paste(names(coef(res)[1:5])," = 0",sep="")
linearHypothesis(res,test)
@
@@ -553,7 +575,7 @@
\end{array}
\right) = 0
\]
-The related \code{g} function, with $\theta=\{\alpha, \beta, \sigma^2, \gamma\}$ is:
+The related \textit{g} function, with $\theta=\{\alpha, \beta, \sigma^2, \gamma\}$ is:
<<>>=
g6 <- function(theta, x) {
t <- length(x)
@@ -563,7 +585,7 @@
return(g)
}
@
-In order to estimate the model, the vector of interest rates needs to be properly scaled to avoid numerical problems. The transformed series is the annualized interest rates expressed in percentage. Also, the starting values are obtained using LS and some options for \code{optim} need to be modified.
+In order to estimate the model, the vector of interest rates needs to be properly scaled to avoid numerical problems. The transformed series is the annualized interest rates expressed in percentage. Also, the starting values are obtained using LS and some options for \textit{optim} need to be modified.
<<>>=
rf <- Finance[,"rf"]
rf <- ((1 + rf/100)^(365) - 1) * 100
@@ -585,8 +607,8 @@
\[
(y_{it}-\bar{y}_i) = (x_{it}-\bar{x}_i) \beta + (\epsilon_{it}-\bar{\epsilon}_i) \mbox{ for } i=1,...,N\mbox{ and } t=1,...,T,
\]
-which can be estimated by \code{gmm}. For example, if there are 3 individuals the following corresponds to the GMM fixed effects estimation:
-<<eval=false>>=
+which can be estimated by \textit{gmm}. For example, if there are 3 individuals, the following corresponds to the GMM fixed-effects estimation:
+<<eval=FALSE>>=
y <- rbind(y1-mean(y1),y2-mean(y2),y3-mean(y3))
x <- rbind(x1-mean(x1),x2-mean(x2),x3-mean(x3))
res <- gmm(y~x,h)
@@ -596,7 +618,7 @@
y_{it} = x_{it} \beta + \eta_{it}
\]
[TRUNCATED]
To get the complete diff run:
svnlook diff /svnroot/gmm -r 182