[Eventstudies-commits] r344 - pkg/vignettes
noreply at r-forge.r-project.org
Thu May 15 19:24:26 CEST 2014
Author: chiraganand
Date: 2014-05-15 19:24:26 +0200 (Thu, 15 May 2014)
New Revision: 344
Removed:
pkg/vignettes/AMM.Rnw
pkg/vignettes/AMM.bib
pkg/vignettes/ees.Rnw
pkg/vignettes/ees.bib
pkg/vignettes/new.Rnw
Log:
Removed old vignettes.
Deleted: pkg/vignettes/AMM.Rnw
===================================================================
--- pkg/vignettes/AMM.Rnw	2014-05-15 17:08:18 UTC (rev 343)
+++ pkg/vignettes/AMM.Rnw	2014-05-15 17:24:26 UTC (rev 344)
@@ -1,176 +0,0 @@
\documentclass[a4paper,11pt]{article}
\usepackage{graphicx}
\usepackage{a4wide}
\usepackage[colorlinks,linkcolor=blue,citecolor=red]{hyperref}
\usepackage{natbib}
\usepackage{float}
\usepackage{tikz}
\usepackage{parskip}
\usepackage{amsmath}
\title{Augmented Market Models}
\author{Ajay Shah \and Vikram Bahure \and Chirag Anand}
\begin{document}
%\VignetteIndexEntry{eventstudies: Augmented market models}
% \VignetteDepends{}
% \VignetteKeywords{augmented market model}
% \VignettePackage{eventstudies}
\maketitle

\begin{abstract}
This document demonstrates the application of the augmented market
model (AMM) of \citet{patnaik2010amm} to extract currency exposure and
AMM residuals from the model.
\end{abstract}

\SweaveOpts{engine=R,pdf=TRUE}
\section{Introduction}

Augmented market models (AMM) extend the classical market model \citep{sharpe1964capm, lintner1965capm} by introducing additional right-hand-side variables, such as currency returns or interest rates, to capture the effect of macro variations, in addition to market movements, on stock returns. The package provides functionality to estimate augmented market models as well as to produce augmented market model residuals (AMM abnormal returns), stripped of market and macro variations, for use in event studies. The function set was originally written and applied in \citet{patnaik2010amm}. \citet{adler1984exposure} and \citet{jorion1990exchange} were the first papers to use augmented market models to study currency exposure. The standard currency exposure AMM is as follows:

\begin{equation}
 r_j = \alpha_j + \beta_{1j} r_{M1} + \beta_{2j} r_{M2} + \epsilon_j
\end{equation}

In the original usage of augmented market models, currency exposure is
expressed as the regression coefficient on currency returns
($r_{M2}$). The model uses the firm's stock price as the information
set on firm positions, and relates firm returns $r_j$ to market index
movements $r_{M1}$ and currency fluctuations $r_{M2}$. The coefficient
$\beta_{2j}$ measures the sensitivity of the valuation of firm $j$ to
changes in the exchange rate. This is a widely used technique, with
multiple variations including asymmetric exposures.
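As an illustrative sketch (not package code), the standard currency exposure AMM above can be estimated by OLS with \texttt{lm} on simulated data; all series names and coefficient values here are invented for the example.

```r
## Illustrative sketch of the standard currency-exposure AMM,
## estimated by OLS on simulated data (not the package implementation).
set.seed(1)
n <- 500
r.M1 <- rnorm(n)                                  # market index returns
r.M2 <- rnorm(n)                                  # currency returns
r.j  <- 0.1 + 0.8 * r.M1 - 0.3 * r.M2 + rnorm(n)  # firm returns
fit <- lm(r.j ~ r.M1 + r.M2)
coef(fit)["r.M2"]   # estimate of beta_2j, the currency exposure
```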

The AMM implementation in the package has some key innovations compared to the original currency exposure AMMs of \citet{adler1984exposure} and \citet{jorion1990exchange}:
\begin{equation}
 r_{jt} = \alpha + \beta_1 r_{M1,t}
 + \sum_{i=0}^{k} a_i e_{ti} + \epsilon_t
\end{equation}

\begin{enumerate}
\item The exchange rate series is re-expressed as a series of innovations from an AIC-selected AR process. Under this specification, an innovation $e_t$ on the currency market has an impact on the stock price at time $t$ and the following $k$ time periods. Currency exposure is then embedded in the vector of $a_i$ coefficients; it is no longer the simple scalar $\beta_2$ of the standard model.
\item Heteroscedasticity in $r_{M1}$ and $r_{M2}$: this is addressed by
  using a HAC estimator.
\item Decomposition of market exposure from firm exposure: the market index time series is orthogonalised by first estimating a regression explaining $r_{M1}$ as a function of past and present currency innovations, and extracting the residuals from this regression. These residuals represent uncontaminated market returns.
\end{enumerate}
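To make the first and third points concrete, here is a minimal sketch on simulated data (illustrative only, not the package's implementation; the HAC step is omitted and all names are invented):

```r
## Sketch of (1) AR innovations and (3) market-return purging,
## on simulated data. Not the package internals.
set.seed(2)
currency <- rnorm(500)
market   <- 0.4 * currency + rnorm(500)

## (1) Re-express the currency series as innovations e_t from an
##     AIC-selected AR process.
ar.fit <- ar(currency, aic = TRUE)
e <- as.numeric(na.omit(ar.fit$resid))

## (3) Orthogonalise market returns to the currency innovations and
##     keep the residuals as "uncontaminated" market returns.
purge <- lm(tail(market, length(e)) ~ e)
market.purged <- resid(purge)
```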

In the sections below, we explain the estimation of currency exposure
and AMM residuals, and how to perform event study analysis. In section
\ref{sec:ce}, we replicate the methodology of \citet{patnaik2010amm}
using the package. In section \ref{sec:es}, we take the AMM
methodology a step further and extract residuals, which we then use to
perform traditional event study analysis.
% Need to talk more about generalisation used for variables other than currency


\section{Software approach}\label{sec:ce}
The package has functions which enable the user to compute the linear
model AMM output, along with currency exposure, using the AMM
methodology employed in \citet{patnaik2010amm}. In the subsections
below, we describe the construction of the data set supplied to the
\texttt{lmAMM} function, and then the computation of AMM output and
currency exposure.

\subsection{Constructing data set}
Before performing AMM analysis on firm returns with this package, we
need to construct a usable data set. Two objects are required:
\texttt{X} (the regressors) and firm returns (the regressand), on
which OLS is performed as in \citet{patnaik2010amm}.
\subsubsection{Regressors \& Regressands}
The regressors in the AMM equation are market returns and currency
returns, while the regressand is firm returns. All variables should
form a balanced panel; if they do not, merge the time series to obtain
one. \textit{AMMData} is a time series object with market returns
(\textit{Nifty}) and currency returns (\textit{INR/USD}). If currency
exposure is to be estimated separately for different periods, the
argument \textit{dates} should be used; otherwise \textit{NULL}
estimates over the full period.

The function \textit{makeX} allows for an impact of the currency on
market returns: with the argument \textit{market.returns.purge}, market
returns are orthogonalised to currency returns before the AMM is
estimated.

<<>>=
# Create RHS before running subperiod.lmAMM()
library(eventstudies)
data("AMMData")
nifty <- AMMData$index.nifty
inrusd <- AMMData$currency.inrusd
regressand <- AMMData[, c("Infosys", "TCS")]
regressors <- makeX(nifty, others = inrusd,
                    switch.to.innov = TRUE, market.returns.purge = TRUE, nlags = 1,
                    dates = as.Date(c("2012-02-01", "2013-01-01", "2014-01-20")),
                    verbose = FALSE)
@

\subsection{Augmented market model}
Augmented market model output, an object of class \textit{amm}, is
generated by the function \texttt{lmAMM}. This function takes firm
returns (the regressand) and the regressors as input. Its output is a
list object containing the linear model output of the AMM, the
currency exposure, its standard deviation, and the significance of the
exposure.
<<>>=
## AMM residual to time series
timeseries.lmAMM <- function(firm.returns, X, verbose = FALSE, nlags = 1) {
  tmp <- resid(lmAMM(firm.returns, X, nlags))
  zoo(tmp, as.Date(names(tmp)))
}
## One firm
amm.output.one <- lmAMM(regressand[, 1], X = regressors, nlags = 1)
amm.resid.one <- timeseries.lmAMM(firm.returns = regressand[, 1],
                                  X = regressors, verbose = FALSE, nlags = 1)
summary(amm.output.one)

## More than one firm
 # Extracting and merging
tmp.resid <- sapply(colnames(regressand)[1:2], function(y)
  timeseries.lmAMM(firm.returns = regressand[, y],
                   X = regressors,
                   verbose = FALSE,
                   nlags = 1))
amm.resid <- zoo(tmp.resid, as.Date(rownames(tmp.resid)))
@

All the basic functionality is available for objects of class
\textit{amm}: the \texttt{print}, \texttt{summary} and \texttt{plot}
methods can be used for preliminary analysis. Figure \ref{fig:amm}
compares the AMM residuals with abnormal firm returns.
\begin{figure}[t]
  \begin{center}
    \caption{Augmented market model}
    \label{fig:amm}
    \setkeys{Gin}{width=0.8\linewidth}
    \setkeys{Gin}{height=0.8\linewidth}
<<fig=TRUE,echo=FALSE>>=
plot(amm.output.one)
@
  \end{center}
\end{figure}

\subsection{Getting currency exposure}
The output of the \texttt{makeX} function is used in the
\textit{subperiod.lmAMM} and \textit{lmAMM} functions to obtain the
currency exposure of firms and the AMM residuals, respectively. In the
example below, we demonstrate the use of \textit{subperiod.lmAMM} to
estimate currency exposure for firms.
% MakeX and subperiod.lmAMM
<<>>=
# Run AMM for one firm across different periods
deprintize <- function(f) {
  function(...) { capture.output(w <- f(...)); w }
}
firm.exposure <- deprintize(subperiod.lmAMM)(firm.returns = regressand[, 1],
                                             X = regressors,
                                             nlags = 1,
                                             verbose = TRUE,
                                             dates = as.Date(c("2012-02-01",
                                                               "2013-01-01",
                                                               "2014-01-31")))
str(firm.exposure)
@

We can also perform event study analysis directly on AMM residuals
using the \textit{eventstudy} function, which is presented in the
\textit{eventstudies} vignette.

\bibliographystyle{jss} \bibliography{AMM}
\end{document}
Deleted: pkg/vignettes/AMM.bib
===================================================================
--- pkg/vignettes/AMM.bib	2014-05-15 17:08:18 UTC (rev 343)
+++ pkg/vignettes/AMM.bib	2014-05-15 17:24:26 UTC (rev 344)
@@ -1,53 +0,0 @@

@article{patnaik2010amm,
  title={Does the currency regime shape unhedged currency exposure?},
  author={Patnaik, Ila and Shah, Ajay},
  journal={Journal of International Money and Finance},
  volume={29},
  number={5},
  pages={760--769},
  year={2010},
  publisher={Elsevier}
}

@article{sharpe1964capm,
  title={Capital asset prices: A theory of market equilibrium under conditions of risk},
  author={Sharpe, William F},
  journal={The Journal of Finance},
  volume={19},
  number={3},
  pages={425--442},
  year={1964},
  publisher={Wiley Online Library}
}

@article{lintner1965capm,
  title={The valuation of risk assets and the selection of risky investments in stock portfolios and capital budgets},
  author={Lintner, John},
  journal={The Review of Economics and Statistics},
  volume={47},
  number={1},
  pages={13--37},
  year={1965},
  publisher={JSTOR}
}

@article{adler1984exposure,
  title={Exposure to currency risk: Definition and measurement},
  author={Adler, Michael and Dumas, Bernard},
  journal={Financial Management},
  pages={41--50},
  year={1984},
  publisher={JSTOR}
}

@article{jorion1990exchange,
  title={The exchange-rate exposure of US multinationals},
  author={Jorion, Philippe},
  journal={Journal of Business},
  pages={331--345},
  year={1990},
  publisher={JSTOR}
}


Deleted: pkg/vignettes/ees.Rnw
===================================================================
--- pkg/vignettes/ees.Rnw	2014-05-15 17:08:18 UTC (rev 343)
+++ pkg/vignettes/ees.Rnw	2014-05-15 17:24:26 UTC (rev 344)
@@ -1,245 +0,0 @@
\documentclass[a4paper,11pt]{article}
\usepackage{graphicx}
\usepackage{a4wide}
\usepackage[colorlinks,linkcolor=blue,citecolor=red]{hyperref}
\usepackage{natbib}
\usepackage{float}
\usepackage{tikz}
\usepackage{parskip}
\usepackage{amsmath}
\title{Introduction to the \textbf{extreme events} functionality}
\author{Vikram Bahure \and Vimal Balasubramaniam \and Ajay Shah}
\begin{document}
% \VignetteIndexEntry{eventstudies: Extreme events functionality}
% \VignetteDepends{}
% \VignetteKeywords{extreme event analysis}
% \VignettePackage{eventstudies}
\maketitle

\begin{abstract}
  One specific application of the eventstudies package is
  \citet{PatnaikShahSingh2013}. This vignette reproduces results from
  the paper and explains a specific functionality of the package: the
  analysis of tail events. \texttt{ees} is a wrapper available in the
  package for users to undertake similar ``extreme events'' analysis.
\end{abstract}

\SweaveOpts{engine=R,pdf=TRUE}

\section{Introduction}

The extreme events functionality analyses the behaviour of an outcome
variable around tail events of another variable, the event
variable. The package provides this functionality through the wrapper
\texttt{ees}.

Non-parametric studies of tail events pose several research
challenges:

\begin{enumerate}
\item What constitutes tail events, i.e., the cutoff points on the
 distribution of the event variable?
\item What is the event window, i.e., the window of observation before
 and after the event?
\item What happens when multiple tail events (``Clustered events'')
 occur within the event window?
\end{enumerate}

We address these questions with summary statistics on the distribution
and run length of events, quantile values to determine the cut-off
points on the distribution of the event variable, and, depending on
the frequency of the data, a period-wise distribution of extreme
events. These summary statistics are available for clustered as well
as unclustered events; the wrapper provides results for both cases:
unclustered events alone, and both types of events together.

In the next few sections, we replicate a subset of results from
\citet{PatnaikShahSingh2013}, which studies whether extreme events on
the S\&P 500 affect returns on the Indian stock index, the Nifty. A
detailed mathematical overview of the methodology is available in the
paper.


\section{Extreme event analysis}

Since the object of interest is the impact on returns of the outcome
variable, the Nifty, of tail events on the S\&P 500, we first obtain a
zoo object of returns data (``EESData''). Next, we define tail events
for a given probability value; if \textit{prob.value} is 5, then
returns that fall in the $0$--$5\%$ and $95$--$100\%$ regions of the
probability distribution form our set of events.
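The tail definition can be sketched directly with quantile cut-offs; this is an illustration of the idea on synthetic returns, not the internals of \texttt{ees}:

```r
## Sketch: tail events for prob.value = 5 via quantile cut-offs.
## Synthetic data; illustrative only, not the ees() internals.
set.seed(3)
returns <- rnorm(1000)
cutoffs <- quantile(returns, probs = c(0.05, 0.95))
lower.tail.events <- returns[returns <= cutoffs[1]]
upper.tail.events <- returns[returns >= cutoffs[2]]
length(lower.tail.events)   # roughly 5% of the observations
```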

<<>>=
library(eventstudies)
data(EESData)

input <- EESData$sp500

deprintize <- function(f) {
  function(...) { capture.output(w <- f(...)); w }
}
output <- deprintize(ees)(input, prob.value = 5)
@

As mentioned earlier, one of the most important aspects of a
non-parametric approach to an event study is whether the parameters of
the exercise are validated by the general summary statistics of the
data set being used. The object \texttt{output} is a list of relevant
summary statistics for the data set, along with an extreme event
analysis for the lower and upper tails. For each tail, the following
statistics are available:

\begin{enumerate}
\item Extreme events data set (The input for event study analysis)
\item Distribution of clustered and unclustered tail events
\item Distribution of the run length
\item Quantile values of tail events
\item Yearly distribution of tail events
\end{enumerate}

\subsection{Summary statistics}

In \texttt{output\$data.summary}, we present the minimum, maximum,
inter-quartile range (IQR), standard deviation (sd), and the 5\%,
25\%, median, mean, 75\%, and 95\% points of the distribution. This
analysis for the S\&P 500 is identical to the results presented in
Table 1 of Patnaik, Shah and Singh (2013).

<<>>=
output$data.summary
@

\subsection{Extreme events dataset}

The output for the upper and lower tails is in the same format as
described above. The data set is a time series object with two
columns: the first column, \textit{event.series}, contains returns for
extreme events, and the second column, \textit{cluster.pattern},
records the number of consecutive days in the cluster. Here we show
results for the lower tail of the S\&P 500.

The overall dataset looks as follows:

<<>>=
head(output$lower.tail$data)
str(output$lower.tail$data)
@

\subsection{Distribution of clustered and unclustered events}

There are several types of clusters in an analysis of extreme events:
clusters that lie purely on one of the tails, and mixed
clusters. Mixed clusters typically witness sharp positive returns in
the outcome variable, followed soon after by large negative
returns. This ``contamination'' might cause serious downward bias in
the magnitude and direction of the impact of an extreme
event. Therefore, it is useful to ensure that such occurrences are not
included in the analysis.\footnote{While it would be interesting to
  study such mixed events by themselves, they are not the subject of
  the specific question posed in this vignette.}

Results from Table 2 of Patnaik, Shah and Singh (2013) show that there
are several mixed clusters in the data set. In other words, there are
many events on the S\&P 500 with large positive (negative) returns
followed by large negative (positive) returns. As we look closely at
the lower tail events in this vignette, the output for the lower tail
events looks like this:

<<>>=
output$lower.tail$extreme.event.distribution
@

``\texttt{unclstr}'' refers to unclustered events,
``\texttt{used.clstr}'' refers to clusters that are pure and
uncontaminated by mixed tail events, and ``\texttt{removed.clstr}''
refers to the mixed clusters. For the analysis in Patnaik, Shah and
Singh (2013), only 62 out of 102 events are used. These results are
identical to those documented in Table 2 of the paper.

\subsection{Run length distribution of clusters}

The next concern is the run length distribution of the clusters used
in the analysis. Run length shows the total number of clusters with
\textit{n} consecutive days of occurrence. In the example used here,
we have 3 clusters with \textit{two} consecutive events and 0 clusters
with \textit{three} consecutive events. This is also identical to the
distribution presented in Patnaik, Shah and Singh (2013).

<<>>=
output$lower.tail$runlength
@

\subsection{Extreme event quantile values}
Quantile values show the 0\%, 25\%, median, 75\%, 100\% and mean
values for the extreme events data. The results shown below match the
second row of Table 4 in the paper.

<<>>=
output$lower.tail$quantile.values
@

\subsection{Yearly distribution of extreme events}
This table shows the yearly distribution and the median value of the
extreme events data. The results shown below are in line with the
third and fourth columns for the S\&P 500 in Table 5 of the paper.

<<>>=
output$lower.tail$yearly.extreme.event
@

The yearly distribution of extreme events includes unclustered events
and clustered events, with clusters fused into single events: a
cluster of three consecutive extreme events is treated as one event
for this analysis. In the distribution of clustered and unclustered
events, by contrast, clustered events are counted as the total number
of events in the cluster.

\section{Extreme event study plot}

The significance of an event study can be summarised well by visual
representations. With the steps outlined in the \texttt{eventstudies}
vignette, the wrapper \texttt{eesPlot} in the package provides a
convenient user interface to replicate Figure 7 of Patnaik, Shah and
Singh (2013). The plot presents events on the upper tail of the event
variable, the S\&P 500, as ``Very good'' and on the lower tail as
``Very bad''. The outcome variable studied here is the Nifty, and the
$y$-axis presents cumulative returns in the Nifty. This is an
event-time graph, in which the data are centred on the event date
(``0'') and the graph shows 4 days before and after the event.

<<>>=
eesPlot(z = EESData, response.series.name = "nifty",
        event.series.name = "sp500", titlestring = "S&P500",
        ylab = "(Cum.) change in NIFTY", prob.value = 5, width = 5)
@

\begin{figure}[t]
 \begin{center}
 \caption{Extreme event on S\&P500 and response of NIFTY}
 \setkeys{Gin}{width=1\linewidth}
 \setkeys{Gin}{height=0.8\linewidth}
<<fig=TRUE,echo=FALSE>>=
res <- deprintize(eesPlot)(z = EESData, response.series.name = "nifty",
                           event.series.name = "sp500",
                           titlestring = "S&P500",
                           ylab = "(Cum.) change in NIFTY",
                           prob.value = 5, width = 5)
@
 \end{center}
 \label{fig:one}
\end{figure}

\section{Computational details}
The package code is written in R. It depends on zoo
(\href{http://cran.r-project.org/web/packages/zoo/index.html}{Zeileis
  2012}) and boot
(\href{http://cran.r-project.org/web/packages/boot/index.html}{Ripley
  2013}). R itself, as well as these packages, can be obtained from
\href{http://CRAN.R-project.org/}{CRAN}.

% \section{Acknowledgments}
\bibliographystyle{jss} \bibliography{ees}

\end{document}
Deleted: pkg/vignettes/ees.bib
===================================================================
--- pkg/vignettes/ees.bib	2014-05-15 17:08:18 UTC (rev 343)
+++ pkg/vignettes/ees.bib	2014-05-15 17:24:26 UTC (rev 344)
@@ -1,10 +0,0 @@
@Article{PatnaikShahSingh2013,
  author  = {Patnaik, Ila and Shah, Ajay and Singh, Nirvikar},
  title   = {Foreign Investors Under Stress: Evidence from India},
  journal = {International Finance},
  year    = 2013,
  volume  = 16,
  number  = 2,
  pages   = {213--244}
}

Deleted: pkg/vignettes/new.Rnw
===================================================================
--- pkg/vignettes/new.Rnw	2014-05-15 17:08:18 UTC (rev 343)
+++ pkg/vignettes/new.Rnw	2014-05-15 17:24:26 UTC (rev 344)
@@ -1,260 +0,0 @@
\documentclass[a4paper,11pt]{article}
\usepackage{graphicx}
\usepackage{a4wide}
\usepackage[colorlinks,linkcolor=blue,citecolor=red]{hyperref}
\usepackage{natbib}
\usepackage{float}
\usepackage{tikz}
\usepackage{parskip}
\usepackage{amsmath}
\title{Introduction to the \textbf{eventstudies} package in R}
\author{Ajay Shah}
\begin{document}
\maketitle

\begin{abstract}
\end{abstract}
\SweaveOpts{engine=R,pdf=TRUE}

\section{The standard event study in finance}

In this section, we look at using the eventstudies package for the
purpose of doing the standard event study using daily returns data in
financial economics. This is a workhorse application of event
studies. The treatment here assumes knowledge of event studies
\citep{Corrado2011}.

To conduct an event study, you must have a list of firms with
associated dates, and you must have returns data for these
firms. These dates must be stored as a simple data frame. To
illustrate this, we use the object `SplitDates' in the package which
is used for doing examples.

<<showtheevents,results=verbatim>>=
library(eventstudies)
data(SplitDates) # The sample
str(SplitDates) # Just a data frame
head(SplitDates)
@

The representation of dates is a data frame with two columns. The
first column is the name of the unit of observation which experienced
the event. The second column is the event date.

The second thing that is required for doing an event study is data for
stock price returns for all the firms. The sample dataset supplied in
the package is named `StockPriceReturns':

<<showtheevents,results=verbatim>>=
data(StockPriceReturns) # The sample
str(StockPriceReturns) # A zoo object
head(StockPriceReturns,3) # Time series of dates and returns.
@

The StockPriceReturns object is thus a zoo object which is a time
series of daily returns. These are measured in per cent, i.e. a value
of +4 is returns of +4\%. The zoo object has many columns of returns
data, one for each unit of observation which, in this case, is a
firm. The column name of the zoo object must match the firm name
(i.e. the name of the unit of observation) in the list of events.
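As a toy illustration of these two inputs (all names here are invented; the sample objects in the package are SplitDates and StockPriceReturns):

```r
## A minimal toy pair of inputs: an events data frame and a zoo
## object of returns whose column names match the unit names.
## Invented names, for illustration only.
library(zoo)
set.seed(5)
events <- data.frame(name = c("FirmA", "FirmB"),
                     when = as.Date(c("2010-03-01", "2010-06-15")))
dates <- seq(as.Date("2010-01-01"), by = "day", length.out = 200)
returns <- zoo(matrix(rnorm(400), ncol = 2,
                      dimnames = list(NULL, c("FirmA", "FirmB"))),
               order.by = dates)
```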

The package gracefully handles the three kinds of problems encountered
with real world data: (a) a firm where returns is observed but there
is no event, (b) a firm with an event where returns data is lacking
and (c) a stream of missing data in the returns data surrounding the
event date.

With this in hand, we are ready to run our first event study, using
raw returns:

<<noadjustment>>=
es <- eventstudy(firm.returns = StockPriceReturns,
                 eventList = SplitDates,
                 width = 10,
                 type = "None",
                 to.remap = TRUE,
                 remap = "cumsum",
                 inference = TRUE,
                 inference.strategy = "bootstrap")
@

This runs an event study using events listed in SplitDates, and using
returns data for the firms in StockPriceReturns. An event window of 10
days is analysed.

Event studies with returns data typically do some kind of adjustment
of the returns data in order to reduce variance. In order to keep
things simple, in this first event study, we are doing no adjustment,
which is done by setting `type' to ``None''.

While daily returns data has been supplied, the standard event study
deals with cumulated returns. In order to achieve this, we set
to.remap to TRUE and we ask that this remapping be done using cumsum.

Finally, we come to inference strategy. We instruct eventstudy to do
inference and ask for bootstrap inference.

Let us peek and poke at the object `es' that is returned.

<<theesobject,results=verbatim>>=
class(es)
str(es)
@

The object returned by eventstudy is of class `es'. It is a list with
five components. Three of these are just a record of the way
eventstudy() was run: the inference procedure adopted (bootstrap
inference in this case), the window width (10 in this case) and the
method used for mapping the data (cumsum). The two new things are
`outcomes' and `eventstudy.output'.

The vector `outcomes' shows the disposition of each event in the
events table. There are 22 rows in SplitDates, hence there will be 22
elements in the vector `outcomes'. In this vector, `success' denotes a
successful use of the event. When an event cannot be used properly,
various error codes are supplied. E.g. `unitmissing' is reported when
the events table shows an event for a unit of observation where
returns data is not observed.

\begin{figure}
\begin{center}
<<plotes,fig=TRUE,width=4,height=2.5>>=
par(mai=c(.8,.8,.2,.2))
plot(es, cex.axis=.7, cex.lab=.7)
@
\end{center}
\caption{Plot method applied to es object}\label{f:esplot1}
\end{figure}

% TODO: The x label should be "Event time (days)" and should
% automatically handle other situations like weeks or months or microseconds.
% The y label is much too long.

Plot and print methods for the class `es' are supplied. The standard
plot is illustrated in Figure \ref{f:esplot1}. In this case, the 95\%
confidence interval straddles 0, and in no case can the null of no
effect, compared with the starting date (10 days before the stock
split date), be rejected.

In this first example, raw stock market returns were utilised in the
event study. It is important to emphasise that the event study is a
statistically valid tool even under these circumstances: averaging
across multiple events isolates the event-related
fluctuations. However, there is a loss of statistical efficiency from
fluctuations of stock prices that have nothing to do with firm-level
news. In order to increase efficiency, we resort to adjustment of the
returns data.

The standard methodology in the literature is to use a market
model. This estimates a time-series regression $r_{jt} = \alpha_j +
\beta_j r_{Mt} + \epsilon_{jt}$, where $r_{jt}$ is the return on firm
$j$ on date $t$, and $r_{Mt}$ is the return on the market index on
date $t$. The market index captures market-wide fluctuations, which
have nothing to do with firm-specific factors. The event study is then
conducted with the cumulated $\epsilon_{jt}$ time series. This yields
improved statistical efficiency as $\textrm{Var}(\epsilon_j) <
\textrm{Var}(r_j)$.
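The variance reduction from this adjustment can be sketched on simulated data (illustrative only; the series and coefficients are invented):

```r
## Sketch: market-model adjustment lowers residual variance.
## Simulated data, illustrative only.
set.seed(4)
r.M <- rnorm(250)
r.j <- 0.9 * r.M + rnorm(250, sd = 0.5)
eps <- resid(lm(r.j ~ r.M))
c(var.raw = var(r.j), var.resid = var(eps))
```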

This is invoked by setting `type' to `marketResidual':

<<mmadjustment>>=
data(OtherReturns)
es.mm <- eventstudy(firm.returns = StockPriceReturns,
                    eventList = SplitDates,
                    width = 10,
                    type = "marketResidual",
                    to.remap = TRUE,
                    remap = "cumsum",
                    inference = TRUE,
                    inference.strategy = "bootstrap",
                    market.returns = OtherReturns$NiftyIndex)
@

In addition to setting `type' to `marketResidual', we are now required
to supply data for the market index, $r_{Mt}$. In the above example,
this is the data object NiftyIndex supplied from the OtherReturns data
object in the package. This is just a zoo vector with daily returns of
the stock market index.

\begin{figure}
\begin{center}
<<plotesmm,fig=TRUE,width=4,height=2.5>>=
par(mai=c(.8,.8,.2,.2))
plot(es.mm, cex.axis=.7, cex.lab=.7)
@
\end{center}
\caption{Adjustment using the market model}\label{f:esplotmm}
\end{figure}

A comparison of the range of the $y$ axis in Figure \ref{f:esplot1}
versus that seen in Figure \ref{f:esplotmm} shows the substantial
improvement in statistical efficiency that was obtained by market
model adjustment.

We close our treatment of the standard finance event study with one
step forward in further reducing $\textrm{Var}(\epsilon)$: an
`augmented market model' regression with more than one explanatory
variable. The augmented market model uses regressions like:

\[
r_{jt} = \alpha_j + \beta_{1,j} r_{M1,t} + \beta_{2,j} r_{M2,t}
         + \epsilon_{jt}
\]

where in addition to the market index $r_{M1,t}$, there is an
additional explanatory variable $r_{M2,t}$. One natural candidate is
the returns on the exchange rate, but there are many other candidates.

An extensive literature has worked out the unique problems of
econometrics that need to be addressed in doing augmented market
models. The package uses the synthesis of this literature as presented
in \citet{patnaik2010amm}.\footnote{The source code for augmented
 market models in the package is derived from the source code written
 for \citet{patnaik2010amm}.}

To repeat the stock splits event study using augmented market models,
we use the incantation:

% Check some error
<<ammadjustment>>=
es.amm <- eventstudy(firm.returns = StockPriceReturns,
                     eventList = SplitDates,
                     width = 10,
                     type = "lmAMM",
                     to.remap = TRUE,
                     remap = "cumsum",
                     inference = TRUE,
                     inference.strategy = "bootstrap",
                     market.returns = OtherReturns$NiftyIndex,
                     others = OtherReturns$USDINR,
                     market.returns.purge = TRUE)
@

Here the additional regressor on the augmented market model is the
returns on the exchange rate, which is the slot USDINR in
OtherReturns. The full capabilities for doing augmented market models
from \citet{patnaik2010amm} are available. These are documented
elsewhere. For the present moment, we will use the feature
market.returns.purge without explaining it.

Let us look at the gains in statistical efficiency across the three
variants of the event study. We will use the width of the confidence
interval at date 0 as a measure of efficiency.

<<efficiencycomparison,results=verbatim>>=
tmp <- rbind(es$eventstudy.output[10, ],
             es.mm$eventstudy.output[10, ])[, c(1, 3)]
rownames(tmp) <- c("None", "MM")
tmp[, 2] - tmp[, 1]
@

This shows a sharp reduction in the width of the bootstrap 95\%
confidence interval from None to MM adjustment. Over and above this, a
small gain is obtained when going from MM adjustment to AMM
adjustment.

\newpage
\bibliographystyle{jss} \bibliography{es}

\end{document}
More information about the Eventstudies-commits mailing list