[Eventstudies-commits] r148 - pkg/vignettes
noreply at r-forge.r-project.org
Mon Oct 28 20:16:50 CET 2013
Author: vimsaa
Date: 2013-10-28 20:16:50 +0100 (Mon, 28 Oct 2013)
New Revision: 148
Modified:
pkg/vignettes/ees.Rnw
pkg/vignettes/eventstudies.Rnw
Log:
Editing for next version of eventstudies complete.
Modified: pkg/vignettes/ees.Rnw
===================================================================
--- pkg/vignettes/ees.Rnw	2013-10-28 16:39:42 UTC (rev 147)
+++ pkg/vignettes/ees.Rnw	2013-10-28 19:16:50 UTC (rev 148)
@@ -8,7 +8,7 @@
\usepackage{parskip}
\usepackage{amsmath}
\title{Introduction to the \textbf{extreme events} functionality}
-\author{Ajay Shah, Vimal Balasubramaniam and Vikram Bahure}
+\author{Vikram Bahure \and Vimal Balasubramaniam \and Ajay Shah}
\begin{document}
%\VignetteIndexEntry{eventstudies: Extreme events functionality}
% \VignetteDepends{}
@@ -27,8 +27,7 @@
\SweaveOpts{engine=R,pdf=TRUE}
\section{Introduction}
-An extreme-event analysis is an analysis of an outcome variable surrounding
-a tail (either right or left tail) event on another variable. This \textit{eventstudies} package includes an extreme events
+An extreme-event analysis is an analysis of an outcome variable surrounding a tail (either right or left tail) event on another variable. This \textit{eventstudies} package includes an extreme events
functionality as a wrapper in \texttt{ees}.
There are several concerns with an extreme-event analysis. Firstly, what happens when multiple tail events (``Clustered events'') occur within one another? We facilitate this analysis with summary statistics on the distribution and run length of events, quantile values to determine ``tail events'', and yearly distribution of the extreme events. Secondly, do results change when we use ``clustered events'' and ``unclustered events'' separately, or, together in the same analysis? This wrapper also facilitates such sensitivity analysis in the study of extreme events.
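The idea of a ``tail event'' can be sketched in a few lines of base R. This is only an illustration of the definition, not the package's \texttt{ees} implementation; the returns series here is simulated.

```r
# Illustrative sketch (base R, simulated data): a left-tail "extreme
# event" is a return below the chosen lower quantile of the series.
set.seed(42)
returns <- rnorm(1000)                   # placeholder returns series
lower.cutoff <- quantile(returns, 0.05)  # 5% lower-tail cutoff
extreme <- returns < lower.cutoff        # left-tail event indicator
sum(extreme)                             # number of left-tail events
```

The package's \texttt{ees} wrapper builds on this notion, additionally reporting run lengths and the clustered/unclustered split discussed below.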
@@ -108,28 +107,19 @@
@
\subsection{Extreme event quantile values}
-Quantile values show 0\%, 25\%, median, 75\%, 100\% and mean values for
-the extreme events data. The results shown below match the second row
-of Table 4 in the paper.
+Quantile values show 0\%, 25\%, median, 75\%, 100\% and mean values for the extreme events data. The results shown below match the second row of Table 4 in the paper.
<<>>=
output$lower.tail$quantile.values
@
\subsection{Yearly distribution of extreme events}
-This table shows the yearly distribution and
-the median value for extreme events data. The results shown below
-are in line with the third and fourth column for S\&P 500 in the Table 5 of the
-paper.
+This table shows the yearly distribution and the median value for extreme events data. The results shown below are in line with the third and fourth columns for S\&P 500 in Table 5 of the paper.
<<>>=
output$lower.tail$yearly.extreme.event
@
-The yearly distribution for extreme events include unclustered event
-and clustered events which are fused. While in extreme event distribution of
-clustered and unclustered event, the clustered events are defined as
-total events in a cluster. For example, if there is a clustered event
-with three consecutive extreme events then we treat that as a single event for analysis.
+The yearly distribution for extreme events includes unclustered events and clustered events, which are fused. In the separate distribution of clustered and unclustered events, however, a clustered event is defined as the total of events in a cluster. For example, a clustered event with three consecutive extreme events is treated as a single event for analysis.
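The fusing of consecutive extreme days into a single clustered event can be illustrated with base R's run-length encoding; the indicator vector below is a toy example, not package data.

```r
# Toy illustration: fuse consecutive extreme days into clusters.
# TRUE marks a tail-event day; a run of consecutive TRUEs is one
# clustered event, an isolated TRUE is an unclustered event.
tail.event <- c(FALSE, TRUE, TRUE, TRUE, FALSE, TRUE, FALSE)
runs <- rle(tail.event)
clustered   <- sum(runs$values & runs$lengths > 1)   # each run counted once
unclustered <- sum(runs$values & runs$lengths == 1)
c(clustered = clustered, unclustered = unclustered)
```

Here the run of three consecutive extreme days counts as a single clustered event, matching the treatment described above.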
\section{Extreme event study plot}
Modified: pkg/vignettes/eventstudies.Rnw
===================================================================
--- pkg/vignettes/eventstudies.Rnw	2013-10-28 16:39:42 UTC (rev 147)
+++ pkg/vignettes/eventstudies.Rnw	2013-10-28 19:16:50 UTC (rev 148)
@@ -8,7 +8,7 @@
\usepackage{parskip}
\usepackage{amsmath}
\title{Introduction to the \textbf{eventstudies} package in R}
-\author{Vikram Bahure and Renuka Sane and Ajay Shah\thanks{We thank
+\author{Vikram Bahure \and Vimal Balasubramaniam \and Ajay Shah\thanks{We thank
Chirag Anand for valuable inputs in the creation of this vignette.}}
\begin{document}
% \VignetteIndexEntry{eventstudies: A package with functionality to do Event Studies}
@@ -18,10 +18,10 @@
\maketitle
\begin{abstract}
- Event study analysis is a ubiquitous tool in the econometric
- analysis of an event and its impact on the measured
+ Event study analysis is an important tool in the econometric
+ analysis of an event and its impact on a measured
outcome. Although widely used in finance, it is a generic tool
- that can be used for other purposes as well. There is, however,
+ that can be used in other disciplines as well. There is, however,
no single repository to undertake such an analysis with
R. \texttt{eventstudies} provides the toolbox to carry out an
event-study analysis. It contains functions to transform data
@@ -34,147 +34,84 @@
\section{Introduction}
-Event study methodology has been primarily used to evaluate the
-impact of specific events on the value of a firm. The typical
-procedure for conducting an event study involves
+Event study methodology has been primarily used to evaluate the impact of specific events on the value of a firm. The typical procedure for conducting an event study involves
\citep{MacKinlay1997}:
\begin{enumerate}
-\item Defining the event of interest and the event window. The
-  event window should be larger than the specific period of
-  interest.
-\item Determining a measure of abnormal returns, the most common
-  being the \textit{constant mean return model} and the
-  \textit{market model}. This is important to disentangle the
-  effects on stock prices of information that is specific to the
-  firm under question (e.g. stock split announcement) and
-  information that is likely to affect all stock prices
-  (e.g. interest rates).
+\item Defining the event of interest and the event window. The event window should be larger than the specific period of interest.
+\item Determining a measure of abnormal returns, the most common being the \textit{constant mean return model} and the \textit{market model}. This is important to disentangle the effects on stock prices of information that is specific to the firm under question (e.g. stock split announcement) and information that is likely to affect all stock prices (e.g. interest rates).
\item Analysis of firm returns on or after the event date.
\end{enumerate}
-The \textbf{eventstudies} package brings together the various
-aspects of an event study analysis in one package. It provides for
-functions to calculate returns, transform data into event-time,
-and inference procedures. All functions in this package are
-implemented in the R system for statistical computing. The
-package, and R are available at no cost under the terms of the
-general public license (GPL) from the comprehensive R archive
-network (CRAN, \texttt{http://CRAN.R-project.org}).
+The \textbf{eventstudies} package brings together the various aspects of an event study analysis in one package. It provides functions to calculate returns, transform data into event-time, and inference procedures. All functions in this package are implemented in the R system for statistical computing. The package, and R itself, are available at no cost under the terms of the General Public License (GPL) from the Comprehensive R Archive Network (CRAN, \texttt{http://CRAN.R-project.org}).
-This paper is organised as follows. A skeletal event study model
-is presented in Section \ref{s:model}. Section \ref{s:approach}
-discusses the software approach used in this package. Section
-\ref{s:example} shows an example.
+This paper is organised as follows. A skeletal event study model is presented in Section \ref{s:model}. Section \ref{s:approach} discusses the software approach used in this package. Section \ref{s:example} shows an example.
\section{Skeletal event study model} \label{s:model}
-In this section, we present a model to evaluate the impact of
-stock splits on returns \citep{Corrado2011}.
+In this section, we present a model to evaluate the impact of stock splits on returns \citep{Corrado2011}.
-Let day $0$ identify the stock split date under scrutiny and let
-days t = $\ldots,-3,-2,-1$ represent trading days leading up to the
-event. If the return on the firm with the stock split $R_0$ is
-statistically large compared to returns on previous dates, we may
-conclude that the stock split event had a significant price
-impact.
+Let day $0$ identify the stock split date under scrutiny and let days $t = \ldots,-3,-2,-1$ represent trading days leading up to the event. If the return on the firm with the stock split, $R_0$, is statistically large compared to returns on previous dates, we may conclude that the stock split event had a significant price impact.
-To disentangle the impact of the stock split on the returns of the
-firm from general market-wide information, we use the market model
-to adjust the event-date return, thus removing the influence of
-market information.
+To disentangle the impact of the stock split on the returns of the firm from general market-wide information, we use the market model to adjust the event-date return, thus removing the influence of market information.
The market model is calculated as follows:
\[ R_t = a + b RM_t + e_t \]
-The firm-specific return $e_t$ is unrelated to the overall market
-and has an expected value of zero. Hence, the expected event date
-return conditional on the event date market return is
+The firm-specific return $e_t$ is unrelated to the overall market and has an expected value of zero. Hence, the expected event date return conditional on the event date market return is
\[ E(R_0|RM_0) = a + b RM_0 \]
-The abnormal return $A_0$ is simply the day-zero firm-specific
-return $e_0$:
+The abnormal return $A_0$ is simply the day-zero firm-specific return $e_0$:
\[ A_0 = R_0 - E(R_0|RM_0) = R_0 - a - b RM_0 \]
-A series of abnormal returns from previous periods are also
-calculated for comparison, and to determine statistical
-significance.
+A series of abnormal returns from previous periods is also calculated for comparison, and to determine statistical significance.
\[ A_t = R_t - E(R_t|RM_t) = R_t - a - b RM_t \]
-The event date abnormal return $A_0$ is then assessed for
-statistical significance relative to the distribution of abnormal
-returns $A_t$ in the control period. A common assumption used to
-formulate tests of statistical significance is that abnormal
-returns are normally distributed.
+The event date abnormal return $A_0$ is then assessed for statistical significance relative to the distribution of abnormal returns $A_t$ in the control period. A common assumption used to formulate tests of statistical significance is that abnormal returns are normally distributed. However, such distributional assumptions may not be necessary with nonparametric procedures. For a detailed exposition of the theoretical framework of event studies, please refer to % Insert Corrado (2011) and Campbell, Lo, MacKinlay ``Econometrics of Financial Markets''
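The market-model adjustment above amounts to an OLS regression of firm returns on market returns, with the residuals serving as abnormal returns. A minimal base-R sketch, on simulated placeholder data (not the package's internal code):

```r
# Sketch of the market-model adjustment: R_t = a + b*RM_t + e_t,
# with abnormal returns A_t = R_t - a - b*RM_t (the OLS residuals).
# Data here are simulated placeholders.
set.seed(1)
RM <- rnorm(250, mean = 0, sd = 0.01)              # market returns
R  <- 0.001 + 1.2 * RM + rnorm(250, 0, 0.005)      # firm returns
fit <- lm(R ~ RM)                                  # estimates a and b
A   <- residuals(fit)                              # abnormal returns A_t
A0  <- A[250]                                      # event-date abnormal return
```

$A_0$ would then be compared against the distribution of the remaining $A_t$ in the control period.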
\section{Software approach} \label{s:approach}
\textbf{eventstudies} offers the following functionalities:
\begin{itemize}
-\item Models for calculating returns
-\item Procedures for converting data to event-time and remapping
-  event-frame
+\item Models for calculating idiosyncratic returns
+\item Procedures for converting data from physical time into event time
\item Procedures for inference
\end{itemize}
-\subsection{Models for calculating returns}
+\subsection{Models for calculating idiosyncratic returns}
Firm returns can be calculated using the following functions:
\begin{itemize}
-\item \texttt{excessReturn}: estimation of excess returns
-  i.e. $R_j - R_m$ where $R_j$ is the return of firm $j$ and $R_m$
-  is the market return.
+\item \texttt{excessReturn}: estimation of excess returns, i.e.\ $R_j - R_m$ where $R_j$ is the return of firm $j$ and $R_m$ is the market return.
-\item \texttt{marketResidual}: estimation of market model to
-  obtain idiosyncratic firm returns, controlling for the market
-  returns.
+\item \texttt{marketResidual}: estimation of the market model to obtain idiosyncratic firm returns, controlling for the market returns.
-\item \texttt{AMM}: estimation of the augmented market model which
-  provides user the capability to run a multivariate market model
-  with orthogonalisation and obtain idiosyncratic returns.
+\item \texttt{AMM}: estimation of the augmented market model, which provides the user the capability to run a multivariate market model with orthogonalisation and obtain idiosyncratic returns.
\end{itemize}
-The function \texttt{AMM} is a generic function that allows users
-to run an augmented market model and undertake the analysis of the
-market model in a multivariate setting and obtain the idiosyncratic returns.
-Often times, there is a need for an auxilliary regression that purges the effect of the explanatory
-variables on one another. This function allows for the estimation of such a residual for a single
-firm using the function \texttt{onefirmAMM}. Advanced users may also want to look at \texttt{manyfirmsAMM}.
+The function \texttt{AMM} is a generic function that allows users to run an augmented market model, undertake the analysis of the market model in a multivariate setting, and obtain idiosyncratic returns. Oftentimes, there is a need for an auxiliary regression that purges the effect of the explanatory variables on one another. This function allows for the estimation of such a residual for a single firm using the function \texttt{onefirmAMM}. Advanced users may also want to look at \texttt{manyfirmsAMM}.
-The output from all these models are also time series objects of class ``zoo'' or ``xts''. This
-becomes the input for the remaining steps in the event study analysis, of which the first step is to convert a time-series object into the event-time frame.
+The output from each of these models is a time series object of class ``zoo'' or ``xts''. This becomes the input for the remaining steps in the event study analysis, of which the first step is to convert a time-series object into the event-time frame.
-\subsection{Converting the dataset to an event time}
+\subsection{Converting data from physical time into event time}
-The conversion of the returns data to event-time, and to
-cumulate returns is done using the following functions:
+The conversion of the returns data to event-time, and the cumulation of returns, is done using the following functions:
\begin{itemize}
-\item \texttt{phys2eventtime}: conversion to an event frame. This
-  requires a time series object of stock price returns (our outcome variable) and a
-  data frame with two columns \textit{outcome.unit} and \textit{event.date}, the
-  firms and the date on which the event occurred respectively.
+\item \texttt{phys2eventtime}: conversion to an event frame. This requires a time series object of stock price returns (our outcome variable) and a data frame with two columns, \textit{outcome.unit} and \textit{event.date}: the firms and the dates on which the events occurred, respectively.
-\item \texttt{remap.cumsum}: conversion of returns to cumulative
-  returns. The input for this function is the time-series object in
-  event-time that is obtained as the output from \texttt{phys2eventtime}.
+\item \texttt{remap.cumsum}: conversion of returns to cumulative returns. The input for this function is the time-series object in event-time that is obtained as the output from \texttt{phys2eventtime}.
\end{itemize}
-The function \texttt{phys2eventtime} is generic and can handle objects of any time frequency,
-including intraday high frequency data. While \texttt{remap.cumsum} is sufficiently general
-to be used on any time series object for which we would like to obtain cumulative values, in
-this context, the attempt is to cumulate idiosyncratic returns to obtain a clear identification
-of the magnitude and size of the impact of an event. % TODO: Cite Brown and Warner (1983) here.
+The function \texttt{phys2eventtime} is generic and can handle objects of any time frequency, including intraday high-frequency data. While \texttt{remap.cumsum} is sufficiently general to be used on any time series object for which we would like to obtain cumulative values, in this context the attempt is to cumulate idiosyncratic returns to obtain a clear identification of the magnitude of the impact of an event. % TODO: Cite Brown and Warner (1983) here.
-At this point of analysis, we hold one important data object organised in event-time, where each
-column of this object corresponds to the event on the outcome unit, with values before and after the
-event organised as $-T,-(T-1),\ldots,-3,-2,-1,0,1,2,3,\ldots,T-1,T$. The package, once again, is very general and allows users to decide on the number of time units before and after the event that must be used for statistical inference.
+At this point of the analysis, we hold one important data object organised in event-time, where each column of this object corresponds to the event on the outcome unit, with values before and after the event organised as $-T,-(T-1),\ldots,-3,-2,-1,0,1,2,3,\ldots,T-1,T$. The package, once again, is very general and allows users to decide on the number of time units before and after the event that must be used for statistical inference.
\subsection{Procedures for inference}
@@ -188,54 +125,37 @@
generate the distribution of cumulative returns series.
\end{itemize}
-The second stage in the analysis of event studies is statistical inference. At present, we have two different inference procedures incorporated into the package. The first of the two, \texttt{inference.wilcox} is a traditional test of inference for event studies. The second inference procedure, \texttt{inference.bootstrap} is another nonparametric procedure that exploits the multiplicity of outcome units for which such an event has taken place. For example, a corporate action event such as stock splits may have taken place for many firms (outcome units) at different points in time. This cross-sectional variation in outcome is exploited by the bootstrap inference procedure.
+The last stage in the analysis of event studies is statistical inference. At present, we have two different inference procedures incorporated into the package. The first of the two, \texttt{inference.wilcox}, is a traditional test of inference for event studies. The second inference procedure, \texttt{inference.bootstrap}, is a nonparametric procedure that exploits the multiplicity of outcome units for which such an event has taken place. For example, a corporate action event such as a stock split may have taken place for many firms (outcome units) at different points in time. This cross-sectional variation in outcomes is exploited by the bootstrap inference procedure.
-The inference procedures would generally require no more than the object generated in the first stage of our analysis, for instance, the cumulative returns in event-time (\texttt{es.w}), and whether the user wants a plot of the results using the inference procedure used.
+The inference procedures generally require no more than the object generated in the second stage of our analysis, for instance the cumulative returns in event-time (\texttt{es.w}), and whether the user wants a plot of the results from the inference procedure used.
-We intend to expand the suite of inference procedures available for analysis to include the more traditional procedures such as the Patell $t$-test.
+We intend to expand the suite of inference procedures available for analysis to include the more traditional procedures such as the Patell $t$-test.
-\section{Example: Performing event-study analysis}
-\label{s:example}
+\section{Performing event-study analysis: An example}\label{s:example}
-In this section, we demonstrate the package with a study of the impact of stock
-splits on the stock prices of firms. We use the returns series of
-the thirty index companies, as of 2013, of the Bombay Stock
-Exchange (BSE), from 2001 to 2013. We have stock split dates for
-each firm from 2000 onwards.
+In this section, we demonstrate the package with a study of the impact of stock splits on the stock prices of firms. We use the returns series of the thirty index companies, as of 2013, of the Bombay Stock Exchange (BSE), between 2001 and 2013. We also have stock split dates for each firm since 2000.
-Our data consists of a \textit{zoo} object for stock price returns
-for the thirty firms. This is called \textit{StockPriceReturns}
-and another zoo object, \textit{nifty.index}, of market returns.
+Our data consists of a \textit{zoo} object for stock price returns for the thirty firms. This is our ``outcome variable'' of interest, and is called \textit{StockPriceReturns}. Another zoo object, \textit{nifty.index}, contains a time series of market returns.
<<>>=
library(eventstudies)
data(StockPriceReturns)
data(nifty.index)
str(StockPriceReturns)
## head(StockPriceReturns)
head(StockPriceReturns[rowSums(is.na((StockPriceReturns)))==3,1:3])
head(nifty.index)
@
-The dates of interest and the firms on which the event occurred
-are stored in a data frame, \textit{SplitDates} with two columns
-\textit{unit}, the name of the firms, and \textit{when}, the date
-of the occurrence of the event. \textit{unit} should be in
-\textit{character} format and \textit{when} in \textit{Date}
-format.
+As required by the package, the event date (the dates on which stock splits occurred for these 30 firms) for each firm is recorded in \textit{SplitDates}, where ``outcome.unit'' is the name of the firm (the column name in ``StockPriceReturns'') and ``event.date'' is when the event took place for that outcome unit. In R, the column ``outcome.unit'' has to be of class ``character'' and ``event.date'' of class ``Date'', as seen below:
<<>>=
data(SplitDates)
head(SplitDates)
@
-\subsection{Calculating returns}
+\subsection{Calculating idiosyncratic returns}
-The function \texttt{excessReturn} calculates the excess returns
-while \texttt{marketResidual} calculates the market model. The two
-inputs are \texttt{firm.returns} and \texttt{market.returns}. The
-results are stored in \texttt{er.result} and \texttt{mm.result}
-respectively.
+Calculating returns, though straightforward, can be done in a variety of different ways. The function \texttt{excessReturn} calculates the excess returns while \texttt{marketResidual} estimates the market model. The two inputs are \texttt{firm.returns} and \texttt{market.returns}. The results are stored in \texttt{er.result} and \texttt{mm.result} respectively. These are the standard idiosyncratic return estimations possible with this package.
<<>>=
# Excess return
er.result <- excessReturn(firm.returns = StockPriceReturns, market.returns = nifty.index)
@@ -255,16 +175,9 @@
@
-The \texttt{AMM} model requires a time-series of the exchange rate
-along with firm returns and market returns. This is done by
-loading the \textit{inr} data, which is the INR-USD exchange rate
-for the same period. The complete dataset consisting of stock
-returns, market returns, and exchange rate is first created.
+To provide flexibility to users, a general multivariate framework to estimate idiosyncratic returns, the augmented market model, is also available. In this case, we would like to purge any currency returns from the outcome return of interest, and the \textit{a priori} expectation is that the variance of the residual is reduced in this process. The \texttt{AMM} model requires a time-series of the exchange rate along with firm returns and market returns. This is done by loading the \textit{inr} data, which is the INR-USD exchange rate for the same period. The complete data set consisting of stock returns, market returns, and exchange rate is first created.
-The inputs into the \texttt{AMM} model also include
-\texttt{firm.returns} and \texttt{market.returns}. Currency
-returns can be specified using \texttt{others}. Two types of the
-AMM model are supported: \textit{residual} and \textit{all}.
+Inputs into the \texttt{AMM} model also include \texttt{firm.returns} and \texttt{market.returns}. Currency returns can be specified using \texttt{others}. In the general case, this proves to be a multivariate specification with the flexibility to run auxiliary regressions to specify the regression appropriately.
% AMM model
<<>>=
@@ -285,15 +198,9 @@
@
-\subsection{Conversion to event frame}
+\subsection{Conversion to event-time frame}
-For conversion to event time, the event date and the returns on
-that date are indexed to 0. Post-event dates are indexed as
-positive, and pre-event dates as negative. The conversion is done
-using the \texttt{phys2eventtime} function. The function requires
-a returns series, \textit{StockPriceReturns}, a data frame with
-event unit and time, \textit{SplitDates}, and the width for
-creating the event-frame.
+The conversion from physical time into event time combines the two objects we have constructed so far: \textit{SplitDates} and \textit{StockPriceReturns}. These two objects are input matrices for the function \texttt{phys2eventtime}. With the specification of ``width=10'' in the function, we require \texttt{phys2eventtime} to define a successful unit entry (an event) in the result as one where there is no missing data for 10 days before and after the event. This is marked as ``success'' in the resulting list object. With data missing, the unit is flagged ``wdatamissing''. In case the event falls outside of the range of physical time provided in the input data, the unit entry will be flagged ``wrongspan'', and if the unit in \textit{SplitDates} is missing in \textit{StockPriceReturns}, we identify these entries as ``unitmissing''. This allows the user to identify successful entries in the sample for an analysis based on event time. In this example, we make use of successful entries in the data and the output object is stored as \textit{es.w}:
<<>>=
es <- phys2eventtime(z=StockPriceReturns, events=SplitDates,
@@ -307,50 +214,27 @@
es.w[,1]
@
-The output for \texttt{phys2eventtime} is a list. The first
-element of a list is a time series object which is converted to
-event time.
+The identification of the impact of such an event on returns is better represented with cumulative returns as the outcome variable. We cumulate returns on this (event) time series object by applying the function \texttt{remap.cumsum}.
-The second element shows the \textit{outcome} of the
-conversion. If the outcome is \textit{success} then all is well
-with the given window as specified by the width. If there are too
-many NAs within the event window, the outcome is
-\textit{wdatamissing}. The outcome for the event date not being
-within the span of data for the unit is \textit{wrongspan} while
-the outcome if a unit named in events is not in the returns data
-is \textit{unitmissing}.
-
-In the example described here, es.w contains the returns in
-event-time form for all the stocks. It contains variables for whom
-all data is available.
-
-Once the returns are converted to event-time,
-\texttt{remap.cumsum} function is used to convert the returns to
-cumulative returns.
-
<<>>=
es.cs <- remap.cumsum(es.w, is.pc=FALSE, base=0)
es.cs[,1]
@
+The last stage in the analysis of an event study is that of obtaining statistical confidence in the result by using different statistical inference procedures.
+
\subsection{Inference procedures}
\subsubsection{Bootstrap inference}
-After converting to event frame and estimating the variable of
-interest, we need to check the stability of the result and derive
-other estimates like standard errors and confidence intervals. For
-this, we generate the sampling distribution for the estimate using
-bootstrap inference. A detailed explanation of the methodology is
-presented in \citep{PatnaikShahSingh2013}. This specific approach
-used here is based on \citet{davison1986efficient}.
+While the package is sufficiently generalised to undertake a wide array of inference procedures, at present it contains only two: (1) the bootstrap and (2) the Wilcoxon signed rank test. We look at both in turn below.
-The \textit{inference.bootstrap} function does the bootstrap to
-generate distribution of $\overline{CR}$. The bootstrap generates
-confidence interval at 2.5 percent and 97.5 percent for the
-estimate.
+\subsubsection{Bootstrap inference}
+We hold an event time object that contains several cross-sectional observations for a single definition of an event: the stock split. At each event time, i.e., $-T,-(T-1),\ldots,0,\ldots,T-1,T$, we hold observations for 30 stocks. At this point, without any assumption on the distribution of these cross-sectional returns, we can generate the sampling distribution for the location estimator (the mean in this case) using nonparametric inference procedures. The bootstrap is our primary function in the suite of inference procedures under construction.\footnote{A detailed explanation of the methodology is presented in \citet{PatnaikShahSingh2013}. This specific approach is based on \citet{davison1986efficient}.}
+\textit{inference.bootstrap} performs the bootstrap to generate the distribution of $\overline{CR}$. The bootstrap generates confidence intervals at 2.5 percent and 97.5 percent for the estimate.
+
<<>>=
result <- inference.bootstrap(es.w=es.cs, to.plot=TRUE)
+print(result)
@
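The mechanics of the bootstrap can be sketched in base R: resample firms (columns) with replacement and recompute the cross-sectional mean at each event time. The matrix below is simulated placeholder data standing in for the event-time object.

```r
# Minimal sketch of bootstrap bands for the mean cumulative return
# at each event time; es.sim is simulated placeholder data.
set.seed(2)
es.sim <- matrix(rnorm(21 * 30), nrow = 21, ncol = 30)  # 21 event days x 30 firms
boot.means <- replicate(1000, {
  cols <- sample(ncol(es.sim), replace = TRUE)  # resample firms
  rowMeans(es.sim[, cols])                      # mean path for this draw
})
ci <- apply(boot.means, 1, quantile, probs = c(0.025, 0.975))
dim(ci)  # 2 x 21: lower and upper bands across event time
```

\texttt{inference.bootstrap} wraps this resampling logic, together with plotting of the bands.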
\begin{figure}[t]
@@ -367,11 +251,13 @@
\end{figure}
\subsubsection{Wilcoxon signed rank test}
-We next compute the Wilcoxon signed rank test, which is a
-nonparametric inference test to compute the confidence interval.
+Another nonparametric inference procedure, widely used in event-study analysis, is the Wilcoxon signed rank test. This package provides a wrapper that uses the function \texttt{wilcox.test} in \texttt{stats}.
+
<<>>=
result <- inference.wilcox(es.w=es.cs, to.plot=TRUE)
+result
@
+
\begin{figure}[t]
\begin{center}
\caption{Stock splits event and response of respective stock
@@ -387,9 +273,7 @@
\subsection{General eventstudy function}
-\texttt{eventstudy} is a wrapper around all the internal
-functions. Several examples of the use of this function are
-provided below.
+While the general framework to perform an event-study analysis has been explained in detail with an example, the package also has a wrapper that makes use of all the functions explained above to generate the end result for analysis. While this is a quick mechanism to study events that fit this style of analysis, we encourage users to make use of the core functionalities to extend and use this package in ways the wrapper does not capture. Several examples of this wrapper, \texttt{eventstudy}, are provided below for convenience:
<<>>=
## Event study without adjustment
@@ -426,6 +310,7 @@
@
+The analysis of events has a wide array of tools and procedures available in the econometric literature. The objective of this package is to start with a core group of functionalities that deliver a platform for event studies, following which we intend to extend the package and facilitate more inference procedures.
\section{Computational details}
The package code is written in R. It has dependencies to
@@ -436,8 +321,14 @@
2013}). R itself as well as these packages can be obtained from
\href{http://CRAN.R-project.org/}{CRAN}.
+
+
+
%\section{Acknowledgments}
+
+
+
% \newpage
\bibliographystyle{jss} \bibliography{es}