From noreply at r-forge.r-project.org Mon Oct 28 16:42:24 2013
From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org)
Date: Mon, 28 Oct 2013 16:42:24 +0100 (CET)
Subject: [Eventstudies-commits] r146 - pkg/vignettes
Message-ID: <20131028154224.679A4185EA2@r-forge.r-project.org>
Author: vimsaa
Date: 2013-10-28 16:42:23 +0100 (Mon, 28 Oct 2013)
New Revision: 146
Modified:
pkg/vignettes/eventstudies.Rnw
Log:
Modifications to the vignette committed. Work in progress.
Modified: pkg/vignettes/eventstudies.Rnw
===================================================================
--- pkg/vignettes/eventstudies.Rnw	2013-09-17 08:39:50 UTC (rev 145)
+++ pkg/vignettes/eventstudies.Rnw	2013-10-28 15:42:23 UTC (rev 146)
@@ -26,7 +26,7 @@
R. \texttt{eventstudies} provides the toolbox to carry out an
event-study analysis. It contains functions to transform data
into the event-time frame and procedures for statistical
- inference. In this vignette, we provide a finance example and
+ inference. In this vignette, we provide an example from the field of finance and
utilise the rich features of this package.
\end{abstract}
@@ -53,7 +53,7 @@
\end{enumerate}
The \textbf{eventstudies} package brings together the various
-aspects of an event study analysis in one library. It provides for
+aspects of an event study analysis in one package. It provides for
functions to calculate returns, transform data into event-time,
and inference procedures. All functions in this package are
implemented in the R system for statistical computing. The
@@ -134,18 +134,21 @@
obtain idiosyncratic firm returns, controlling for the market
returns.
-\item \texttt{AMM}: estimation of the Augmented market model which
+\item \texttt{AMM}: estimation of the augmented market model which
provides the user the capability to run a multivariate market model
with orthogonalisation and obtain idiosyncratic returns.
\end{itemize}
% Once AMM() is rewritten, one paragraph on the onefirmAMM
% arguments here used with AMM(...).
+The function \texttt{AMM} is a generic function that allows users
+to run an augmented market model in a multivariate setting and
+obtain idiosyncratic returns.
+Oftentimes, there is a need for an auxiliary regression that purges the effect of the explanatory
+variables on one another. This function allows for the estimation of such a residual for a single
+firm using the function \texttt{onefirmAMM}. Advanced users may also want to look at \texttt{manyfirmsAMM}.
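
A concrete sketch of how the \texttt{AMM} workflow described above might be invoked. The object names (\texttt{firm.returns}, \texttt{market.returns}, \texttt{currency.returns}) and the argument names shown are illustrative assumptions, not the package's documented interface:

```r
## Illustrative sketch only: argument names below are assumptions,
## not the documented interface of the eventstudies package.
library(eventstudies)

## firm.returns: zoo object of firm returns (outcome variable)
## market.returns, currency.returns: zoo objects of explanatory variables
amm.result <- AMM(rj = firm.returns,
                  rM1 = market.returns,      # market index returns
                  others = currency.returns, # regressor orthogonalised against rM1
                  nlags = 1)
```

The idiosyncratic returns recovered from such a call would then feed into the conversion to event time described below.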
-The output from these models is also a time-series object. This
-becomes the input for converting to event time. % Check if I can
-% work with 'xts' and/or 'zoo'?
+The output from all these models is a time series object of class ``zoo'' or ``xts''. This
+becomes the input for the remaining steps in the event study analysis, the first of which is to convert the time-series object into the event-time frame.
\subsection{Converting the dataset to an event time}
@@ -154,33 +157,47 @@
\begin{itemize}
\item \texttt{phys2eventtime}: conversion to an event frame. This
- requires a time series object of stock price returns and an
- object with two columns \textit{unit} and \textit{when}, the
+ requires a time series object of stock price returns (our outcome variable) and a
+ data frame with two columns \textit{outcome.unit} and \textit{event.date}, the
firms and the date on which the event occurred respectively.
\item \texttt{remap.cumsum}: conversion of returns to cumulative
- returns. The input for this function is the time-series data in
- event-time that is the output from \texttt{phys2eventtime}.
+ returns. The input for this function is the time-series object in
+ event-time that is obtained as the output from \texttt{phys2eventtime}.
\end{itemize}
+The function \texttt{phys2eventtime} is generic and can handle objects of any time frequency,
+including intraday high-frequency data. While \texttt{remap.cumsum} is sufficiently general
+to be used on any time series object for which we would like to obtain cumulative values, in
+this context, the aim is to cumulate idiosyncratic returns to obtain a clear identification
+of the magnitude and size of the impact of an event. % TODO: Cite Brown and Warner (1983) here.
+
+At this point in the analysis, we hold one important data object organised in event-time, where each
+column of this object corresponds to the event on the outcome unit, with values around the
+event organised as $-T, -(T-1), \ldots, -3, -2, -1, 0, 1, 2, 3, \ldots, T-1, T$. The package, once again, is very general and allows users to decide on the number of time units before and after the event that must be used for statistical inference.
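
To make these two steps concrete, a minimal sketch, assuming \texttt{stock.returns} is a zoo object of returns, \texttt{split.dates} is a data frame of outcome units and event dates, and the \texttt{\$z.e} element and argument names follow the description above (all assumptions, not verified against the package):

```r
## Sketch: physical time -> event time -> cumulative returns.
## `stock.returns` (zoo) and `split.dates` (data frame) are placeholders.
es <- phys2eventtime(z = stock.returns, events = split.dates, width = 10)

## Keep 10 time units on either side of the event date (0).
es.w <- window(es$z.e, start = -10, end = 10)

## Cumulate idiosyncratic returns within the event window.
es.cs <- remap.cumsum(es.w, is.pc = FALSE, base = 0)
```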
+
\subsection{Procedures for inference}
+
Procedures for inference include:
\begin{itemize}
-\item \texttt{inference.bootstrap}: estimation of bootstrap to
-  generate the distribution of cumulative returns series.
\item \texttt{inference.wilcox}: estimation of wilcox inference to
generate the distribution of cumulative returns series.
+
+\item \texttt{inference.bootstrap}: estimation of bootstrap to
+ generate the distribution of cumulative returns series.
\end{itemize}

-The arguments for both these include \texttt{es.w}, the cumulative
-returns in event-time. The argument \texttt{to.plot} plots the
-confidence interval around returns series.
+The second stage in the analysis of event studies is statistical inference. At present, we have two different inference procedures incorporated into the package. The first of the two, \texttt{inference.wilcox}, is a traditional test of inference for event studies. The second inference procedure, \texttt{inference.bootstrap}, is another non-parametric procedure that exploits the multiplicity of outcome units for which such an event has taken place. For example, a corporate action event such as a stock split may have taken place for many firms (outcome units) at different points in time. This cross-sectional variation in outcomes is exploited by the bootstrap inference procedure.
+
+The inference procedures generally require no more than the object generated in the first stage of our analysis: for instance, the cumulative returns in event-time (\texttt{es.w}) and whether the user wants a plot of the results from the inference procedure.
+
+We intend to expand the suite of inference procedures available for analysis to include more traditional procedures such as the Patell $t$-test.
+
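
A minimal sketch of an inference call, under the assumption that \texttt{es.cs} holds cumulative returns in event-time and that \texttt{to.plot} controls plotting, as described above (argument usage is an assumption, not verified against the package):

```r
## Sketch: bootstrap inference on cumulative event-time returns.
## `es.cs` is a placeholder for the cumulated event-time object.
result <- inference.bootstrap(es.w = es.cs, to.plot = TRUE)

## The Wilcoxon-based procedure is called analogously:
result.wx <- inference.wilcox(es.w = es.cs, to.plot = FALSE)
```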
\section{Example: Performing eventstudy analysis}
\label{s:example}
-We demonstrate the package with a study of the impact of stock
+In this section, we demonstrate the package with a study of the impact of stock
splits on the stock prices of firms. We use the returns series of
the thirty index companies, as of 2013, of the Bombay Stock
Exchange (BSE), from 2001 to 2013. We have stock split dates for
@@ -411,15 +428,16 @@
\section{Computational details}
-The package code is purely written in R. It has dependencies to
+The package code is written in R. It has dependencies on
zoo
(\href{http://cran.r-project.org/web/packages/zoo/index.html}{Zeileis
2012}) and boot
(\href{http://cran.r-project.org/web/packages/boot/index.html}{Ripley
2013}). R itself as well as these packages can be obtained from
\href{http://CRAN.R-project.org/}{CRAN}.
-% \section{Acknowledgments}
+%\section{Acknowledgments}
+
% \newpage
\bibliographystyle{jss} \bibliography{es}
From noreply at r-forge.r-project.org Mon Oct 28 17:39:42 2013
From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org)
Date: Mon, 28 Oct 2013 17:39:42 +0100 (CET)
Subject: [Eventstudies-commits] r147 - pkg/vignettes
Message-ID: <20131028163942.869A3185030@r-forge.r-project.org>
Author: vimsaa
Date: 2013-10-28 17:39:42 +0100 (Mon, 28 Oct 2013)
New Revision: 147
Modified:
pkg/vignettes/ees.Rnw
Log:
ees.Rnw updated. Work in Progress.
Modified: pkg/vignettes/ees.Rnw
===================================================================
--- pkg/vignettes/ees.Rnw	2013-10-28 15:42:23 UTC (rev 146)
+++ pkg/vignettes/ees.Rnw	2013-10-28 16:39:42 UTC (rev 147)
@@ -1,4 +1,3 @@

\documentclass[a4paper,11pt]{article}
\usepackage{graphicx}
\usepackage{a4wide}
@@ -16,24 +15,28 @@
% \VignetteKeywords{extreme event analysis}
% \VignettePackage{eventstudies}
\maketitle
+
\begin{abstract}
-The \textit{eventstudies} package includes an extreme events
-functionality. This package has \textit{ees}
-function which does extreme event analysis by fusing the
-consecutive extreme events in a single event. The methods and
-functions are elucidated by employing dataset of S\&P 500 and Nifty.
+One specific application of the \textit{eventstudies} package is Patnaik, Shah and Singh (2013). % TODO: Bibliography please.
+The function \texttt{ees} is a wrapper available in the package for
+users to undertake similar ``extreme-events'' analysis.
+We replicate the published work of Patnaik, Shah and Singh (2013) % TODO: bibtex please
+and explore this wrapper in detail in this document.
\end{abstract}
\SweaveOpts{engine=R,pdf=TRUE}
\section{Introduction}
-Using this function, one can understand the distribution and run
-length of the clustered events, quantile values for the extreme
-events and yearly distribution of the extreme events. In the sections
-below we replicate the analysis for S\&P 500 from the Patnaik, Shah
-and Singh (2013) and we generate the extreme event study plot for
-event on S\&P 500 and response of NIFTY. A detail methodology is also
-discussed in the paper.
+An extreme-event analysis is an analysis of an outcome variable surrounding
+a tail (either right or left tail) event on another variable. This \textit{eventstudies} package includes an extreme events
+functionality as a wrapper in \texttt{ees}.
+
+There are several concerns with an extreme-event analysis. Firstly, what happens when multiple tail events (``clustered events'') occur close to one another? We facilitate this analysis with summary statistics on the distribution and run length of events, quantile values to determine ``tail events'', and the yearly distribution of the extreme events. Secondly, do results change when we use ``clustered events'' and ``unclustered events'' separately, or together in the same analysis? This wrapper also facilitates such sensitivity analysis in the study of extreme events.
+
+In the next few sections, we replicate one subsection of results from Patnaik, Shah and Singh (2013) % TODO: bibtex citation.
+that studies whether extreme events on the S\&P 500 affect returns on the domestic Indian stock market as measured by the Nifty index. A detailed mathematical overview of the methodology is also available in the paper.
+
+
\section{Extreme event analysis}
This function needs input in returns format on which extreme
event analysis is to be done. Further, we define tail events for given
@@ -44,75 +47,69 @@
library(eventstudies)
data(eesData)
input <- eesData$sp500
# Suppress messages
deprintize <- function(f){
  return(function(...) {capture.output(w <- f(...)); return(w);});
}
output <- deprintize(ees)(input, prob.value=5)
@
-% I don't understand this output. Maybe you should explain what it means.
-The output is a list and consists of summary statistics for complete
-dataset, extreme event analysis for lower tail and extreme event
-analysis for upper tail. Further, these lower tail and upper tail list
-objects consists of 5 more list objects with following output:
+
+As mentioned earlier, one of the most important aspects of a non-parametric approach to
+an event study analysis is whether the parameters for such an exercise are validated by the general summary statistics of the data set being used. The object \texttt{output} is a list of relevant summary statistics for the data set, along with an extreme event analysis for the lower and upper tails. For each of the tails, the following statistics are available:
+
\begin{enumerate}
-\item Extreme events dataset
-\item Distribution of clustered and unclustered % events.
-\item Run length distribution
-\item Quantile values of extreme events
-\item Yearly distribution of extreme events
+\item Extreme events data set (The input for event study analysis)
+\item Distribution of clustered and unclustered tail events
+\item Distribution of the run length
+\item Quantile values of tail events
+\item Yearly distribution of tail events
\end{enumerate}
\subsection{Summary statistics}
-Here we have data summary for the complete dataset which shows
-minimum, 5\%, 25\%, median, mean, 75\%, 95\%, maximum, standard
-deviation (sd), interquartile range (IQR) and number of
-observations. The output shown below matches with the fourth column
-in Table 1 of the paper.
+
+In \texttt{output\$data.summary}, we present the minimum, maximum, inter-quartile range (IQR), standard deviation (sd), and the distribution at 5\%, 25\%, median, mean, 75\%, and 95\%. This analysis for the S\&P 500 is identical to the results presented in Table 1 of Patnaik, Shah and Singh (2013).
+
<<>>=
output$data.summary
@
+
\subsection{Extreme events dataset}
+
The output for upper tail and lower tail are in the same format as
-mentioned above. The dataset is a time series object which has 2
-columns. The first column is \textit{event.series} column which has
-returns for extreme events and the second column is
-\textit{cluster.pattern} which signifies the number of consecutive
-days in the cluster. Here we show results for the lower tail of S\&P
-500. Below is the extreme event data set on which analysis is done.
+mentioned above. The data set is a time series object with 2
+columns; the first column \textit{event.series} contains
+returns for extreme events and the second column \textit{cluster.pattern} records the number of consecutive days in the cluster. Here we show results for the lower tail of S\&P 500.
+
+% TODO: Show this data set: head(...) with the column ``event.series'' and ``cluster.pattern'' before the str(...) below.
+
+The structure of the overall data set is as follows:
+
<<>>=
str(output$lower.tail$data)
@
\subsection{Distribution of clustered and unclustered events}
-In the analysis we have clustered, unclustered and mixed clusters. We
-remove the mixed clusters and study the rest of the clusters by fusing
-them. Here we show, number of clustered and unclustered data used in
-the analysis. The \textit{removed.clstr} refers to mixed cluster which
-are removed and not used in the analysis. \textit{Tot.used} represents
-total number of extreme events used for the analysis which is sum of
-\textit{unclstr} (unclustered events) and \textit{used.clstr} (used
-clustered events). \textit{Tot}
-are the total number of extreme events in the data set. The results
-shown below match with second row in Table 2 of the paper.
+
+There are several types of clusters in an analysis of extreme events: clusters that lie purely on one of the tails, and clusters that are mixed. Events in mixed clusters typically witness a large upward swing in the outcome variable, followed soon after by a reversal of that move. This ``contamination'' might cause a serious downward bias in the magnitude and direction of the impact of an extreme event. Therefore, it is useful to ensure that such occurrences are not included in the analysis.\footnote{While this is an interesting subject matter all by itself, it is not entirely useful in an analysis of extreme events since any inference with such data will be contaminated.}
+
+Results from Table 2 of Patnaik, Shah and Singh (2013) show that there are several mixed clusters in the data set. In other words, there are many events on the S\&P 500 that provide large positive (negative) returns followed by large negative (positive) returns in the data set. As we look closely at the lower tail events in this vignette, the output for the lower tail events looks like this:
+
<<>>=
output$lower.tail$extreme.event.distribution
@
+``\texttt{unclstr}'' refers to unclustered events, ``\texttt{used.clstr}'' refers to the clusters that are pure and uncontaminated by mixed tail events, and ``\texttt{removed.clstr}'' refers to the mixed clusters. For the analysis in Patnaik, Shah and Singh (2013), only 62 out of 102 events are used. These results are identical to those documented in Table 2 of the paper.
+
\subsection{Run length distribution of clusters}
-Clusters used in the analysis are defined as consecutive extreme
-events. Run length shows total number of clusters with \textit{n} consecutive
-days. In the example below we have 3 clusters with \textit{two}
-consecutive events and 0 clusters with \textit{three} consecutive
-events. The results shown below match with second row in Table 3 of
-the paper.
+
+The next concern is the run length distribution of the clusters used in the analysis. Run length shows the total number of clusters with \textit{n} consecutive days of occurrence. In the example used here, we have 3 clusters with \textit{two} consecutive events and 0 clusters with \textit{three} consecutive events. This is also identical to the one presented in the paper by Patnaik, Shah and Singh (2013).
+
<<>>=
output$lower.tail$runlength
@
\subsection{Extreme event quantile values}
Quantile values show 0\%, 25\%, median, 75\%,100\% and mean values for
-the extreme events data. The results shown below match with second row
+the extreme events data. The results shown below match the second row
of Table 4 in the paper.
<<>>=
output$lower.tail$quantile.values
@@ -121,46 +118,23 @@
\subsection{Yearly distribution of extreme events}
This table shows the yearly distribution and
the median value for extreme events data. The results shown below
-match with third and forth column for S\&P 500 in the Table 5 of the
+are in line with the third and fourth columns for S\&P 500 in Table 5 of the
paper.
+
<<>>=
output$lower.tail$yearly.extreme.event
@
+
The yearly distribution for extreme events include unclustered event
and clustered events which are fused. While in extreme event distribution of
clustered and unclustered event, the clustered events are defined as
total events in a cluster. For example, if there is a clustered event
-with three consecutive extreme events then yearly distribution will
-treat it as one single event. Here below the relationship between the
-Tables is explained through equations:\\\\
-\textit{Sum of yearly distribution for lower tail = 59 \\
-Unclustered events for lower tail = 56\\\\
-Clustered events for lower tail = 3 + 0\\
-Total events in clusters (Adding number of events in each cluster)
-= 3*2 + 0*3 = 6\\
-Total used events = Unclustered events for lower tail + Total events
-in clusters \\ = 56 + 6 = 62 \\\\
-Sum of yearly distribution for lower tail = Unclustered events for
-lower tail + Total events in clusters\\ = 56 + 3 = 59}
-<<>>=
-sum(output$lower.tail$yearly.extreme.event[,"number.lowertail"])
-output$lower.tail$extreme.event.distribution[,"unclstr"]
-output$lower.tail$runlength
-@
+with three consecutive extreme events then we treat that as a single event for analysis.
\section{Extreme event study plot}
-Here, we replicate the Figure 7, from the paper Patnaik, Shah and
-Singh (2013). First, we need to have a merged time series object with
-event series and response series with no missing values for unerring
-results. After getting the time series object we just need to use the
-following function and fill the relevant arguments to generate the
-extreme event study plot.
-The function generates extreme values for the event series with the
-given probability value. Once the values are generated, clustered
-extreme events are fused together for the response series and
-extreme event-study plot is generated for very bad and very good
-events. The detail methodology is mentioned in the paper.
+One of the most attractive features of an event study is its graphical representation. With the steps outlined in the \texttt{eventstudies} vignette, the wrapper \texttt{eesPlot} in the package provides a convenient user interface to replicate Figure 7 from Patnaik, Shah and Singh (2013). The plot presents events on the upper tail as ``very good'' and on the lower tail as ``very bad'' on the event variable, the S\&P 500. The outcome variable studied here is the Nifty, and the y-axis presents the cumulative returns in the Nifty. This is an event graph, where data is centred on the event date (``0'') and the graph shows 4 days before and after the event.
+
<<>>=
eesPlot(z=eesData, response.series.name="nifty", event.series.name="sp500",
titlestring="S&P500", ylab="(Cum.) change in NIFTY", prob.value=5,
@@ -179,11 +153,12 @@
\end{figure}
\section{Computational details}
-The package code is purely written in R. It has dependencies to zoo
+The package code is written in R. It has dependencies on zoo
(\href{http://cran.r-project.org/web/packages/zoo/index.html}{Zeileis
2012}) and boot
(\href{http://cran.r-project.org/web/packages/boot/index.html}{Ripley
2013}). R itself as well as these packages can be obtained from \href{http://CRAN.R-project.org/}{CRAN}.
+
%\section{Acknowledgments}
\end{document}
From noreply at r-forge.r-project.org Mon Oct 28 20:16:50 2013
From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org)
Date: Mon, 28 Oct 2013 20:16:50 +0100 (CET)
Subject: [Eventstudies-commits] r148 - pkg/vignettes
Message-ID: <20131028191650.7F19D185396@r-forge.r-project.org>
Author: vimsaa
Date: 2013-10-28 20:16:50 +0100 (Mon, 28 Oct 2013)
New Revision: 148
Modified:
pkg/vignettes/ees.Rnw
pkg/vignettes/eventstudies.Rnw
Log:
Editing for next version of eventstudies complete.
Modified: pkg/vignettes/ees.Rnw
===================================================================
--- pkg/vignettes/ees.Rnw	2013-10-28 16:39:42 UTC (rev 147)
+++ pkg/vignettes/ees.Rnw	2013-10-28 19:16:50 UTC (rev 148)
@@ -8,7 +8,7 @@
\usepackage{parskip}
\usepackage{amsmath}
\title{Introduction to the \textbf{extreme events} functionality}
-\author{Ajay Shah, Vimal Balasubramaniam and Vikram Bahure}
+\author{Vikram Bahure \and Vimal Balasubramaniam \and Ajay Shah}
\begin{document}
%\VignetteIndexEntry{eventstudies: Extreme events functionality}
% \VignetteDepends{}
@@ -27,8 +27,7 @@
\SweaveOpts{engine=R,pdf=TRUE}
\section{Introduction}
-An extreme-event analysis is an analysis of an outcome variable surrounding
-a tail (either right or left tail) event on another variable. This \textit{eventstudies} package includes an extreme events
+An extreme-event analysis is an analysis of an outcome variable surrounding a tail (either right or left tail) event on another variable. This \textit{eventstudies} package includes an extreme events
functionality as a wrapper in \texttt{ees}.
There are several concerns with an extreme-event analysis. Firstly, what happens when multiple tail events (``clustered events'') occur close to one another? We facilitate this analysis with summary statistics on the distribution and run length of events, quantile values to determine ``tail events'', and the yearly distribution of the extreme events. Secondly, do results change when we use ``clustered events'' and ``unclustered events'' separately, or together in the same analysis? This wrapper also facilitates such sensitivity analysis in the study of extreme events.
@@ -108,28 +107,19 @@
@
\subsection{Extreme event quantile values}
-Quantile values show 0\%, 25\%, median, 75\%,100\% and mean values for
-the extreme events data. The results shown below match the second row
-of Table 4 in the paper.
+Quantile values show 0\%, 25\%, median, 75\%, 100\% and mean values for the extreme events data. The results shown below match the second row of Table 4 in the paper.
<<>>=
output$lower.tail$quantile.values
@
\subsection{Yearly distribution of extreme events}
-This table shows the yearly distribution and
-the median value for extreme events data. The results shown below
-are in line with the third and forth column for S\&P 500 in the Table 5 of the
-paper.
+This table shows the yearly distribution and the median value for the extreme events data. The results shown below are in line with the third and fourth columns for S\&P 500 in Table 5 of the paper.
<<>>=
output$lower.tail$yearly.extreme.event
@
-The yearly distribution for extreme events include unclustered event
-and clustered events which are fused. While in extreme event distribution of
-clustered and unclustered event, the clustered events are defined as
-total events in a cluster. For example, if there is a clustered event
-with three consecutive extreme events then we treat that as a single event for analysis.
+The yearly distribution for extreme events includes unclustered events and clustered events, which are fused. In the extreme event distribution of clustered and unclustered events, the clustered events are counted as the total events in a cluster. For example, if there is a clustered event with three consecutive extreme events, then we treat that as a single event for analysis.
\section{Extreme event study plot}
Modified: pkg/vignettes/eventstudies.Rnw
===================================================================
--- pkg/vignettes/eventstudies.Rnw	2013-10-28 16:39:42 UTC (rev 147)
+++ pkg/vignettes/eventstudies.Rnw	2013-10-28 19:16:50 UTC (rev 148)
@@ -8,7 +8,7 @@
\usepackage{parskip}
\usepackage{amsmath}
\title{Introduction to the \textbf{eventstudies} package in R}
-\author{Vikram Bahure and Renuka Sane and Ajay Shah\thanks{We thank
+\author{Vikram Bahure \and Vimal Balasubramaniam \and Ajay Shah\thanks{We thank
Chirag Anand for valuable inputs in the creation of this vignette.}}
\begin{document}
% \VignetteIndexEntry{eventstudies: A package with functionality to do Event Studies}
@@ -18,10 +18,10 @@
\maketitle
\begin{abstract}
- Event study analysis is a ubiquitous tool in the econometric
- analysis of an event and its impact on the measured
+ Event study analysis is an important tool in the econometric
+ analysis of an event and its impact on a measured
outcome. Although widely used in finance, it is a generic tool
- that can be used for other purposes as well. There is, however,
+ that can be used in other disciplines as well. There is, however,
no single repository to undertake such an analysis with
R. \texttt{eventstudies} provides the toolbox to carry out an
event-study analysis. It contains functions to transform data
@@ -34,147 +34,84 @@
\section{Introduction}
-Event study methodology has been primarily used to evaluate the
-impact of specific events on the value of a firm. The typical
-procedure for conducting an event study involves
+Event study methodology has been primarily used to evaluate the impact of specific events on the value of a firm. The typical procedure for conducting an event study involves
\citep{MacKinlay1997}:
\begin{enumerate}
-\item Defining the event of interest and the event window. The
-  event window should be larger than the specific period of
-  interest.
-\item Determining a measure of abnormal returns, the most common
-  being the \textit{constant mean return model} and the
-  \textit{market model}. This is important to disentangle the
-  effects on stock prices of information that is specific to the
-  firm under question (e.g. stock split announcement) and
-  information that is likely to affect all stock prices
-  (e.g. interest rates).
+\item Defining the event of interest and the event window. The event window should be larger than the specific period of interest.
+\item Determining a measure of abnormal returns, the most common being the \textit{constant mean return model} and the \textit{market model}. This is important to disentangle the effects on stock prices of information that is specific to the firm under question (e.g. stock split announcement) and information that is likely to affect all stock prices (e.g. interest rates).
\item Analysis of firm returns on or after the event date.
\end{enumerate}
-The \textbf{eventstudies} package brings together the various
-aspects of an event study analysis in one package. It provides for
-functions to calculate returns, transform data into event-time,
-and inference procedures. All functions in this package are
-implemented in the R system for statistical computing. The
-package, and R are available at no cost under the terms of the
-general public license (GPL) from the comprehensive R archive
-network (CRAN, \texttt{http://CRAN.R-project.org}).
+The \textbf{eventstudies} package brings together the various aspects of an event study analysis in one package. It provides functions to calculate returns, transform data into event-time, and inference procedures. All functions in this package are implemented in the R system for statistical computing. The package, and R, are available at no cost under the terms of the general public license (GPL) from the comprehensive R archive network (CRAN, \texttt{http://CRAN.R-project.org}).
-This paper is organised as follows. A skeletal event study model
-is presented in Section \ref{s:model}. Section \ref{s:approach}
-discusses the software approach used in this package. Section
-\ref{s:example} shows an example.
+This paper is organised as follows. A skeletal event study model is presented in Section \ref{s:model}. Section \ref{s:approach} discusses the software approach used in this package. Section \ref{s:example} shows an example.
\section{Skeletal event study model} \label{s:model}
-In this section, we present a model to evaluate the impact of
-stock splits on returns \citep{Corrado2011}.
+In this section, we present a model to evaluate the impact of stock splits on returns \citep{Corrado2011}.
-Let day $0$ identify the stock split date under scrutiny and let
-days $t = \ldots,-3,-2,-1$ represent trading days leading up to the
-event. If the return on the firm with the stock split $R_0$ is
-statistically large compared to returns on previous dates, we may
-conclude that the stock split event had a significant price
-impact.
+Let day $0$ identify the stock split date under scrutiny and let days $t = \ldots,-3,-2,-1$ represent trading days leading up to the event. If the return on the firm with the stock split, $R_0$, is statistically large compared to returns on previous dates, we may conclude that the stock split event had a significant price impact.
-To disentangle the impact of the stock split on the returns of the
-firm from general market-wide information, we use the market model
-to adjust the event-date return, thus removing the influence of
-market information.
+To disentangle the impact of the stock split on the returns of the firm from general marketwide information, we use the marketmodel to adjust the eventdate return, thus removing the influence of market information.
The market model is calculated as follows:
\[ R_t = a + b RM_t + e_t \]
-The firm-specific return $e_t$ is unrelated to the overall market
-and has an expected value of zero. Hence, the expected event date
-return conditional on the event date market return is
+The firm-specific return $e_t$ is unrelated to the overall market and has an expected value of zero. Hence, the expected event date return conditional on the event date market return is
\[ E(R_0 | RM_0) = a + b RM_0 \]
-The abnormal return $A_0$ is simply the day-zero firm-specific
-return $e_0$:
+The abnormal return $A_0$ is simply the day-zero firm-specific return $e_0$:
\[ A_0 = R_0 - E(R_0 | RM_0) = R_0 - a - b RM_0 \]
-A series of abnormal returns from previous periods are also
-calculated for comparison, and to determine statistical
-significance.
+A series of abnormal returns from previous periods are also calculated for comparison, and to determine statistical significance.
\[ A_t = R_t - E(R_t | RM_t) = R_t - a - b RM_t \]
-The event date abnormal return $A_0$ is then assessed for
-statistical significance relative to the distribution of abnormal
-returns $A_t$ in the control period. A common assumption used to
-formulate tests of statistical significance is that abnormal
-returns are normally distributed.
+The event date abnormal return $A_0$ is then assessed for statistical significance relative to the distribution of abnormal returns $A_t$ in the control period. A common assumption used to formulate tests of statistical significance is that abnormal returns are normally distributed. However, such distributional assumptions may not be necessary with non-parametric procedures. For a detailed exposition of the theoretical framework of event studies, please refer to % Insert Corrado (2011) and Campbell, Lo, McKinlay ``Econometrics of Financial Markets''
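+
+The market-model algebra above maps directly onto a least-squares regression in base R. A minimal sketch, assuming \texttt{firm.returns} and \texttt{market.returns} are numeric return series over the control period (placeholders; this is not code from the package):

```r
## Market model: R_t = a + b*RM_t + e_t, estimated over the control period.
## `firm.returns` and `market.returns` are assumed return series (placeholders).
mm <- lm(firm.returns ~ market.returns)
a <- coef(mm)[1]  # intercept a
b <- coef(mm)[2]  # market beta b

## Abnormal (firm-specific) returns: A_t = R_t - a - b*RM_t
abnormal <- residuals(mm)
```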
\section{Software approach} \label{s:approach}
\textbf{eventstudies} offers the following functionalities:
\begin{itemize}
+\item Models for calculating idiosyncratic returns
+\item Procedures for converting data from physical time into event time
\item Procedures for inference
\end{itemize}
+\subsection{Models for calculating idiosyncratic returns}
Firm returns can be calculated using the following functions:
\begin{itemize}
+\item \texttt{excessReturn}: estimation of excess returns, i.e. $R_j - R_m$, where $R_j$ is the return of firm $j$ and $R_m$ is the market return.
+\item \texttt{marketResidual}: estimation of the market model to obtain idiosyncratic firm returns, controlling for market returns.
+\item \texttt{AMM}: estimation of the augmented market model, which provides the user the capability to run a multivariate market model with orthogonalisation and obtain idiosyncratic returns.
\end{itemize}
+The function \texttt{AMM} is a generic function that allows users to run an augmented market model, undertake the analysis of the market model in a multivariate setting, and obtain idiosyncratic returns. Often, there is a need for an auxiliary regression that purges the effect of the explanatory variables on one another. The function \texttt{onefirmAMM} allows for the estimation of such a residual for a single firm. Advanced users may also want to look at \texttt{manyfirmsAMM}.
+The output from all these models is also a time series object of class ``zoo'' or ``xts''. This becomes the input for the remaining steps in the event study analysis, the first of which is to convert the time-series object into the event-time frame.
+\subsection{Converting data from physical time into event time}
+The conversion of the returns data to event time, and the cumulation of returns, is done using the following functions:
\begin{itemize}
+\item \texttt{phys2eventtime}: conversion to an event frame. This requires a time series object of stock price returns (our outcome variable) and a data frame with two columns, \textit{outcome.unit} and \textit{event.date}, holding the firms and the dates on which the events occurred, respectively.
+\item \texttt{remap.cumsum}: conversion of returns to cumulative returns. The input for this function is the time-series object in event time that is obtained as the output of \texttt{phys2eventtime}.
\end{itemize}
+The function \texttt{phys2eventtime} is generic and can handle objects of any time frequency, including intraday high-frequency data. While \texttt{remap.cumsum} is sufficiently general to be used on any time series object for which we would like to obtain cumulative values, in this context the attempt is to cumulate idiosyncratic returns to obtain a clear identification of the magnitude and size of the impact of an event. % TODO: Cite Brown and Warner (1983) here.
+At this point of the analysis, we hold one important data object organised in event time, where each column of this object corresponds to the event on the outcome unit, with values before and after the event organised as $-T,-(T-1),...,-3,-2,-1,0,1,2,3,...,T-1,T$. The package, once again, is very general and allows users to decide on the number of time units before and after the event that must be used for statistical inference.
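The physical-time to event-time conversion can be sketched as follows. This is hypothetical data and deliberately simplified logic, not the package's implementation, and Python is used for illustration only.

```python
import pandas as pd

# A toy returns matrix: business-day index, one column per firm,
# and a list of (outcome.unit, event.date) pairs.
dates = pd.bdate_range("2013-01-01", periods=10)
returns = pd.DataFrame({"A": range(10), "B": range(10, 20)},
                       index=dates, dtype=float)
events = [("A", dates[4]), ("B", dates[7])]

w = 2                                          # width around the event
cols = {}
for unit, when in events:
    if unit not in returns.columns:
        continue                               # "unitmissing"
    pos = returns.index.get_loc(when)          # position of the event date
    if pos < w or pos + w >= len(returns):
        continue                               # "wrongspan"
    window = returns[unit].iloc[pos - w: pos + w + 1]
    if window.isna().any():
        continue                               # "wdatamissing"
    # Re-index the window to event time -w, ..., 0, ..., w.
    cols[unit] = pd.Series(window.values, index=range(-w, w + 1))

es_w = pd.DataFrame(cols)                      # "success" entries only
```

Each surviving column is one event; row 0 holds the event-date observation.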
\subsection{Procedures for inference}
@@ -188,54 +125,37 @@
generate the distribution of cumulative returns series.
\end{itemize}
+The last stage in the analysis of event studies is statistical inference. At present, we have two different inference procedures incorporated in the package. The first, \texttt{inference.wilcox}, is a traditional test of inference for event studies. The second, \texttt{inference.bootstrap}, is a nonparametric procedure that exploits the multiplicity of outcome units for which such an event has taken place. For example, a corporate action such as a stock split may have taken place for many firms (outcome units) at different points in time. This cross-sectional variation in outcomes is exploited by the bootstrap inference procedure.
+The inference procedures generally require no more than the object generated in the second stage of our analysis, for instance the cumulative returns in event time (\texttt{es.w}), and an indication of whether the user wants a plot of the results from the inference procedure used.
+We intend to expand the suite of inference procedures available for analysis to include more traditional procedures such as the Patell $t$-test.
+\section{Performing an event-study analysis: An example}\label{s:example}
+In this section, we demonstrate the package with a study of the impact of stock splits on the stock prices of firms. We use the returns series of the thirty index companies, as of 2013, of the Bombay Stock Exchange (BSE), between 2001 and 2013. We also have stock split dates for each firm since 2000.
+Our data consists of a \textit{zoo} object for stock price returns for the thirty firms. This is our ``outcome variable'' of interest, and is called \textit{StockPriceReturns}. Another zoo object, \textit{nifty.index}, contains a time series of market returns.
<<>>=
library(eventstudies)
data(StockPriceReturns)
data(nifty.index)
str(StockPriceReturns)
## head(StockPriceReturns)
head(StockPriceReturns[rowSums(is.na((StockPriceReturns)))==3,1:3])
head(nifty.index)
@
+As required by the package, the event dates (the dates on which stock splits occurred for these 30 firms) are recorded in \textit{SplitDates}, where ``outcome.unit'' is the name of the firm (the column name in ``StockPriceReturns'') and ``event.date'' is when the event took place for that outcome unit. In R, the column ``outcome.unit'' has to be of class ``character'' and ``event.date'' of class ``Date'', as seen below:
<<>>=
data(SplitDates)
head(SplitDates)
@
+\subsection{Calculating idiosyncratic returns}
+Calculating returns, though straightforward, can be done in a variety of ways. The function \texttt{excessReturn} calculates excess returns, while \texttt{marketResidual} estimates the market model. The two inputs are \texttt{firm.returns} and \texttt{market.returns}. The results are stored in \texttt{er.result} and \texttt{mm.result}, respectively. These are the standard idiosyncratic-return estimations possible with this package.
<<>>=
# Excess return
er.result <- excessReturn(firm.returns = StockPriceReturns, market.returns = nifty.index)
@@ -255,16 +175,9 @@
@
+To provide flexibility to users, a general multivariate framework to estimate idiosyncratic returns, the augmented market model, is also available. In this case, we would like to purge any currency returns from the outcome return of interest, and the \textit{a priori} expectation is that the variance of the residual is reduced in the process. The \texttt{AMM} model requires a time series of the exchange rate along with firm returns and market returns. This is done by loading the \textit{inr} data, which is the INR-USD exchange rate for the same period. The complete data set consisting of stock returns, market returns, and the exchange rate is first created.
+Inputs into the \texttt{AMM} model also include \texttt{firm.returns} and \texttt{market.returns}. Currency returns can be specified using \texttt{others}. In the general case, this proves to be a multivariate specification with the flexibility to run auxiliary regressions to specify the regression appropriately.
% AMM model
<<>>=
@@ -285,15 +198,9 @@
@
\subsection{Conversion to event frame}
+\subsection{Conversion to event-time frame}
+The conversion from physical time into event time combines the two objects we have constructed so far: \textit{SplitDates} and \textit{StockPriceReturns}. These two objects are the input matrices for the function \texttt{phys2eventtime}. With the specification ``width=10'' in the function call, we require \texttt{phys2eventtime} to define a successful unit entry (an event) in the result as one where there is no missing data for 10 days before and after the event. This is marked as ``success'' in the resulting list object. With data missing, the unit is flagged ``wdatamissing''. In case the event falls outside the range of physical time provided in the input data, the unit entry is flagged ``wrongspan'', and if the unit in \textit{SplitDates} is missing from \textit{StockPriceReturns}, we identify the entry as ``unitmissing''. This allows the user to identify successful entries in the sample for an analysis based on event time. In this example, we make use of the successful entries in the data, and the output object is stored as \textit{es.w}:
<<>>=
es <- phys2eventtime(z=StockPriceReturns, events=SplitDates,
@@ -307,50 +214,27 @@
es.w[,1]
@
+The identification of the impact of such an event on returns is better represented with cumulative returns as the outcome variable. We cumulate returns on this event-time series object by applying the function \texttt{remap.cumsum}.
<<>>=
es.cs <- remap.cumsum(es.w, is.pc=FALSE, base=0)
es.cs[,1]
@
+The last stage in the analysis of an event study is that of establishing statistical confidence in the result by using different statistical inference procedures.
+
\subsection{Inference procedures}
+While the package is sufficiently generalised to undertake a wide array of inference procedures, at present it contains two: the bootstrap and the Wilcoxon signed rank test. We look at each in turn below.
+\subsubsection{Bootstrap inference}
+We hold an event-time object that contains several cross-sectional observations for a single definition of an event: the stock split. At each event time, i.e., $-T,-(T-1),...,0,...,(T-1),T$, we hold observations for 30 stocks. At this point, without any assumption on the distribution of these cross-sectional returns, we can generate the sampling distribution for the location estimator (the mean, in this case) using nonparametric inference procedures. The bootstrap is our primary function in the suite of inference procedures under construction.\footnote{A detailed explanation of the methodology is presented in \citet{PatnaikShahSingh2013}. This specific approach is based on \citet{davison1986efficient}.}
+\texttt{inference.bootstrap} performs the bootstrap to generate the distribution of $\overline{CR}$. The bootstrap generates a confidence interval at 2.5 percent and 97.5 percent for the estimate.
+
<<>>=
result <- inference.bootstrap(es.w=es.cs, to.plot=TRUE)
+print(result)
@
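The resampling idea behind this procedure can be sketched as follows, on hypothetical data. This is a simplified illustration of the bootstrap over events, not the package's code, and Python is used for illustration only.

```python
import numpy as np

# Hypothetical event-time object: 21 event times (-10..10) by 30 events,
# standing in for the cumulative-returns matrix es.cs.
rng = np.random.default_rng(42)
es_cs = rng.normal(0.0, 1.0, size=(21, 30))

# Resample events (columns) with replacement and recompute the mean
# cumulative return at each event time.
n_boot = 1000
means = np.empty((n_boot, es_cs.shape[0]))
for i in range(n_boot):
    idx = rng.integers(0, es_cs.shape[1], es_cs.shape[1])
    means[i] = es_cs[:, idx].mean(axis=1)

# Pointwise 2.5% and 97.5% bounds of the bootstrap distribution.
lo, hi = np.percentile(means, [2.5, 97.5], axis=0)
```

The two bands `lo` and `hi` correspond to the confidence interval reported around $\overline{CR}$ at each event time.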
\begin{figure}[t]
@@ -367,11 +251,13 @@
\end{figure}
\subsubsection{Wilcoxon signed rank test}
+Another nonparametric inference procedure available, and one used widely in event study analysis, is the Wilcoxon signed rank test. This package provides a wrapper that uses the function \texttt{wilcox.test} in \texttt{stats}.
+
<<>>=
result <- inference.wilcox(es.w=es.cs, to.plot=TRUE)
+result
@
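The computation behind such a test can be sketched as follows, on hypothetical data. Note that R's \texttt{wilcox.test} computes an exact p-value for small samples; the sketch below uses the large-sample normal approximation without tie or zero corrections, and Python is used for illustration only.

```python
import numpy as np
from math import erf, sqrt

# Hypothetical cross-section of day-0 cumulative abnormal returns for
# 30 events; H0: the median is zero.
rng = np.random.default_rng(1)
car = rng.normal(0.02, 0.01, 30)

# Ranks of |x| (1..n), then the signed-rank statistic W+ = sum of ranks
# of the positive observations.
ranks = np.abs(car).argsort().argsort() + 1.0
w_plus = ranks[car > 0].sum()

# Normal approximation to the null distribution of W+.
n = len(car)
mu = n * (n + 1) / 4.0
sigma = sqrt(n * (n + 1) * (2 * n + 1) / 24.0)
z = (w_plus - mu) / sigma
pvalue = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))
```

With returns centred well away from zero, as here, the test rejects the null of a zero median.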
+
\begin{figure}[t]
\begin{center}
\caption{Stock splits event and response of respective stock
@@ -387,9 +273,7 @@
\subsection{General eventstudy function}
+While the general framework for performing an event-study analysis has been explained in detail with an example, the package also provides a wrapper that makes use of all the functions explained above to generate the end result for analysis. While this is a quick mechanism to study events that fit this style of analysis, we encourage users to make use of the core functionalities to extend and use this package in ways the wrapper does not capture. Several examples of this wrapper, \texttt{eventstudy}, are provided below for convenience:
<<>>=
## Event study without adjustment
@@ -426,6 +310,7 @@
@
+The analysis of events has a wide array of tools and procedures available in the econometric literature. The objective of this package is to start with a core group of functionalities that deliver a platform for event studies, following which we intend to extend and facilitate more inference procedures for use from this package.
\section{Computational details}
The package code is written in R. It has dependencies to
@@ -436,8 +321,14 @@
2013}). R itself as well as these packages can be obtained from
\href{http://CRAN.R-project.org/}{CRAN}.
+
+
+
%\section{Acknowledgments}
+
+
+
% \newpage
\bibliographystyle{jss} \bibliography{es}
From noreply at rforge.rproject.org Tue Oct 29 04:32:28 2013
From: noreply at rforge.rproject.org (noreply at rforge.rproject.org)
Date: Tue, 29 Oct 2013 04:32:28 +0100 (CET)
Subject: [Eventstudiescommits] r149  pkg/vignettes
MessageID: <20131029033228.25A3C185FBE@rforge.rproject.org>
Author: chiraganand
Date: 20131029 04:32:26 +0100 (Tue, 29 Oct 2013)
New Revision: 149
Modified:
pkg/vignettes/ees.Rnw
Log:
Fixed a typo in the vignette.
Modified: pkg/vignettes/ees.Rnw
===================================================================
 pkg/vignettes/ees.Rnw 20131028 19:16:50 UTC (rev 148)
+++ pkg/vignettes/ees.Rnw 20131029 03:32:26 UTC (rev 149)
@@ -65,7 +65,7 @@
\subsection{Summary statistics}
In \textt{output\$data.summary}, we present the minimum, maximum, interquartile range (IQR), standard deviation (sd), and the distribution at 5\%, 25\%, Median, Mean, 75\%, and 95\%. This analysis for the S\&P 500 is identical to the results presented in Table 1 of Patnaik, Shah and Singh (2013).
+In \texttt{output\$data.summary}, we present the minimum, maximum, interquartile range (IQR), standard deviation (sd), and the distribution at 5\%, 25\%, Median, Mean, 75\%, and 95\%. This analysis for the S\&P 500 is identical to the results presented in Table 1 of Patnaik, Shah and Singh (2013).
<<>>==
output$data.summary
From ajayshah at mayin.org Tue Oct 29 04:33:41 2013
From: ajayshah at mayin.org (Ajay Shah)
Date: Tue, 29 Oct 2013 09:03:41 +0530
Subject: [Eventstudiescommits] r149  pkg/vignettes
InReplyTo: <20131029033228.25A3C185FBE@rforge.rproject.org>
References: <20131029033228.25A3C185FBE@rforge.rproject.org>
MessageID:
On 29 Oc95\%. This analysis for the S\&P 500 is identical to the results
presented in Table 1 of Patnaik, Shah and Singh (2013).
> +In \texttt{output\$data.summary}, we present the minimum, maximum,
> interquartile range (IQR), standard deviation (sd), and the distribution
> at 5\%, 25\%, Median, Mean, 75\%, and 95\%. This analysis for the S\&P 500
> is identical to the results presented in Table 1 of Patnaik, Shah and Singh
> (2013).
No biblatex or bibtex engine?
Vimal: Have you switched to biblatex yet? It rulez.

Ajay Shah
ajayshah at mayin.org
http://www.mayin.org/ajayshah
http://ajayshahblog.blogspot.com
From vimsaa at gmail.com Tue Oct 29 10:10:55 2013
From: vimsaa at gmail.com (Vimal Balasubramaniam)
Date: Tue, 29 Oct 2013 09:10:55 +0000
Subject: [Eventstudiescommits] r149  pkg/vignettes
InReplyTo:
References: <20131029033228.25A3C185FBE@rforge.rproject.org>
MessageID:
On 29 October 2013 03:33, Ajay Shah wrote:
> No biblatex or bibtex engine?
>
> Vimal: Have you switched to biblatex yet? It rulez.
>
I should!

Vimal Balasubramaniam
+44 755 750 4880
+91 981 829 8975
From noreply at rforge.rproject.org Tue Oct 29 10:26:56 2013
From: noreply at rforge.rproject.org (noreply at rforge.rproject.org)
Date: Tue, 29 Oct 2013 10:26:56 +0100 (CET)
Subject: [Eventstudiescommits] r150  pkg/vignettes
MessageID: <20131029092656.F200F18549F@rforge.rproject.org>
Author: vikram
Date: 20131029 10:26:56 +0100 (Tue, 29 Oct 2013)
New Revision: 150
Modified:
pkg/vignettes/es.bib
pkg/vignettes/eventstudies.Rnw
Log:
Added bib file
Modified: pkg/vignettes/es.bib
===================================================================
 pkg/vignettes/es.bib 20131029 03:32:26 UTC (rev 149)
+++ pkg/vignettes/es.bib 20131029 09:26:56 UTC (rev 150)
@@ -15,11 +15,33 @@
volume = 51,
pages = {207234}}
-@Article{,
-  author = {PatnaikShahSingh2013},
+@Article{PSS2013,
+  author = {Patnaik, Ila and Shah, Ajay and Singh, Nirvikar},
   title = {Foreign Investors Under Stress: Evidence from India },
-  journal = {IMF Working Paper},
-  year = 2013
+  journal = {International Finance},
+  year = 2013,
+  volume = 16,
+  pages = {213--244}
 }
+@article{davison1986efficient,
+  title = {Efficient bootstrap simulation},
+  author = {Davison, A. C. and Hinkley, D. V. and Schechtman, E.},
+  journal = {Biometrika},
+  volume = {73},
+  number = {3},
+  pages = {555--566},
+  year = {1986},
+  publisher = {Biometrika Trust}
+}
+@article{brown1985using,
+  title = {Using daily stock returns: The case of event studies},
+  author = {Brown, Stephen J. and Warner, Jerold B.},
+  journal = {Journal of Financial Economics},
+  volume = {14},
+  number = {1},
+  pages = {3--31},
+  year = {1985},
+  publisher = {Elsevier}
+}
Modified: pkg/vignettes/eventstudies.Rnw
===================================================================
 pkg/vignettes/eventstudies.Rnw 20131029 03:32:26 UTC (rev 149)
+++ pkg/vignettes/eventstudies.Rnw 20131029 09:26:56 UTC (rev 150)
@@ -109,7 +109,13 @@
\item \texttt{remap.cumsum}: conversion of returns to cumulative returns. The input for this function is the timeseries object in eventtime that is obtained as the output from \texttt{phys2eventtime}.
\end{itemize}
The function \texttt{phys2eventtime} is generic and can handle objects of any time frequency, including intraday high frequency data. While \texttt{remap.cumsum} is sufficiently general to be used on any time series object for which we would like to obtain cumulative values, in this context, the attempt is to cumulate idiosyncratic returns to obtain a clear identification of the magnitude and size of the impact of an event. % TODO: Cite Brown and Warner (1983) here.
+The function \texttt{phys2eventtime} is generic and can handle objects
+of any time frequency, including intraday high frequency data. While
+\texttt{remap.cumsum} is sufficiently general to be used on any time
+series object for which we would like to obtain cumulative values, in
+this context, the attempt is to cumulate idiosyncratic returns to
+obtain a clear identification of the magnitude and size of the impact
+of an event \citep{brown1985using}.
At this point of analysis, we hold one important data object organised in event-time, where each column of this object corresponds to the event on the outcome unit, with values before and after the event organised as $-T,-(T-1),...,-3,-2,-1,0,1,2,3,...,T-1,T$. The package, once again, is very general and allows users to decide on the number of time units before and after the event that must be used for statistical inference.
@@ -228,7 +234,7 @@
While the package is sufficiently generalised to undertake a wide array of inference procedures, at present it contains only two inference procedures: 1/ The bootstrap and 2/ Wilcoxon Rank test. We look at both in turn below:
\subsubsection{Bootstrap inference}
We hold an event time object that contains several cross-sectional observations for a single definition of an event: The stock split. At each event time, i.e., $-T,-(T-1),...,0,...,(T-1),T$, we hold observations for 30 stocks. At this point, without any assumption on the distribution of these cross-sectional returns, we can generate the sampling distribution for the location estimator (mean in this case) using nonparametric inference procedures. The bootstrap is our primary function in the suite of inference procedures under construction.\footnote{Detailed explanation of the methodology is presented in \citep{PatnaikShahSingh2013}. This specific approach is based on \citet{davison1986efficient}.}
+We hold an event time object that contains several cross-sectional observations for a single definition of an event: The stock split. At each event time, i.e., $-T,-(T-1),...,0,...,(T-1),T$, we hold observations for 30 stocks. At this point, without any assumption on the distribution of these cross-sectional returns, we can generate the sampling distribution for the location estimator (mean in this case) using nonparametric inference procedures. The bootstrap is our primary function in the suite of inference procedures under construction.\footnote{Detailed explanation of the methodology is presented in \citep{PSS2013}. This specific approach is based on \citet{davison1986efficient}.}
\textit{inference.bootstrap} performs the bootstrap to generate distribution of $\overline{CR}$. The bootstrap generates confidence interval at 2.5 percent and 97.5 percent for the estimate.
From ajayshah at mayin.org Tue Oct 29 10:27:53 2013
From: ajayshah at mayin.org (Ajay Shah)
Date: Tue, 29 Oct 2013 14:57:53 +0530
Subject: [Eventstudiescommits] r150  pkg/vignettes
InReplyTo: <20131029092656.F200F18549F@rforge.rproject.org>
References: <20131029092656.F200F18549F@rforge.rproject.org>
MessageID:
> -@Article{,
> -  author = {PatnaikShahSingh2013},
> +@Article{PSS2013,
> +  author = {Patnaik, Ila and Shah, Ajay and Singh, Nirvikar},
>    title = {Foreign Investors Under Stress: Evidence from India },
> -  journal = {IMF Working Paper},
> -  year = 2013
> +  journal = {International Finance},
> +  year = 2013,
> +  volume = 16,
> +  pages = {213--244}
> }
Is wrong.

Ajay Shah
ajayshah at mayin.org
http://www.mayin.org/ajayshah
http://ajayshahblog.blogspot.com
From noreply at rforge.rproject.org Tue Oct 29 11:09:24 2013
From: noreply at rforge.rproject.org (noreply at rforge.rproject.org)
Date: Tue, 29 Oct 2013 11:09:24 +0100 (CET)
Subject: [Eventstudiescommits] r151  pkg/vignettes
MessageID: <20131029100924.7B71B185F46@rforge.rproject.org>
Author: vikram
Date: 20131029 11:09:24 +0100 (Tue, 29 Oct 2013)
New Revision: 151
Modified:
pkg/vignettes/es.bib
pkg/vignettes/eventstudies.Rnw
Log:
Added volume number to the citation
Modified: pkg/vignettes/es.bib
===================================================================
 pkg/vignettes/es.bib 20131029 09:26:56 UTC (rev 150)
+++ pkg/vignettes/es.bib 20131029 10:09:24 UTC (rev 151)
@@ -15,12 +15,13 @@
volume = 51,
pages = {207234}}
-@Article{PSS2013,
+@Article{PatnaikShahSingh2013,
author = {Patnaik, Ila and Shah, Ajay and Singh, Nirvikar},
title = {Foreign Investors Under Stress: Evidence from India },
journal = {International Finance},
year = 2013,
volume = 16,
+number= 2,
pages = {213--244}
}
Modified: pkg/vignettes/eventstudies.Rnw
===================================================================
 pkg/vignettes/eventstudies.Rnw 20131029 09:26:56 UTC (rev 150)
+++ pkg/vignettes/eventstudies.Rnw 20131029 10:09:24 UTC (rev 151)
@@ -234,7 +234,7 @@
While the package is sufficiently generalised to undertake a wide array of inference procedures, at present it contains only two inference procedures: 1/ The bootstrap and 2/ Wilcoxon Rank test. We look at both in turn below:
\subsubsection{Bootstrap inference}
We hold an event time object that contains several cross-sectional observations for a single definition of an event: The stock split. At each event time, i.e., $-T,-(T-1),...,0,...,(T-1),T$, we hold observations for 30 stocks. At this point, without any assumption on the distribution of these cross-sectional returns, we can generate the sampling distribution for the location estimator (mean in this case) using nonparametric inference procedures. The bootstrap is our primary function in the suite of inference procedures under construction.\footnote{Detailed explanation of the methodology is presented in \citep{PSS2013}. This specific approach is based on \citet{davison1986efficient}.}
+We hold an event time object that contains several cross-sectional observations for a single definition of an event: The stock split. At each event time, i.e., $-T,-(T-1),...,0,...,(T-1),T$, we hold observations for 30 stocks. At this point, without any assumption on the distribution of these cross-sectional returns, we can generate the sampling distribution for the location estimator (mean in this case) using nonparametric inference procedures. The bootstrap is our primary function in the suite of inference procedures under construction.\footnote{Detailed explanation of the methodology is presented in \citep{PatnaikShahSingh2013}. This specific approach is based on \citet{davison1986efficient}.}
\textit{inference.bootstrap} performs the bootstrap to generate distribution of $\overline{CR}$. The bootstrap generates confidence interval at 2.5 percent and 97.5 percent for the estimate.
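The resampling scheme this hunk describes can be sketched in a few lines of base R. This is an illustrative sketch only, not the package's \textit{inference.bootstrap} implementation; the matrix \texttt{es.w} and every name below are invented for the example.

```r
## Illustrative sketch: bootstrap the mean cumulative return at each
## event time. 'es.w' stands in for the event-time object -- rows are
## event times, columns are the 30 stocks. All names here are made up.
set.seed(42)
es.w <- matrix(rnorm(21 * 30), nrow = 21, ncol = 30)

boot.ci <- function(x, n.boot = 1000) {
  ## resample the cross-section with replacement, recompute the mean
  means <- replicate(n.boot, mean(sample(x, replace = TRUE)))
  quantile(means, probs = c(0.025, 0.975))
}

## one 2.5%/97.5% interval per event time, as the text describes
ci <- t(apply(es.w, 1, boot.ci))
```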
From noreply at r-forge.r-project.org Tue Oct 29 11:29:55 2013
From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org)
Date: Tue, 29 Oct 2013 11:29:55 +0100 (CET)
Subject: [Eventstudies-commits] r152 - pkg/vignettes
Message-ID: <20131029102955.AD78A18600D@r-forge.r-project.org>
Author: vikram
Date: 2013-10-29 11:29:55 +0100 (Tue, 29 Oct 2013)
New Revision: 152
Added:
pkg/vignettes/ees.bib
Modified:
pkg/vignettes/ees.Rnw
pkg/vignettes/es.bib
pkg/vignettes/eventstudies.Rnw
Log:
Added bib entry for Extreme event study (ees) analysis and some minor correction
Modified: pkg/vignettes/ees.Rnw
===================================================================
--- pkg/vignettes/ees.Rnw 2013-10-29 10:09:24 UTC (rev 151)
+++ pkg/vignettes/ees.Rnw 2013-10-29 10:29:55 UTC (rev 152)
@@ -17,10 +17,10 @@
\maketitle
\begin{abstract}
-One specific application of the eventstudies package is Patnaik, Shah and Singh (2013) % TODO: Bibliography please.
+One specific application of the eventstudies package is \citet{PatnaikShahSingh2013}.
in this document. The function \texttt{ees} is a wrapper available in the package for
users to undertake similar ``extreme-events'' analysis.
-We replicate the published work of Patnaik, Shah and Singh (2013) % TODO: bibtex please
+We replicate the published work of \citet{PatnaikShahSingh2013}
and explore this wrapper in detail in this document.
\end{abstract}
@@ -32,7 +32,7 @@
There are several concerns with an extreme-event analysis. Firstly, what happens when multiple tail events (``Clustered events'') occur within one another? We facilitate this analysis with summary statistics on the distribution and run length of events, quantile values to determine ``tail events'', and yearly distribution of the extreme-events. Secondly, do results change when we use ``clustered events'' and ``unclustered events'' separately, or, together in the same analysis? This wrapper also facilitates such sensitivity analysis in the study of extreme-events.
-In the next few sections, we replicate one subsection of results from Patnaik, Shah and Singh (2013) % TODO: bibtex citation.
+In the next few sections, we replicate one subsection of results from \citet{PatnaikShahSingh2013}
that studies whether extreme events on the S\&P 500 affect returns on the domestic Indian stock market measured by the Nifty Index. A detailed mathematical overview of the methodology is also available in the paper.
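A minimal sketch of the quantile-based tail-event selection that the \texttt{ees} wrapper automates might look as follows. The 5 percent cut-off and every object name here are assumptions for illustration; \texttt{ees} itself additionally handles clustered events, run lengths, and yearly distributions as described above.

```r
library(zoo)

## Hypothetical returns series; ees() would be given real data.
returns <- zoo(rnorm(500), order.by = as.Date("2010-01-01") + 0:499)

## Mark observations beyond the 5%/95% quantiles as tail events.
prob <- 0.05
lower.cut <- quantile(coredata(returns), probs = prob)
upper.cut <- quantile(coredata(returns), probs = 1 - prob)
lower.tail.dates <- index(returns)[coredata(returns) <= lower.cut]
upper.tail.dates <- index(returns)[coredata(returns) >= upper.cut]
```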
@@ -83,6 +83,7 @@
The overall dataset looks as follows:
<<>>=
+head(output$lower.tail$data)
str(output$lower.tail$data)
@
@@ -150,5 +151,6 @@
2013}). R itself as well as these packages can be obtained from \href{http://CRAN.R-project.org/}{CRAN}.
%\section{Acknowledgments}
+\bibliographystyle{jss} \bibliography{ees}
\end{document}
Added: pkg/vignettes/ees.bib
===================================================================
--- pkg/vignettes/ees.bib (rev 0)
+++ pkg/vignettes/ees.bib 2013-10-29 10:29:55 UTC (rev 152)
@@ -0,0 +1,10 @@
+@Article{PatnaikShahSingh2013,
+ author = {Patnaik, Ila and Shah, Ajay and Singh, Nirvikar},
+ title = {Foreign Investors Under Stress: Evidence from India },
+ journal = {International Finance},
+ year = 2013,
+volume = 16,
+number= 2,
+pages = {213--244}
+}
+
Modified: pkg/vignettes/es.bib
===================================================================
--- pkg/vignettes/es.bib 2013-10-29 10:09:24 UTC (rev 151)
+++ pkg/vignettes/es.bib 2013-10-29 10:29:55 UTC (rev 152)
@@ -25,9 +25,9 @@
pages = {213--244}
}
-@article{dvison1986efficient,
+@article{davison1986efficient,
title={Efficient bootstrap simulation},
- author={DVISON, AC and Hinkley, DV and Schechtman, E},
+ author={Davinson, AC and Hinkley, DV and Schechtman, E},
journal={Biometrika},
volume={73},
number={3},
Modified: pkg/vignettes/eventstudies.Rnw
===================================================================
--- pkg/vignettes/eventstudies.Rnw 2013-10-29 10:09:24 UTC (rev 151)
+++ pkg/vignettes/eventstudies.Rnw 2013-10-29 10:29:55 UTC (rev 152)
@@ -234,7 +234,7 @@
While the package is sufficiently generalised to undertake a wide array of inference procedures, at present it contains only two inference procedures: 1/ The bootstrap and 2/ Wilcoxon Rank test. We look at both in turn below:
\subsubsection{Bootstrap inference}
-We hold an event time object that contains several cross-sectional observations for a single definition of an event: The stock split. At each event time, i.e., $-T,-(T-1),...,0,...,(T-1),T$, we hold observations for 30 stocks. At this point, without any assumption on the distribution of these cross sectional returns, we can generate the sampling distribution for the location estimator (mean in this case) using nonparametric inference procedures. The bootstrap is our primary function in the suite of inference procedures under construction.\footnote{Detaild explanation of the methodology is presented in \citep{PatnaikShahSingh2013}. This specific approach is based on \citet{davison1986efficient}.}
+We hold an event time object that contains several cross-sectional observations for a single definition of an event: The stock split. At each event time, i.e., $-T,-(T-1),...,0,...,(T-1),T$, we hold observations for 30 stocks. At this point, without any assumption on the distribution of these cross sectional returns, we can generate the sampling distribution for the location estimator (mean in this case) using nonparametric inference procedures. The bootstrap is our primary function in the suite of inference procedures under construction.\footnote{Detaild explanation of the methodology is presented in \citet{PatnaikShahSingh2013}. This specific approach is based on \citet{davison1986efficient}.}
\textit{inference.bootstrap} performs the bootstrap to generate distribution of $\overline{CR}$. The bootstrap generates confidence interval at 2.5 percent and 97.5 percent for the estimate.
From noreply at r-forge.r-project.org Tue Oct 29 14:34:37 2013
From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org)
Date: Tue, 29 Oct 2013 14:34:37 +0100 (CET)
Subject: [Eventstudies-commits] r153 - in pkg: R data inst/tests man
Message-ID: <20131029133437.7C543185ADD@r-forge.r-project.org>
Author: chiraganand
Date: 2013-10-29 14:34:37 +0100 (Tue, 29 Oct 2013)
New Revision: 153
Modified:
pkg/R/ees.R
pkg/R/phys2eventtime.R
pkg/data/SplitDates.rda
pkg/inst/tests/test_eventstudy.R
pkg/inst/tests/test_inr_inference.R
pkg/man/SplitDates.Rd
pkg/man/eventstudy.Rd
pkg/man/phys2eventtime.Rd
Log:
Changed phys2eventtime eventslist argument colnames, modified all the related files, tests are passing.
Modified: pkg/R/ees.R
===================================================================
--- pkg/R/ees.R 2013-10-29 10:29:55 UTC (rev 152)
+++ pkg/R/ees.R 2013-10-29 13:34:37 UTC (rev 153)
@@ -3,8 +3,7 @@
############################
# Identifying extreme events
############################
-# libraries required
-library(zoo)
+
#
# INPUT:
# 'input' : Data series for which extreme events are
@@ -811,7 +810,7 @@
# using eventstudy package
#
corecomp <- function(z,dlist,seriesname,width) {
- events <- data.frame(unit=rep(seriesname, length(dlist)), when=dlist)
+ events <- data.frame(outcome.unit=rep(seriesname, length(dlist)), event.when=dlist)
es.results <- phys2eventtime(z, events, width=0)
es.w <- window(es.results$z.e, start=-width, end=+width)
# Replacing NA's with zeroes
Modified: pkg/R/phys2eventtime.R
===================================================================
--- pkg/R/phys2eventtime.R 2013-10-29 10:29:55 UTC (rev 152)
+++ pkg/R/phys2eventtime.R 2013-10-29 13:34:37 UTC (rev 153)
@@ -1,11 +1,9 @@
-library(zoo)
-
# Upon input
# z is a zoo object containing input data. E.g. this could be all the
# prices of a bunch of stocks. The column name is the unit name.
# events is a data.frame containing 2 columns. The first column
-# ("unit") is the name of the unit. The second column is the date/time
-# ("when") when the event happened.
+# ("outcome.unit") is the name of the unit. The second column is the date/time
+# ("event.when") when the event happened.
# For each event, the outcome can be:
# unitmissing : a unit named in events isn't in z
# wrongspan : the event date isn't placed within the span of data for the unit
@@ -13,10 +11,10 @@
# success : all is well.
# A vector of these outcomes is returned.
phys2eventtime <- function(z, events, width=10) {
- # Just in case events$unit has been sent in as a factor --
- events$unit <- as.character(events$unit)
- if(is.factor(events$when)) stop("Sorry you provided a factor as an index")
- # Given a zoo timeseries z, and an event date "when",
+ # Just in case events$outcome.unit has been sent in as a factor --
+ events$outcome.unit <- as.character(events$outcome.unit)
+ if(is.factor(events$event.when)) stop("Sorry you provided a factor as an index")
+ # Given a zoo timeseries z, and an event date "event.when",
# try to shift this vector into event time, where the event date
# becomes 0 and all other dates shift correspondingly.
# If this can't be done, then send back NULL with an error code.
Modified: pkg/data/SplitDates.rda
===================================================================
(Binary files differ)
Modified: pkg/inst/tests/test_eventstudy.R
===================================================================
--- pkg/inst/tests/test_eventstudy.R 2013-10-29 10:29:55 UTC (rev 152)
+++ pkg/inst/tests/test_eventstudy.R 2013-10-29 13:34:37 UTC (rev 153)
@@ -14,12 +14,12 @@
12426, 12429, 12430, 12431, 12432), class = "Date"),
class = "zoo")
# An example events list
-eventslist <- data.frame(unit=c("ITC","Reliance","Infosys",
+eventslist <- data.frame(outcome.unit=c("ITC","Reliance","Infosys",
"ITC","Reliance","Junk"),
- when=as.Date(c(
+ event.when=as.Date(c(
"2004-01-02", "2004-01-08", "2004-01-14",
"2005-01-15", "2004-01-01", "2005-01-01")))
-eventslist$unit <- as.character(eventslist$unit)
+eventslist$outcome.unit <- as.character(eventslist$outcome.unit)
# What we expect if we don't worry about width --
rawres <- structure(list(z.e = structure(c(NA, NA, NA, NA, NA, NA,
Modified: pkg/inst/tests/test_inr_inference.R
===================================================================
--- pkg/inst/tests/test_inr_inference.R 2013-10-29 10:29:55 UTC (rev 152)
+++ pkg/inst/tests/test_inr_inference.R 2013-10-29 13:34:37 UTC (rev 153)
@@ -7,8 +7,8 @@
inr_returns <- diff(log(inr))[-1]
-eventslist <- data.frame(unit=rep("inr",10),
- when=as.Date(c(
+eventslist <- data.frame(outcome.unit=rep("inr",10),
+ event.when=as.Date(c(
"2010-04-20","2010-07-02","2010-07-27",
"2010-09-16","2010-11-02","2011-01-25",
"2011-03-17","2011-05-03","2011-06-16",
Modified: pkg/man/SplitDates.Rd
===================================================================
--- pkg/man/SplitDates.Rd 2013-10-29 10:29:55 UTC (rev 152)
+++ pkg/man/SplitDates.Rd 2013-10-29 13:34:37 UTC (rev 153)
@@ -7,7 +7,7 @@
\title{A set of events to perform eventstudy analysis.}
\description{
-The data contains stock split event dates for the index constituents of the Bombay Stock Exchange index (SENSEX). The data format follows the required format in the function \code{phys2eventtime}, with two columns 'unit' (firm name) and 'when' (stock split date).}
+The data contains stock split event dates for the index constituents of the Bombay Stock Exchange index (SENSEX). The data format follows the required format in the function \code{phys2eventtime}, with two columns 'outcome.unit' (firm name) and 'event.when' (stock split date).}
\usage{data(SplitDates)}
@@ -16,4 +16,4 @@
\examples{
data(SplitDates)
}
-\keyword{dataset}
\ No newline at end of file
+\keyword{dataset}
Modified: pkg/man/eventstudy.Rd
===================================================================
--- pkg/man/eventstudy.Rd 2013-10-29 10:29:55 UTC (rev 152)
+++ pkg/man/eventstudy.Rd 2013-10-29 13:34:37 UTC (rev 153)
@@ -23,7 +23,7 @@
\arguments{
\item{firm.returns}{Data on which event study is to be performed}
- \item{eventList}{A data frame with event dates. It has two columns 'unit' and 'when'. The first column 'unit' consists of column names of the event stock and 'when' is the respective event date}
+ \item{eventList}{A data frame with event dates. It has two columns 'outcome.unit' and 'event.when'. The first column 'outcome.unit' consists of column names of the event stock and 'event.when' is the respective event date}
\item{width}{It studies the performance of observations before and after the event}
\item{type}{This argument gives an option to use different market model adjustment like "marketResidual", "excessReturn", "AMM" and "None"}
\item{to.remap}{If TRUE then remap the event frame is done}
Modified: pkg/man/phys2eventtime.Rd
===================================================================
 pkg/man/phys2eventtime.Rd 20131029 10:29:55 UTC (rev 152)
+++ pkg/man/phys2eventtime.Rd 20131029 13:34:37 UTC (rev 153)
@@ 16,9 +16,9 @@
\arguments{
\item{z}{Time series data for which event frame is to be generated.}
- \item{events}{It is a data frame with two columns: unit and when. unit
+ \item{events}{It is a data frame with two columns: outcome.unit and event.when. outcome.unit
has column name of which response is to measured on the event date,
- while when has the event date. See details.}
+ while event.when has the event date. See details.}
\item{width}{Width corresponds to the number of days on each side of the event date. For a given width, if there is any NA in the event window then the last observation is carried forward.}
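Under the renamed interface this commit documents, a call to \code{phys2eventtime} might look like the sketch below. The price data, dates, and unit names are invented for illustration; only the \code{outcome.unit}/\code{event.when} column names come from the commit itself.

```r
library(zoo)
library(eventstudies)

## Two hypothetical price series; column names are the unit names.
prices <- zoo(matrix(100 + cumsum(rnorm(200)), ncol = 2,
                     dimnames = list(NULL, c("ITC", "Reliance"))),
              order.by = as.Date("2004-01-01") + 0:99)

## Events now use 'outcome.unit' and 'event.when' as column names.
events <- data.frame(outcome.unit = c("ITC", "Reliance"),
                     event.when = as.Date(c("2004-02-10", "2004-03-05")))

es <- phys2eventtime(prices, events, width = 10)
es$outcomes  # per-event codes such as "success" or "wrongspan"
```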
From noreply at r-forge.r-project.org Wed Oct 30 04:22:47 2013
From: noreply at r-forge.r-project.org (noreply at r-forge.r-project.org)
Date: Wed, 30 Oct 2013 04:22:47 +0100 (CET)
Subject: [Eventstudies-commits] r154 - pkg/vignettes
Message-ID: <20131030032247.9D787184F7B@r-forge.r-project.org>
Author: vikram
Date: 2013-10-30 04:22:45 +0100 (Wed, 30 Oct 2013)
New Revision: 154
Modified:
pkg/vignettes/eventstudies.Rnw
Log:
Minor correction in spelling
Modified: pkg/vignettes/eventstudies.Rnw
===================================================================
--- pkg/vignettes/eventstudies.Rnw 2013-10-29 13:34:37 UTC (rev 153)
+++ pkg/vignettes/eventstudies.Rnw 2013-10-30 03:22:45 UTC (rev 154)
@@ -234,7 +234,7 @@
While the package is sufficiently generalised to undertake a wide array of inference procedures, at present it contains only two inference procedures: 1/ The bootstrap and 2/ Wilcoxon Rank test. We look at both in turn below:
\subsubsection{Bootstrap inference}
-We hold an event time object that contains several cross-sectional observations for a single definition of an event: The stock split. At each event time, i.e., $-T,-(T-1),...,0,...,(T-1),T$, we hold observations for 30 stocks. At this point, without any assumption on the distribution of these cross sectional returns, we can generate the sampling distribution for the location estimator (mean in this case) using nonparametric inference procedures. The bootstrap is our primary function in the suite of inference procedures under construction.\footnote{Detaild explanation of the methodology is presented in \citet{PatnaikShahSingh2013}. This specific approach is based on \citet{davison1986efficient}.}
+We hold an event time object that contains several cross-sectional observations for a single definition of an event: The stock split. At each event time, i.e., $-T,-(T-1),...,0,...,(T-1),T$, we hold observations for 30 stocks. At this point, without any assumption on the distribution of these cross sectional returns, we can generate the sampling distribution for the location estimator (mean in this case) using nonparametric inference procedures. The bootstrap is our primary function in the suite of inference procedures under construction.\footnote{Detailed explanation of the methodology is presented in \citet{PatnaikShahSingh2013}. This specific approach is based on \citet{davison1986efficient}.}
\textit{inference.bootstrap} performs the bootstrap to generate distribution of $\overline{CR}$. The bootstrap generates confidence interval at 2.5 percent and 97.5 percent for the estimate.