[Returnanalytics-commits] r2956 - in pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm: R man vignettes
noreply at r-forge.r-project.org
Sat Aug 31 23:27:28 CEST 2013
Author: shubhanm
Date: 2013-08-31 23:27:27 +0200 (Sat, 31 Aug 2013)
New Revision: 2956
Added:
pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/ConditionalDrawdown.Rnw
pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/ConditionalDrawdown.pdf
pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/GLMReturn.Rnw
pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/GLMReturn.pdf
pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/GLMSmoothIndex.Rnw
pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/GLMSmoothIndex.pdf
pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/ShaneAcarMaxLoss.Rnw
pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/ShaneAcarMaxLoss.pdf
Modified:
pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/AcarSim.R
pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/chart.AcarSim.R
pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/AcarSim.Rd
Log:
./ Clean vignettes
Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/AcarSim.R
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/AcarSim.R 2013-08-31 20:52:41 UTC (rev 2955)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/AcarSim.R 2013-08-31 21:27:27 UTC (rev 2956)
@@ -18,7 +18,7 @@
#' @keywords Maximum Loss Simulated Drawdown
#' @examples
#' library(PerformanceAnalytics)
-#' AcarSim()
+#' AcarSim(R)
#' @rdname AcarSim
#' @export
AcarSim <-
@@ -28,7 +28,7 @@
data(edhec)
- R = checkData(edhec, method="xts")
+ R = checkData(R, method="xts")
# Get dimensions and labels
# simulated parameters using edhec data
mu=mean(Return.annualized(R))
@@ -40,7 +40,7 @@
T= 36
j=1
dt=1/T
-nsim=1;
+nsim=30;
thres=4;
r=matrix(0,nsim,T+1)
monthly = 0
Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/chart.AcarSim.R
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/chart.AcarSim.R 2013-08-31 20:52:41 UTC (rev 2955)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/chart.AcarSim.R 2013-08-31 21:27:27 UTC (rev 2956)
@@ -39,7 +39,7 @@
T= 36
j=1
dt=1/T
- nsim=6000;
+ nsim=6;
thres=4;
r=matrix(0,nsim,T+1)
monthly = 0
Modified: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/AcarSim.Rd
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/AcarSim.Rd 2013-08-31 20:52:41 UTC (rev 2955)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/AcarSim.Rd 2013-08-31 21:27:27 UTC (rev 2956)
@@ -31,7 +31,7 @@
}
\examples{
library(PerformanceAnalytics)
-AcarSim()
+AcarSim(R)
}
\author{
Shubhankit Mohan
Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/ConditionalDrawdown.Rnw
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/ConditionalDrawdown.Rnw (rev 0)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/ConditionalDrawdown.Rnw 2013-08-31 21:27:27 UTC (rev 2956)
@@ -0,0 +1,85 @@
+%% no need for \DeclareGraphicsExtensions{.pdf,.eps}
+
+\documentclass[12pt,letterpaper,english]{article}
+\usepackage{times}
+\usepackage[T1]{fontenc}
+\IfFileExists{url.sty}{\usepackage{url}}
+ {\newcommand{\url}{\texttt}}
+
+\usepackage{babel}
+%\usepackage{noweb}
+\usepackage{Rd}
+
+\usepackage{Sweave}
+\SweaveOpts{engine=R,eps=FALSE}
+%\VignetteIndexEntry{Chekhlov Conditional Drawdown at Risk}
+%\VignetteDepends{PerformanceAnalytics}
+%\VignetteKeywords{returns, performance, risk, benchmark, portfolio}
+%\VignettePackage{PerformanceAnalytics}
+
+%\documentclass[a4paper]{article}
+%\usepackage[noae]{Sweave}
+%\usepackage{ucs}
+%\usepackage[utf8x]{inputenc}
+%\usepackage{amsmath, amsthm, latexsym}
+%\usepackage[top=3cm, bottom=3cm, left=2.5cm]{geometry}
+%\usepackage{graphicx}
+%\usepackage{graphicx, verbatim}
+%\usepackage{ucs}
+%\usepackage[utf8x]{inputenc}
+%\usepackage{amsmath, amsthm, latexsym}
+%\usepackage{graphicx}
+
+\title{Chekhlov Conditional Drawdown at Risk}
+\author{R Project for Statistical Computing}
+
+\begin{document}
+\SweaveOpts{concordance=TRUE}
+
+\maketitle
+
+
+\begin{abstract}
+A new one-parameter family of risk measures called Conditional Drawdown (CDD) has
+been proposed. These measures of risk are functionals of the portfolio drawdown (underwater) curve considered in active portfolio management. For a given value of the tolerance parameter $\hat{\alpha}$, in the case of a single sample path, the drawdown functional is defined as the mean of the worst (1 \(-\) $\hat{\alpha}$)100\% drawdowns. The CDD measure generalizes the notion of the drawdown functional to a multi-scenario case and can be considered as a generalization of a deviation measure to a dynamic case. The CDD measure includes the Maximal Drawdown and Average Drawdown as its limiting cases.
+\end{abstract}
+
+<<echo=FALSE >>=
+library(PerformanceAnalytics)
+data(edhec)
+@
+
+<<echo=FALSE,eval=TRUE,results=verbatim >>=
+source("C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/CDrawdown.R")
+@
+
+\section{Background}
+
+The model is focused on the concept of a drawdown measure which possesses all the properties of a deviation measure, generalizing deviation measures to a dynamic case. It covers risk profiling via the Mixed Conditional Drawdown (a generalization of CDD), optimization techniques for CDD computation (reduction to a linear programming (LP) problem), and portfolio optimization with a constraint on Mixed CDD.
+The model develops the concept of a drawdown measure by generalizing the notion
+of the CDD to the case of several sample paths for the portfolio's uncompounded rate
+of return.
+
+
+\section{Methodology}
+For a given sequence $\{\xi_k\}_{k=1}^{N}$, where $\xi$ is the vector of drawdowns observed over the sample, the conditional value-at-risk of the drawdown (CV@R) is formally defined as:
+\begin{equation}
+CV@R_{\alpha}(\xi)=\frac{\pi_{\xi}(\zeta(\alpha))-\alpha}{1-\alpha}\,\zeta(\alpha) + \frac{1}{(1-\alpha)N}\sum_{k :\, \xi_k > \zeta(\alpha)} \xi_k
+\end{equation}
+where $\pi_{\xi}$ is the empirical distribution function of the drawdowns and $\zeta(\alpha)$ is the corresponding $\alpha$-quantile (threshold).
+
+Note that the first term on the right-hand side of the equation appears because of the inequality ``greater than or equal to'' $\zeta(\hat{\alpha})$ in the tail count. If exactly (1 \(-\) $\hat{\alpha}$)100\% of the worst drawdowns can be counted precisely, the first term on the right-hand side of the equation disappears. The equation follows from the framework of the CVaR methodology.
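To make the definition concrete, consider a small worked example (the numbers are ours, purely illustrative, not from the paper):

```latex
% Illustrative example: N = 10 observed drawdowns, tolerance alpha = 0.8,
% so (1 - alpha)N = 2 worst drawdowns enter the tail. Suppose the two
% largest drawdowns are 0.30 and 0.20 and the empirical CDF is such that
% pi_xi(zeta(0.8)) = 0.8, so the first term vanishes:
\begin{equation*}
CV@R_{0.8}(\xi) = \frac{0.8 - 0.8}{1 - 0.8}\,\zeta(0.8)
                + \frac{0.30 + 0.20}{(1 - 0.8)\cdot 10} = 0.25
\end{equation*}
% i.e. the CDD at alpha = 0.8 is simply the average of the two worst drawdowns.
```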
+
+
+\section{Usage}
+
+In this example we use the edhec database to compute the conditional drawdown of hedge fund returns.
+
+<<>>=
+library(PerformanceAnalytics)
+data(edhec)
+CDrawdown(edhec)
+@
+
+
+
+\end{document}
\ No newline at end of file
Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/ConditionalDrawdown.pdf
===================================================================
(Binary files differ)
Property changes on: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/ConditionalDrawdown.pdf
___________________________________________________________________
Added: svn:mime-type
+ application/octet-stream
Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/GLMReturn.Rnw
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/GLMReturn.Rnw (rev 0)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/GLMReturn.Rnw 2013-08-31 21:27:27 UTC (rev 2956)
@@ -0,0 +1,135 @@
+%% no need for \DeclareGraphicsExtensions{.pdf,.eps}
+
+\documentclass[12pt,letterpaper,english]{article}
+\usepackage{times}
+\usepackage[T1]{fontenc}
+\IfFileExists{url.sty}{\usepackage{url}}
+ {\newcommand{\url}{\texttt}}
+
+\usepackage{babel}
+%\usepackage{noweb}
+\usepackage{Rd}
+
+\usepackage{Sweave}
+\SweaveOpts{engine=R,eps=FALSE}
+%\VignetteIndexEntry{Getmansky Lo Makarov Return Model}
+%\VignetteDepends{PerformanceAnalytics}
+%\VignetteKeywords{returns, performance, risk, benchmark, portfolio}
+%\VignettePackage{PerformanceAnalytics}
+
+%\documentclass[a4paper]{article}
+%\usepackage[noae]{Sweave}
+%\usepackage{ucs}
+%\usepackage[utf8x]{inputenc}
+%\usepackage{amsmath, amsthm, latexsym}
+%\usepackage[top=3cm, bottom=3cm, left=2.5cm]{geometry}
+%\usepackage{graphicx}
+%\usepackage{graphicx, verbatim}
+%\usepackage{ucs}
+%\usepackage[utf8x]{inputenc}
+%\usepackage{amsmath, amsthm, latexsym}
+%\usepackage{graphicx}
+
+\title{Getmansky Lo Makarov Return Model}
+\author{R Project for Statistical Computing}
+
+\begin{document}
+\SweaveOpts{concordance=TRUE}
+
+\maketitle
+
+
+\begin{abstract}
+The returns to hedge funds and other alternative investments are often highly serially correlated. In this paper, we explore several sources of such serial correlation and show that the most likely explanation is illiquidity exposure and smoothed returns. We propose an econometric model of return smoothing and develop estimators for the smoothing profile as well as a smoothing-adjusted Sharpe ratio.
+\end{abstract}
+
+<<echo=FALSE >>=
+library(PerformanceAnalytics)
+data(edhec)
+@
+
+<<echo=FALSE,eval=TRUE,results=verbatim >>=
+source('C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/Return.GLM.R')
+source('C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/na.skip.R')
+@
+
+\section{Methodology}
+Given a sample of historical returns \((R_1,R_2, . . .,R_T)\),the method assumes the fund manager smooths returns in the following manner:
+
+To quantify the impact of all of these possible sources of serial correlation, denote by \(R_t\),the true economic return of a hedge fund in period t; and let \(R_t\) satisfy the following linear single-factor model:
+
+\begin{equation}
+ R_t = \mu + \beta\,\delta_t + \xi_t
+\end{equation}
+
+where $\xi_t \sim N(0,1)$
+and Var[\(R_t\)] = $\sigma^2$.
+
+True returns represent the flow of information that would determine the equilibrium value of the fund's securities in a frictionless market. However, true economic returns are not observed. Instead, \(R_t^0\) denotes the reported or observed return in period t; and let
+%$Z = \sin(X)$. $\sqrt{X}$.
+
+%$\hat{\mu}$ = $\displaystyle\frac{22}{7}$
+%e^{2 \mu} = 1
+%\begin{equation}
+%\left(\sum_{t=1}^{T} R_t/T\right) = \hat{\mu} \\
+%\end{equation}
+\begin{equation}
+ R_t^0 = \theta _0R_{t} + \theta _1R_{t-1}+\theta _2R_{t-2} + \cdots + \theta _kR_{t-k}\\
+\end{equation}
+\begin{equation}
+\theta _j \in [0,1] \quad \mbox{where } j = 0,1, \cdots , k
+\end{equation}
+
+and
+%\left(\mu \right) = \sum_{t=1}^{T} \(Ri)/T\ \\
+\begin{equation}
+\theta _0 + \theta _1 + \theta _2 + \cdots + \theta _k = 1
+\end{equation}
+
+which is a weighted average of the fund's true returns over the most recent k + 1
+periods, including the current period.
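As a small numeric illustration of this weighted average (the figures are ours, not from the paper):

```latex
% Illustrative example: k = 1 lag with equal weights theta_0 = theta_1 = 1/2.
% If the true returns are R_{t-1} = 4% and R_t = -2%, the reported return is
\begin{equation*}
R_t^0 = \tfrac{1}{2}(-0.02) + \tfrac{1}{2}(0.04) = 0.01 ,
\end{equation*}
% so a 2% loss is reported as a 1% gain: smoothing damps observed volatility
% and induces positive serial correlation in the reported series R_t^0.
```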
+\section{Smoothing Profile Estimates}
+
+Using the methods outlined above, the paper estimates the smoothing model
+by a maximum likelihood procedure, programmed in Matlab using the Optimization Toolbox and replicated in Stata using its MA(k) estimation routine. Using the time series analysis and computational finance (\texttt{tseries}) library, we fit an ARMA model to a univariate time series by conditional least squares. For exact maximum likelihood estimation, \texttt{arima0} from package \texttt{stats} can be used.
+
+\section{Usage}
+
+In this example we use the edhec database to compute the true hedge fund returns.
+
+<<Graph10,echo=T,fig=T>>=
+library(PerformanceAnalytics)
+data(edhec)
+Returns = Return.GLM(edhec[,1])
+skewness(edhec[,1])
+skewness(Returns)
+# Right Shift of Returns Ditribution for a negative skewed distribution
+kurtosis(edhec[,1])
+kurtosis(Returns)
+# Reduction in "peakedness" around the mean
+layout(rbind(c(1, 2), c(3, 4)))
+ chart.Histogram(Returns, main = "Plain", methods = NULL)
+ chart.Histogram(Returns, main = "Density", breaks = 40,
+ methods = c("add.density", "add.normal"))
+ chart.Histogram(Returns, main = "Skew and Kurt",
+ methods = c("add.centered", "add.rug"))
+chart.Histogram(Returns, main = "Risk Measures",
+ methods = c("add.risk"))
+@
+
+The above figure shows the behaviour of the distribution tending to a normal IID distribution. For comparative purposes, one can observe the change in the characteristics of the returns as compared to the original.
+
+<<Graph1,echo=T,fig=T>>=
+library(PerformanceAnalytics)
+data(edhec)
+Returns = Return.GLM(edhec[,1])
+layout(rbind(c(1, 2), c(3, 4)))
+ chart.Histogram(edhec[,1], main = "Plain", methods = NULL)
+ chart.Histogram(edhec[,1], main = "Density", breaks = 40,
+ methods = c("add.density", "add.normal"))
+ chart.Histogram(edhec[,1], main = "Skew and Kurt",
+ methods = c("add.centered", "add.rug"))
+chart.Histogram(edhec[,1], main = "Risk Measures",
+ methods = c("add.risk"))
+@
+
+\end{document}
\ No newline at end of file
Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/GLMReturn.pdf
===================================================================
(Binary files differ)
Property changes on: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/GLMReturn.pdf
___________________________________________________________________
Added: svn:mime-type
+ application/octet-stream
Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/GLMSmoothIndex.Rnw
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/GLMSmoothIndex.Rnw (rev 0)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/GLMSmoothIndex.Rnw 2013-08-31 21:27:27 UTC (rev 2956)
@@ -0,0 +1,107 @@
+%% no need for \DeclareGraphicsExtensions{.pdf,.eps}
+
+\documentclass[12pt,letterpaper,english]{article}
+\usepackage{times}
+\usepackage[T1]{fontenc}
+\IfFileExists{url.sty}{\usepackage{url}}
+ {\newcommand{\url}{\texttt}}
+
+\usepackage{babel}
+%\usepackage{noweb}
+\usepackage{Rd}
+
+\usepackage{Sweave}
+\SweaveOpts{engine=R,eps=FALSE}
+%\VignetteIndexEntry{GLM Smoothing Index}
+%\VignetteDepends{PerformanceAnalytics}
+%\VignetteKeywords{returns, performance, risk, benchmark, portfolio}
+%\VignettePackage{PerformanceAnalytics}
+
+%\documentclass[a4paper]{article}
+%\usepackage[noae]{Sweave}
+%\usepackage{ucs}
+%\usepackage[utf8x]{inputenc}
+%\usepackage{amsmath, amsthm, latexsym}
+%\usepackage[top=3cm, bottom=3cm, left=2.5cm]{geometry}
+%\usepackage{graphicx}
+%\usepackage{graphicx, verbatim}
+%\usepackage{ucs}
+%\usepackage[utf8x]{inputenc}
+%\usepackage{amsmath, amsthm, latexsym}
+%\usepackage{graphicx}
+
+\title{GLM Smoothing Index}
+\author{R Project for Statistical Computing}
+
+\begin{document}
+\SweaveOpts{concordance=TRUE}
+
+\maketitle
+
+
+\begin{abstract}
+The returns to hedge funds and other alternative investments are often highly serially correlated. Getmansky, Lo and Makarov propose an econometric model of return smoothing and develop estimators for the smoothing profile. The magnitude of the impact is measured by the smoothing index, which is a measure of the concentration of weight in lagged terms.
+\end{abstract}
+
+<<echo=FALSE >>=
+library(PerformanceAnalytics)
+data(edhec)
+@
+
+<<echo=FALSE>>=
+source('C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/GLMSmoothIndex.R')
+@
+
+\section{Background}
+To quantify the impact of all of these possible sources of serial correlation, denote by \(R_t\),the true economic return of a hedge fund in period t; and let \(R_t\) satisfy the following linear single factor model:
+
+\begin{equation}
+ R_t = \mu + \beta\,\delta_t + \xi_t
+\end{equation}
+
+where $\xi_t \sim N(0,1)$
+and Var[\(R_t\)] = $\sigma^2$.
+
+True returns represent the flow of information that would determine the equilibrium value of the fund's securities in a frictionless market. However, true economic returns are not observed. Instead, \(R_t^0\) denotes the reported or observed return in period t; and let
+%$Z = \sin(X)$. $\sqrt{X}$.
+
+%$\hat{\mu}$ = $\displaystyle\frac{22}{7}$
+%e^{2 \mu} = 1
+%\begin{equation}
+%\left(\sum_{t=1}^{T} R_t/T\right) = \hat{\mu} \\
+%\end{equation}
+\begin{equation}
+ R_t^0 = \theta _0R_{t} + \theta _1R_{t-1}+\theta _2R_{t-2} + \cdots + \theta _kR_{t-k}\\
+\end{equation}
+\begin{equation}
+\theta _j \in [0,1] \quad \mbox{where } j = 0,1, \cdots , k
+\end{equation}
+
+and
+%\left(\mu \right) = \sum_{t=1}^{T} \(Ri)/T\ \\
+\begin{equation}
+\theta _0 + \theta _1 + \theta _2 + \cdots + \theta _k = 1
+\end{equation}
+
+which is a weighted average of the fund's true returns over the most recent k + 1
+periods, including the current period.
+
+\section{Smoothing Index}
+A useful summary statistic for measuring the concentration of weights is:
+\begin{equation}
+\xi = \sum_{j=0}^{k} \theta _j^2 \\
+\end{equation}
+
+This measure is well known in the industrial organization literature as the Herfindahl index, a measure of the concentration of firms in a given industry, where $\theta_j$ represents the market share of firm $j$. Because $\xi$ is confined to the unit interval, it is minimized when all the $\theta_j$'s are identical, which implies a value of $1/(k+1)$ for $\xi$, and is maximized when one coefficient is 1 and the rest are 0. In the context of smoothed returns, a lower value of $\xi$ implies more smoothing, and the upper bound of 1 implies no smoothing; hence we shall refer to $\xi$ as a ``\textbf{smoothing index}''.
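For instance, at the two extremes (an illustrative computation of ours):

```latex
% Illustrative example: k = 2 lags.
% Equal weights theta_0 = theta_1 = theta_2 = 1/3 (maximal smoothing):
\begin{equation*}
\xi = 3 \times \left(\tfrac{1}{3}\right)^2 = \tfrac{1}{3} = \frac{1}{k+1}
\end{equation*}
% Fully concentrated weights theta_0 = 1, theta_1 = theta_2 = 0 (no smoothing):
\begin{equation*}
\xi = 1^2 + 0^2 + 0^2 = 1
\end{equation*}
```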
+
+\section{Usage}
+
+In this example we use the edhec database to compute the smoothing index for hedge fund returns.
+<<>>=
+library(PerformanceAnalytics)
+data(edhec)
+GLMSmoothIndex(edhec)
+@
+
+
+\end{document}
\ No newline at end of file
Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/GLMSmoothIndex.pdf
===================================================================
(Binary files differ)
Property changes on: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/GLMSmoothIndex.pdf
___________________________________________________________________
Added: svn:mime-type
+ application/octet-stream
Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/ShaneAcarMaxLoss.Rnw
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/ShaneAcarMaxLoss.Rnw (rev 0)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/ShaneAcarMaxLoss.Rnw 2013-08-31 21:27:27 UTC (rev 2956)
@@ -0,0 +1,87 @@
+%% no need for \DeclareGraphicsExtensions{.pdf,.eps}
+
+\documentclass[12pt,letterpaper,english]{article}
+\usepackage{times}
+\usepackage[T1]{fontenc}
+\IfFileExists{url.sty}{\usepackage{url}}
+ {\newcommand{\url}{\texttt}}
+
+\usepackage{babel}
+%\usepackage{noweb}
+\usepackage{Rd}
+
+\usepackage{Sweave}
+\SweaveOpts{engine=R,eps=FALSE}
+%\VignetteIndexEntry{Maximum Loss and Maximum Drawdown in Financial Markets}
+%\VignetteDepends{PerformanceAnalytics}
+%\VignetteKeywords{returns, performance, risk, benchmark, portfolio}
+%\VignettePackage{PerformanceAnalytics}
+
+%\documentclass[a4paper]{article}
+%\usepackage[noae]{Sweave}
+%\usepackage{ucs}
+%\usepackage[utf8x]{inputenc}
+%\usepackage{amsmath, amsthm, latexsym}
+%\usepackage[top=3cm, bottom=3cm, left=2.5cm]{geometry}
+%\usepackage{graphicx}
+%\usepackage{graphicx, verbatim}
+%\usepackage{ucs}
+%\usepackage[utf8x]{inputenc}
+%\usepackage{amsmath, amsthm, latexsym}
+%\usepackage{graphicx}
+
+\title{Maximum Loss and Maximum Drawdown in Financial Markets}
+\author{R Project for Statistical Computing}
+
+\begin{document}
+\SweaveOpts{concordance=TRUE}
+
+\maketitle
+
+
+\begin{abstract}
+The main concern of this paper is the study of alternative risk measures: namely maximum loss and
+maximum drawdown. Both statistics have received little attention from academics despite their extensive use by proprietary traders and derivative fund managers.
+Firstly, this paper recalls from previously published research the expected maximum loss under the normal random walk with drift assumption. In that case, we see that exact analytical formulae can be established. The expected maximum loss can be derived as a function of the mean and standard deviation of the asset. For the maximum drawdown, exact formulae seem more difficult to establish.
+Therefore Monte-Carlo simulations have to be used.
+\end{abstract}
+
+<<echo=FALSE >>=
+library(PerformanceAnalytics)
+data(edhec)
+@
+
+<<echo=FALSE,eval=TRUE,results=verbatim >>=
+source("C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/AcarSim.R")
+@
+
+\section{Background}
+
+The model is focused on the concept of a drawdown measure which possesses all the properties of a deviation measure, generalizing deviation measures to a dynamic case. It covers risk profiling via the Mixed Conditional Drawdown (a generalization of CDD), optimization techniques for CDD computation (reduction to a linear programming (LP) problem), and portfolio optimization with a constraint on Mixed CDD.
+The model develops the concept of a drawdown measure by generalizing the notion
+of the CDD to the case of several sample paths for the portfolio's uncompounded rate
+of return.
+
+
+\section{Maximum Drawdown}
+Unfortunately, there is no analytical formula to establish the maximum drawdown properties under the random walk assumption. We should note first that, by definition, the maximum drawdown divided by volatility is a function only of the ratio of mean to volatility:
+\begin{equation}
+MD / \sigma = \min_{t} \frac{ \sum_{j=1}^{t} X_{j}}{\sigma} = F\!\left(\frac{\mu}{\sigma}\right)
+\end{equation}
+
+Such a ratio is useful in that it is a complementary statistic to the return divided by volatility ratio. To get some insight into the relationship between maximum drawdown per unit of volatility and mean return divided by volatility, we have carried out Monte-Carlo simulations. We have simulated cash flows over a period of 36 monthly returns and measured the maximum drawdown for levels of annualised return divided by volatility varying from minus two to two in steps of 0.1. The process has been repeated six thousand times.
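The simulation described above can be sketched as follows (a minimal illustration in Python rather than the package's R code; the function name and parameter values are ours, not the package's):

```python
import random

def avg_max_drawdown(mu, sigma=1.0, T=36, nsim=2000, seed=0):
    """Average maximum drawdown of simulated random-walk return paths.

    Each path is T monthly returns drawn from N(mu, sigma); the maximum
    drawdown is the largest peak-to-trough fall of the cumulative sum.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(nsim):
        cum = peak = md = 0.0
        for _ in range(T):
            cum += rng.gauss(mu, sigma)
            peak = max(peak, cum)       # running maximum of the path
            md = max(md, peak - cum)    # deepest fall from the running max
        total += md
    return total / nsim
```

Consistent with the discussion above, paths with a more negative mean return per unit of volatility suffer larger average maximum drawdowns.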
+
+\section{Usage}
+The figure below illustrates the average maximum drawdown as well as the 85\%, 90\%, 95\% and 99\% quantiles. For instance, an investment exhibiting an annualised return/volatility ratio equal to -2 should experience on average a maximum drawdown equal to six times the annualised volatility.
+
+Other observations are that the maximum drawdown is a positive function of the return/volatility ratio, and that the confidence interval widens as the return/volatility ratio decreases. This means that as the return/volatility ratio increases, not only does the magnitude of drawdown decrease but the confidence interval narrows as well. In other words, losses are both smaller and more predictable.
+
+<<fig=TRUE>>=
+library(PerformanceAnalytics)
+data(edhec)
+AcarSim(edhec)
+@
+
+
+
+\end{document}
\ No newline at end of file
Added: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/ShaneAcarMaxLoss.pdf
===================================================================
(Binary files differ)
Property changes on: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/vignettes/ShaneAcarMaxLoss.pdf
___________________________________________________________________
Added: svn:mime-type
+ application/octet-stream