[Returnanalytics-commits] r3150 - in pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm: . .Rproj.user/E5D7D248/sdb/per/t tests tests/Examples
noreply at r-forge.r-project.org
Fri Sep 20 20:20:05 CEST 2013
Author: shubhanm
Date: 2013-09-20 20:20:05 +0200 (Fri, 20 Sep 2013)
New Revision: 3150
Added:
pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/tests/
pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/tests/Examples/
pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/tests/Examples/noniid.sm-Ex.R
pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/tests/Examples/noniid.sm-Ex.Rout
pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/tests/Examples/noniid.sm-Ex.pdf
Removed:
pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/.Rproj.user/E5D7D248/sdb/per/t/32D790F7
pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/.Rproj.user/E5D7D248/sdb/per/t/445E439C
pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/.Rproj.user/E5D7D248/sdb/per/t/44B07808
pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/.Rproj.user/E5D7D248/sdb/per/t/58E583C6
pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/.Rproj.user/E5D7D248/sdb/per/t/7D095D73
pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/.Rproj.user/E5D7D248/sdb/per/t/934ACCDE
pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/.Rproj.user/E5D7D248/sdb/per/t/C4A4A866
pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/.Rproj.user/E5D7D248/sdb/per/t/F08D801A
Log:
Add tests/Examples for the package
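The noniid.sm-Ex.R file added here collects the \examples{} sections from the package's help pages; R CMD check runs it and compares the output against the saved noniid.sm-Ex.Rout. A minimal sketch of exercising the examples by hand, assuming noniid.sm is already installed and the working directory is the package root:

# Run the collected help-page examples the way R CMD check would,
# echoing each call so the output can be compared to noniid.sm-Ex.Rout.
library(noniid.sm)
source("tests/Examples/noniid.sm-Ex.R", echo = TRUE)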
Deleted: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/.Rproj.user/E5D7D248/sdb/per/t/32D790F7
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/.Rproj.user/E5D7D248/sdb/per/t/32D790F7 2013-09-20 17:36:42 UTC (rev 3149)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/.Rproj.user/E5D7D248/sdb/per/t/32D790F7 2013-09-20 18:20:05 UTC (rev 3150)
@@ -1,15 +0,0 @@
-{
- "contents" : "#'@title Fitting Generalized Linear Models with HC and HAC Covariance Matrix Estimators\n#'@description\n#' lm is used to fit generalized linear models, specified by giving a symbolic description of the linear predictor and a description of the error distribution.\n#' @details\n#' see \\code{\\link{lm}}.\n#' @param formula \n#'an object of class \"formula\" (or one that can be coerced to that class): a symbolic description of the model to be fitted. The details of model specification are given under ‘Details’.\n#'\n#'\n#'@param data\t\n#'an optional data frame, list or environment (or object coercible by as.data.frame to a data frame) containing the variables in the model. If not found in data, the variables are taken from environment(formula), typically the environment from which lm is called.\n#'\n#'@param vcov HC-HAC covariance estimation\n#'@param weights\t\n#'an optional vector of weights to be used in the fitting process. Should be NULL or a numeric vector. If non-NULL, weighted least squares is used with weights weights (that is, minimizing sum(w*e^2)); otherwise ordinary least squares is used. See also ‘Details’,\n#'\n#'\n#'@param subset \n#'an optional vector specifying a subset of observations to be used in the fitting process.\n#'@param na.action\t\n#'a function which indicates what should happen when the data contain NAs. The default is set by the na.action setting of options, and is na.fail if that is unset. The ‘factory-fresh’ default is na.omit. Another possible value is NULL, no action. Value na.exclude can be useful.\n#'\n#'@param method\t\n#'the method to be used; for fitting, currently only method = \"qr\" is supported; method = \"model.frame\" returns the model frame (the same as with model = TRUE, see below).\n#'\n#'@param model logicals. If TRUE the corresponding components of the fit (the model frame, the model matrix, the response, the QR decomposition) are returned.\t\n#'@param x logicals. If TRUE the corresponding components of the fit (the model frame, the model matrix, the response, the QR decomposition) are returned.\n#'@param y logicals. If TRUE the corresponding components of the fit (the model frame, the model matrix, the response, the QR decomposition) are returned.\n#'@param qr logicals. If TRUE the corresponding components of the fit (the model frame, the model matrix, the response, the QR decomposition) are returned.\n#'@param singular.ok\t\n#'logical. If FALSE (the default in S but not in R) a singular fit is an error.\n#'\n#'@param contrasts\t\n#'an optional list. See the contrasts.arg of model.matrix.default.\n#'\n#'@param offset\t\n#'this can be used to specify an a priori known component to be included in the linear predictor during fitting. This should be NULL or a numeric vector of length equal to the number of cases. One or more offset terms can be included in the formula instead or as well, and if more than one are specified their sum is used. 
See model.offset.\n#'\n#'@param \\dots\t\n#'additional arguments to be passed to the low level regression fitting functions (see below).\n#' @author The original R implementation of glm was written by Simon Davies working for Ross Ihaka at the University of Auckland, but has since been extensively re-written by members of the R Core team.\n#' The design was inspired by the S function of the same name described in Hastie & Pregibon (1992).\n#' @keywords HC HAC covariance estimation regression fitting model\n#' @rdname lmi\n#' @export\nlmi <- function (formula, data,vcov = NULL, subset, weights, na.action, method = \"qr\", \n model = TRUE, x = FALSE, y = FALSE, qr = TRUE, singular.ok = TRUE, \n contrasts = NULL, offset, ...) \n{\n ret.x <- x\n ret.y <- y\n cl <- match.call()\n mf <- match.call(expand.dots = FALSE)\n m <- match(c(\"formula\", \"data\", \"subset\", \"weights\", \"na.action\", \n \"offset\"), names(mf), 0L)\n mf <- mf[c(1L, m)]\n mf$drop.unused.levels <- TRUE\n mf[[1L]] <- as.name(\"model.frame\")\n mf <- eval(mf, parent.frame())\n if (method == \"model.frame\") \n return(mf)\n else if (method != \"qr\") \n warning(gettextf(\"method = '%s' is not supported. Using 'qr'\", \n method), domain = NA)\n mt <- attr(mf, \"terms\")\n y <- model.response(mf, \"numeric\")\n w <- as.vector(model.weights(mf))\n if (!is.null(w) && !is.numeric(w)) \n stop(\"'weights' must be a numeric vector\")\n offset <- as.vector(model.offset(mf))\n if (!is.null(offset)) {\n if (length(offset) != NROW(y)) \n stop(gettextf(\"number of offsets is %d, should equal %d (number of observations)\", \n length(offset), NROW(y)), domain = NA)\n }\n if (is.empty.model(mt)) {\n x <- NULL\n z <- list(coefficients = if (is.matrix(y)) matrix(, 0, \n 3) else numeric(), residuals = y, fitted.values = 0 * \n y, weights = w, rank = 0L, df.residual = if (!is.null(w)) sum(w != \n 0) else if (is.matrix(y)) nrow(y) else length(y))\n if (!is.null(offset)) {\n z$fitted.values <- offset\n z$residuals <- y - offset\n }\n }\n else {\n x <- model.matrix(mt, mf, contrasts)\n z <- if (is.null(w)) \n lm.fit(x, y, offset = offset, singular.ok = singular.ok, \n ...)\n else lm.wfit(x, y, w, offset = offset, singular.ok = singular.ok, \n ...)\n }\n class(z) <- c(if (is.matrix(y)) \"mlm\", \"lm\")\n z$na.action <- attr(mf, \"na.action\")\n z$offset <- offset\n z$contrasts <- attr(x, \"contrasts\")\n z$xlevels <- .getXlevels(mt, mf)\n z$call <- cl\n z$terms <- mt\n if (model) \n z$model <- mf\n if (ret.x) \n z$x <- x\n if (ret.y) \n z$y <- y\n if (!qr) \n z$qr <- NULL\n #z\n if(is.null(vcov)) {\n se <- vcov(z)\n } else {\n if (is.function(vcov))\n se <- vcov(z)\n else\n se <- vcov\n }\n z = list(z,vHaC = se) \n z\n}\n",
- "created" : 1379107697415.000,
- "dirty" : false,
- "encoding" : "UTF-8",
- "folds" : "",
- "hash" : "2819201039",
- "id" : "32D790F7",
- "lastKnownWriteTime" : 1379110731,
- "path" : "C:/Users/shubhankit/Desktop/1 week/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/lmi.R",
- "properties" : {
- },
- "source_on_save" : true,
- "type" : "r_source"
-}
\ No newline at end of file
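The lmi() wrapper in the file deleted above fits an lm() model and attaches an HC/HAC covariance matrix as the vHaC element of the returned list. A hypothetical usage sketch, assuming the sandwich package supplies the HAC estimator (lmi() accepts vcov as either a function or a precomputed matrix); mtcars is a stock R dataset used purely for illustration:

# Fit a linear model and attach a HAC covariance estimate.
library(noniid.sm)
library(sandwich)
fit <- lmi(mpg ~ wt + hp, data = mtcars, vcov = vcovHAC)
fit[[1]]   # the underlying "lm" fit
fit$vHaC   # the HC/HAC covariance matrix computed by vcovHAC()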
Deleted: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/.Rproj.user/E5D7D248/sdb/per/t/445E439C
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/.Rproj.user/E5D7D248/sdb/per/t/445E439C 2013-09-20 17:36:42 UTC (rev 3149)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/.Rproj.user/E5D7D248/sdb/per/t/445E439C 2013-09-20 18:20:05 UTC (rev 3150)
@@ -1,17 +0,0 @@
-{
- "contents" : "%% no need for \\DeclareGraphicsExtensions{.pdf,.eps}\n\n\\documentclass[12pt,letterpaper,english]{article}\n\\usepackage{times}\n\\usepackage[T1]{fontenc}\n\\IfFileExists{url.sty}{\\usepackage{url}}\n {\\newcommand{\\url}{\\texttt}}\n\n\\usepackage{babel}\n%\\usepackage{noweb}\n\\usepackage{Rd}\n\n\\usepackage{Sweave}\n\\SweaveOpts{engine=R,eps=FALSE}\n%\\VignetteIndexEntry{Performance Attribution from Bacon}\n%\\VignetteDepends{PerformanceAnalytics}\n%\\VignetteKeywords{returns, performance, risk, benchmark, portfolio}\n%\\VignettePackage{PerformanceAnalytics}\n\n%\\documentclass[a4paper]{article}\n%\\usepackage[noae]{Sweave}\n%\\usepackage{ucs}\n%\\usepackage[utf8x]{inputenc}\n%\\usepackage{amsmath, amsthm, latexsym}\n%\\usepackage[top=3cm, bottom=3cm, left=2.5cm]{geometry}\n%\\usepackage{graphicx}\n%\\usepackage{graphicx, verbatim}\n%\\usepackage{ucs}\n%\\usepackage[utf8x]{inputenc}\n%\\usepackage{amsmath, amsthm, latexsym}\n%\\usepackage{graphicx}\n\n\\title{Commodity Index Fund Performance Analysis}\n\\author{Shubhankit Mohan}\n\n\\begin{document}\n\\SweaveOpts{concordance=TRUE}\n\n\\maketitle\n\n\n\\begin{abstract}\nThe fact that many hedge fund returns exhibit extraordinary levels of serial correlation is now well-known and generally accepted as fact. The effect of this autocorrelation on investment returns diminishes the apparent risk of such asset classes as the true returns/risk is easily \\textbf{camouflaged} within a haze of liquidity, stale prices, averaged price quotes and smoothed return reporting. We highlight the effect \\emph{autocorrelation} and \\emph{drawdown} has on performance analysis by investigating the results of functions developed during the Google Summer of Code 2013 on \\textbf{commodity based index} .\n\\end{abstract}\n\n\\tableofcontents\n\n<<echo=FALSE >>=\nlibrary(PerformanceAnalytics)\nlibrary(noniid.sm)\ndata(edhec)\n@\n\n\n\\section{Background}\nThe investigated fund index that tracks a basket of \\emph{commodities} to measure their performance.The value of these indexes fluctuates based on their underlying commodities, and this value depends on the \\emph{component}, \\emph{methodology} and \\emph{style} to cover commodity markets .\n\nA brief overview of the indicies invested in our report are : \n \\begin{itemize}\n \\item\n \\textbf{DJUBS Commodity index} : is a broadly diversified index that allows investors to track commodity futures through a single, simple measure. As the index has grown in popularity since its introduction in 1998, additional versions and a full complement of sub-indices have been introduced. Together, the family offers investors a comprehensive set of tools for measuring the commodity markets.\n \\item\n \\textbf{Morningstar CLS index} : is a simple rules-based trend following index operated in commodities\n \\item\n \\textbf{Newedge CTI} : includes funds that utilize a variety of investment strategies to profit from price moves in commodity markets.\nManagers typically use either (i) a trading orientated approach,involving the trading of physical commodity products and/or of commodity\nderivative instruments in either directional or relative value strategies; Or (ii) Long short equity strategies focused on commodity related stocks.\n \\end{itemize}\n%Let $X \\sim N(0,1)$ and $Y \\sim \\textrm{Exponential}(\\mu)$. Let\n%$Z = \\sin(X)$. 
$\\sqrt{X}$.\n \n%$\\hat{\\mu}$ = $\\displaystyle\\frac{22}{7}$\n%e^{2 \\mu} = 1\n%\\begin{equation}\n%\\left(\\sum_{t=1}^{T} R_t/T\\right) = \\hat{\\mu} \\\\\n%\\end{equation}\n\n\\section{Performance Summary Chart}\n\nGiven a series of historical returns \\((R_1,R_2, . . .,R_T)\\) from \\textbf{January-2001} to \\textbf{December-2009}, create a wealth index chart, bars for per-period performance, and underwater chart for drawdown of the 3 funds.\n\n<<echo=F,fig=T>>=\ndata <- read.csv(\"C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/data/HAM3-data.csv\") \ndates <- data$X\nvalues <- data[,-1] # convert percentage to return\nCOM <- as.xts(values, order.by=as.Date(dates))\nCOM.09<-COM[,9:11]\ncharts.PerformanceSummary(COM.09[1:108,],colorset = rich6equal, lwd = 2, ylog = TRUE)\n@\n\nThe above figure shows the behavior of the respective fund performance, which is \\textbf{upward} trending for all the funds till the period of \\textbf{\"January-2008\"}.For comparative purpose, one can observe the distinct \\textbf{drawdown} of \\textbf{Newedge CTI} since the latter period.\n\n\\section{Statistical and Drawdown Analysis}\n\nA summary of Fund Return series characteristics show that \\textbf{DJUBS.Commodity} performs worse relatively to it's peers.The most distinct characteristic being highest : \\textbf{Variance, Stdev, SE Mean} and well as negative \\textbf{Skewness} .The table shows clearly, that the returns of all the hedge fund indices are non-normal.Presence of \\emph{negative} skewness is a major area of concern for the downside risk potential and expected maximum loss.\n\n<<echo=F,fig=F>>=\ntable.Stats(COM.09, ci = 0.95, digits = 4)\n@\n\n\nThe results are consistent with Drawdown Analysis in which \\textbf{DJUBS.Commodity} performs worse relatively to it's peers.\n\n<<echo=F,fig=F>>=\ntable.DownsideRisk(COM.09, ci = 0.95, digits = 4)\n@\n\\section{Non-i.i.d GSoC Usage}\n\\subsection{Auctocorrelation Adjusted Standard Deviation}\nGiven a sample of historical returns \\((R_1,R_2, . . .,R_T)\\),the method assumes the fund manager smooths returns in the following manner, when 't' is the unit time interval, with $\\rho$\\ as the respective term autocorrelation coefficient\n\n%Let $X \\sim N(0,1)$ and $Y \\sim \\textrm{Exponential}(\\mu)$. Let\n%$Z = \\sin(X)$. $\\sqrt{X}$.\n \n%$\\hat{\\mu}$ = $\\displaystyle\\frac{22}{7}$\n%e^{2 \\mu} = 1\n%\\begin{equation}\n%\\left(\\sum_{t=1}^{T} R_t/T\\right) = \\hat{\\mu} \\\\\n%\\end{equation}\n\\begin{equation}\n \\sigma_{T} = \\sqrt{ \\sum_k^n(\\sigma_{t}^2 + 2*\\rho_i) } \\\\\n\\end{equation}\n\n\n<<echo=F,fig=T>>=\nACFVol = ACStdDev.annualized(COM.09)\nVol = StdDev.annualized(COM.09)\nbarplot(rbind(ACFVol,Vol), main=\"ACF and Orignal Volatility\",\n xlab=\"Fund Type\",ylab=\"Volatilty (in %)\", col=rich6equal[2:3], beside=TRUE)\n legend(\"topright\", c(\"ACF\",\"Orignal\"), cex=0.6, \n bty=\"2\", fill=rich6equal[2:3]);\n@\n\nFrom the above figure, we can observe that all the funds, exhibit \\textbf{serial auto correlation}, which results in significantly \\emph{inflated} standard deviation.\n\\subsection{Andrew Lo Statistics of Sharpe Ratio}\n\nThe building blocks of the \\textbf{Sharpe Ratio} : expected returns and volatilities are unknown quantities that must be estimated statistically and are,\ntherefore, subject to \\emph{estimation error} .To address this question, Andrew Lo derives explicit expressions for the statistical distribution of the Sharpe ratio using\nstandard asymptotic theory. 
\n\nThe Sharpe ratio (SR) is simply the return per unit of risk (represented by variability). In the classic case, the unit of risk is the standard deviation of the returns.\n \n\\deqn{\\frac{\\overline{(R_{a}-R_{f})}}{\\sqrt{\\sigma_{(R_{a}-R_{f})}}}}\n\nThe relationship between SR and SR(q) is somewhat more involved for non-\nIID returns because the variance of Rt(q) is not just the sum of the variances of component returns but also includes all the co-variances. Specifically, under\nthe assumption that returns \\(R_t\\) are stationary,\n\\begin{equation}\nVar[(R_t)] = \\sum_{i=0}^{q-1} \\sum_{j=1}^{q-1} Cov(R(t-i),R(t-j)) = q\\hat{\\sigma^2} + 2\\hat{\\sigma^2} \\sum_{k=1}^{q-1} (q-k)\\rho_k \\\\\n\\end{equation}\n\nWhere $\\rho$\\(_k\\) = Cov(\\(R(t)\\),\\(R(t-k\\)))/Var[\\(R_t\\)] is the \\(k^{th}\\) order autocorrelation coefficient's of the series of returns.This yields the following relationship between SR and SR(q):\n\n\\begin{equation}\n\\hat{SR}(q) = \\eta(q) \\\\\n\\end{equation}\n\nWhere :\n\n\\begin{equation}\n\\eta(q) = \\frac{q}{\\sqrt{(q\\hat{\\sigma^2} + 2\\hat{\\sigma^2} \\sum_{k=1}^{q-1} (q-k)\\rho_k)}} \\\\\n\\end{equation}\n \nIn given commodity funds, we find results, similar reported in paper, that the annual Sharpe ratio for a hedge fund can be overstated by as much as \\textbf{65} \\% because of the presence of \\textbf{serial correlation}.We can observe that the fund \"\\textbf{DJUBS.Commodity}\", which has the largest drawdown and serial autocorrelation, has it's Andrew Lo Sharpe ratio , \\emph{decrease} most significantly as compared to other funds.\n\n<<echo=F,fig=T>>=\nLo.Sharpe = LoSharpe(COM.09)\nTheoretical.Sharpe= SharpeRatio.annualized(COM.09)\nbarplot(rbind(Theoretical.Sharpe,Lo.Sharpe), main=\"Sharpe Ratio Observed\",\n xlab=\"Fund Type\",ylab=\"Value\", col=rich6equal[2:3], beside=TRUE)\n legend(\"topright\", c(\"Orginal\",\"Lo\"), cex=0.6, \n bty=\"2\", fill=rich6equal[2:3]);\n@\n\\subsection{Conditional Drawdown}\nA new one-parameter family of risk measures called Conditional Drawdown (CDD) has\nbeen proposed. These measures of risk are functional of the portfolio drawdown (underwater) curve considered in active portfolio management. For some value of $\\hat{\\alpha}$ the tolerance parameter, in the case of a single sample path, drawdown functional is defined as the mean of the worst (1 \\(-\\) $\\hat{\\alpha}$)100\\% drawdowns. The CDD measure generalizes the notion of the drawdown functional to a multi-scenario case and can be considered as a generalization of deviation measure to a dynamic case. 
The CDD measure includes the Maximal Drawdown and Average Drawdown as its limiting cases.Similar to other cases, \\textbf{DJUBS.Commodity}, is the worst performing fund with worst case conditional drawdown greater than \\textbf{50\\%} and \\textbf{Newedge.CTI} performing significantly well among the peer commodity indices with less than \\textbf{15\\%}.\n\n<<echo=FALSE,fig=TRUE>>=\nc.draw=CDrawdown(COM.09)\ne.draw=ES(COM.09,.95,method=\"gaussian\")\nc.draw=100*as.matrix(c.draw)\ne.draw=100*as.matrix(e.draw)\nbarplot(rbind(-c.draw,-e.draw), main=\"Expected Loss in (%) \",\n xlab=\"Fund Type\",ylab=\"Value\", col=rich6equal[2:3], beside=TRUE)\n legend(\"topright\", c(\"Conditional Drawdown\",\"Expected Shortfall\"), cex=0.6, \n bty=\"2\", fill=rich6equal[2:3]);\n@\n\\subsection{Calmar and Sterling Ratio}\nBoth the Calmar and the Sterling ratio are the ratio of annualized return over the absolute value of the maximum drawdown of an investment.\n{equation}\n\\begin{equation}\n Calmar Ratio = \\frac{Return [0,T]}{max Drawdown [0,T]} \\\\\n\\end{equation}\n\n\\begin{equation}\n Sterling Ratio = \\frac{Return [0,T]}{max Drawdown [0,T] - 10\\%} \\\\\n\\end{equation}\n<<echo=T>>=\nround(CalmarRatio.Norm(COM.09,1),4)\nround(SterlingRatio.Norm(COM.09,1),4)\n@\nFor a 1 year \\emph{horizon} return, we can see that Newedge.CTI is the clear performer in this metric as well.However, a \\textbf{surprising} observed result, is negative \\emph{Sterling} and \\emph{Calmar} ratio for Morningstar.CLS . \n\\subsection{GLM Smooth Index}\nGLM Smooth Index is a useful parameter to quantify the degree of autocorrelation.It is a summary statistic for measuring the concentration of autocorrelation present in the lag factors (up-to 6) , which can be defined by the below equation as :\n\\begin{equation}\n\\xi = \\sum_{j=0}^{k} \\theta _j^2 \\\\\n\\end{equation}\n\nThis measure is well known in the industrial organization literature as the Herfindahl index, a measure of the concentration of firms in a given industry where $\\theta$\\(_j\\) represents the market share of firm j. Because $\\xi_t$\\ is confined to the unit interval, and is minimized when all the $\\theta$\\(_j\\) 's are identical, which implies a value of 1/k+1 for $\\xi_i$\\ ; and is maximized when one coefficient is 1 and the rest are 0. In the context of smoothed returns, a lower value of implies less smoothing, and the upper bound of 1 implies pure smoothing, hence we shall refer to $\\theta$\\(_j\\) as a \\textbf{smoothing index}.\n\n<<echo=FALSE,fig=TRUE>>=\nlibrary(noniid.sm)\nsource(\"C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/GLMSmoothIndex.R\")\nGLM.index=GLMSmoothIndex(COM.09)\nbarplot(as.matrix(GLM.index), main=\"GLM Smooth Index\",\n xlab=\"Fund Type\",ylab=\"Value\",colorset = rich6equal[1], beside=TRUE)\n@\n\nFor the given chart, we can observe that \\textbf{all the funds} have significant level of smooth returns.\n\\subsection{Acar Shane Maximum Loss}\n\nMeasuring risk through extreme losses is a very appealing idea. This is indeed how financial companies perceive risks. This explains the popularity of loss statistics such as the maximum drawdown and maximum loss. An empirical application to fund managers performance show that \\textbf{very few investments} exhibit \\emph{abnormally high or low drawdowns}. Consequently, it is doubtful that drawdowns statistics can be used \nto significantly distinguish fund managers. 
This is confirmed by the fact that predicting one-period ahead drawdown is an almost impossible task. Errors average at the very best 27\\% of the true value observed in the market.\n\nThe main concern of this paper is the study of alternative risk measures: namely maximum loss and maximum drawdown. Unfortunately, there is no analytical formula to establish the maximum drawdown properties under the random walk assumption. We should note first that due to its definition, the maximum drawdown divided by volatility is an only function of the ratio mean divided by volatility.\n\n\n\\begin{equation}\nMD / \\sigma = Min \\frac{ \\sum_{j=1}^{t} X_{j}}{\\sigma} = F(\\frac{\\mu}{\\sigma}) \\\\\n\\end{equation}\n\nSuch a ratio is useful in that this is a complementary statistic to the return divided by volatility ratio. To get some insight on the relationships between maximum drawdown per unit of volatility and mean return divided by volatility, we have proceeded to Monte-Carlo simulations. We have simulated cash flows over a period of 36 monthly returns and measured maximum drawdown for varied levels of annualized return divided by volatility varying from minus two to two by step of 0.1. The process has been repeated six thousand times.\n\nFor instance, an investment exhibiting an annualized return/volatility equal to -2 \nshould experience on average a maximum drawdown equal to six times the annualized volatility. \n\nOther observations are that: \n\\begin{itemize}\n\\item maximum drawdown is a positive function of the return/volatility ratio \n\\item confidence interval widens as the return/volatility ratio decreases \n\\end{itemize}\n\nThis means that as the return/volatility increases not only the magnitude of drawdown decreases but the confidence interval as well. In others words losses are both smaller and more predictable.\n\n<<echo=FALSE,fig=TRUE>>=\nsource(\"C:/Users/shubhankit/Desktop/Again/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/R/AcarSim.R\")\nAcarSim(COM.09)\n@\n\nAs we can see from the \\emph{simulated chart}, DJUBS.Commodity comes at the bottom , which imply a \\emph{lower} \\textbf{return-maximum loss} ratio.\n\n<<echo=FALSE,fig=TRUE>>=\nlibrary(noniid.sm)\nchart.Autocorrelation(COM.09)\n@\n\nFinally, from the autocorrelation lag plot, one can observe, significant \\textbf{positive} autocorrelation for \\textbf{Newedge.CTI}, which is a \\emph{warning} signal in case drawdown occurs, in an otherwise excellent performing fund.\n\\section{Conclusion}\n\nAnalyzing all the function results, one can clearly differentiate \\textbf{Newedge.CTI}, as a far superior fund as compared to it's peer.\\textbf{MorningStar.CLS}, exhibits highest autocorrelation as well as lowest Calmar/Sterling ratio, but compared on other front, it distinctly outperforms \\textbf{DJUBS.Commodity}, which has performed poorly on all the tests. \n\nThe above figure shows the characteristic of the respective fund performance, which is after the period of analysis till \\textbf{\"July-2013\"}.At this moment, we would like the readers, to use the functions developed in the R \\textbf{\"PerformanceAnalytics\"} package, to study ,use it for analysis as well as for forming their own opinion. \n\n<<echo=F,fig=T>>=\ncharts.PerformanceSummary(COM.09[109:151],colorset = rich6equal, lwd = 2, ylog = TRUE)\n@\n\n\n\\end{document}",
- "created" : 1379111210609.000,
- "dirty" : false,
- "encoding" : "UTF-8",
- "folds" : "",
- "hash" : "3404124299",
- "id" : "445E439C",
- "lastKnownWriteTime" : 1378859979,
- "path" : "C:/Users/shubhankit/Desktop/1 week/pkg/PerformanceAnalytics/sandbox/Shubhankit/sandbox/Commodity.Rnw",
- "properties" : {
- "ignored_words" : "drawdown,autocorrelation,Newedge,MorningStar,Calmar,PerformanceAnalytics,url,eps,Shubhankit,Mohan,Morningstar,Drawdown,Stdev,Skewness,skewness,GSoC,Auctocorrelation,volatilities,Cov,th,drawdowns,multi,Herfindahl,Acar,analytical\n",
- "tempName" : "Untitled1"
- },
- "source_on_save" : false,
- "type" : "sweave"
-}
\ No newline at end of file
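The deleted Commodity.Rnw vignette above derives Lo's scaling factor eta(q) relating the per-period Sharpe ratio to its q-period aggregation. A small sketch of that formula under the vignette's stationarity assumption, with x a plain numeric vector of per-period returns (the helper names lo_eta and sr_lo are illustrative):

# eta(q) = q / sqrt(q + 2 * sum_{k=1}^{q-1} (q - k) * rho_k); with all
# rho_k = 0 this reduces to sqrt(q), the usual i.i.d. annualization factor.
lo_eta <- function(x, q = 12) {
  rho <- drop(stats::acf(x, lag.max = q - 1, plot = FALSE)$acf)[-1]
  q / sqrt(q + 2 * sum((q - seq_len(q - 1)) * rho))
}
# Lo's q-period Sharpe ratio: scale the per-period ratio by eta(q).
sr_lo <- function(x, q = 12) lo_eta(x, q) * mean(x) / sd(x)

Positive autocorrelation makes eta(q) smaller than sqrt(q), which is why the vignette reports Lo's annualized Sharpe ratios below the naive sqrt(12)-scaled ones.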
Deleted: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/.Rproj.user/E5D7D248/sdb/per/t/44B07808
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/.Rproj.user/E5D7D248/sdb/per/t/44B07808 2013-09-20 17:36:42 UTC (rev 3149)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/.Rproj.user/E5D7D248/sdb/per/t/44B07808 2013-09-20 18:20:05 UTC (rev 3150)
@@ -1,15 +0,0 @@
-{
- "contents" : "\\name{lmi}\n\\alias{lmi}\n\\title{Fitting Generalized Linear Models with HC and HAC Covariance Matrix Estimators}\n\\usage{\n lmi(formula, data, vcov = NULL, subset, weights,\n na.action, method = \"qr\", model = TRUE, x = FALSE,\n y = FALSE, qr = TRUE, singular.ok = TRUE,\n contrasts = NULL, offset, ...)\n}\n\\arguments{\n \\item{formula}{an object of class \"formula\" (or one that\n can be coerced to that class): a symbolic description of\n the model to be fitted. The details of model\n specification are given under ‘Details’.}\n\n \\item{data}{an optional data frame, list or environment\n (or object coercible by as.data.frame to a data frame)\n containing the variables in the model. If not found in\n data, the variables are taken from environment(formula),\n typically the environment from which lm is called.}\n\n \\item{vcov}{HC-HAC covariance estimation}\n\n \\item{weights}{an optional vector of weights to be used\n in the fitting process. Should be NULL or a numeric\n vector. If non-NULL, weighted least squares is used with\n weights weights (that is, minimizing sum(w*e^2));\n otherwise ordinary least squares is used. See also\n ‘Details’,}\n\n \\item{subset}{an optional vector specifying a subset of\n observations to be used in the fitting process.}\n\n \\item{na.action}{a function which indicates what should\n happen when the data contain NAs. The default is set by\n the na.action setting of options, and is na.fail if that\n is unset. The ‘factory-fresh’ default is na.omit.\n Another possible value is NULL, no action. Value\n na.exclude can be useful.}\n\n \\item{method}{the method to be used; for fitting,\n currently only method = \"qr\" is supported; method =\n \"model.frame\" returns the model frame (the same as with\n model = TRUE, see below).}\n\n \\item{model}{logicals. If TRUE the corresponding\n components of the fit (the model frame, the model matrix,\n the response, the QR decomposition) are returned.}\n\n \\item{x}{logicals. If TRUE the corresponding components\n of the fit (the model frame, the model matrix, the\n response, the QR decomposition) are returned.}\n\n \\item{y}{logicals. If TRUE the corresponding components\n of the fit (the model frame, the model matrix, the\n response, the QR decomposition) are returned.}\n\n \\item{qr}{logicals. If TRUE the corresponding components\n of the fit (the model frame, the model matrix, the\n response, the QR decomposition) are returned.}\n\n \\item{singular.ok}{logical. If FALSE (the default in S\n but not in R) a singular fit is an error.}\n\n \\item{contrasts}{an optional list. See the contrasts.arg\n of model.matrix.default.}\n\n \\item{offset}{this can be used to specify an a priori\n known component to be included in the linear predictor\n during fitting. This should be NULL or a numeric vector\n of length equal to the number of cases. One or more\n offset terms can be included in the formula instead or as\n well, and if more than one are specified their sum is\n used. 
See model.offset.}\n\n \\item{\\dots}{additional arguments to be passed to the low\n level regression fitting functions (see below).}\n}\n\\description{\n lm is used to fit generalized linear models, specified by\n giving a symbolic description of the linear predictor and\n a description of the error distribution.\n}\n\\details{\n see \\code{\\link{lm}}.\n}\n\\author{\n The original R implementation of glm was written by Simon\n Davies working for Ross Ihaka at the University of\n Auckland, but has since been extensively re-written by\n members of the R Core team. The design was inspired by\n the S function of the same name described in Hastie &\n Pregibon (1992).\n}\n\\keyword{covariance}\n\\keyword{estimation}\n\\keyword{fitting}\n\\keyword{HAC}\n\\keyword{HC}\n\\keyword{model}\n\\keyword{regression}\n\n",
- "created" : 1379108371760.000,
- "dirty" : false,
- "encoding" : "UTF-8",
- "folds" : "",
- "hash" : "1851514728",
- "id" : "44B07808",
- "lastKnownWriteTime" : 1379111172,
- "path" : "C:/Users/shubhankit/Desktop/1 week/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/man/lmi.Rd",
- "properties" : {
- },
- "source_on_save" : false,
- "type" : "r_doc"
-}
\ No newline at end of file
Deleted: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/.Rproj.user/E5D7D248/sdb/per/t/58E583C6
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/.Rproj.user/E5D7D248/sdb/per/t/58E583C6 2013-09-20 17:36:42 UTC (rev 3149)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/.Rproj.user/E5D7D248/sdb/per/t/58E583C6 2013-09-20 18:20:05 UTC (rev 3150)
@@ -1,15 +0,0 @@
-{
- "contents" : "* Edit the help file skeletons in 'man', possibly combining help files for multiple\n functions.\n* Edit the exports in 'NAMESPACE', and add necessary imports.\n* Put any C/C++/Fortran code in 'src'.\n* If you have compiled code, add a useDynLib() directive to 'NAMESPACE'.\n* Run R CMD build to build the package tarball.\n* Run R CMD check to check the package tarball.\n\nRead \"Writing R Extensions\" for more information.\n",
- "created" : 1379111136257.000,
- "dirty" : false,
- "encoding" : "UTF-8",
- "folds" : "",
- "hash" : "3579872522",
- "id" : "58E583C6",
- "lastKnownWriteTime" : 1378551041,
- "path" : "C:/Users/shubhankit/Desktop/1 week/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/Read-and-delete-me",
- "properties" : {
- },
- "source_on_save" : false,
- "type" : "text"
-}
\ No newline at end of file
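The build and check steps listed in the Read-and-delete-me file above correspond to the following commands, run from the directory containing the package sources (the DESCRIPTION in the next hunk gives Version: 0.1); R CMD check is also what runs an -Ex.R file like the one added in this revision:

R CMD build noniid.sm
R CMD check noniid.sm_0.1.tar.gz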
Deleted: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/.Rproj.user/E5D7D248/sdb/per/t/7D095D73
===================================================================
--- pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/.Rproj.user/E5D7D248/sdb/per/t/7D095D73 2013-09-20 17:36:42 UTC (rev 3149)
+++ pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/.Rproj.user/E5D7D248/sdb/per/t/7D095D73 2013-09-20 18:20:05 UTC (rev 3150)
@@ -1,15 +0,0 @@
-{
- "contents" : "Package: noniid.sm\nType: Package\nTitle: Non-i.i.d. GSoC 2013 Shubhankit\nVersion: 0.1\nDate: $Date: 2013-05-13 14:30:22 -0500 (Mon, 13 May 2013) $\nAuthor: Shubhankit Mohan <shubhankit1 at gmail.com>\nContributors: Peter Carl, Brian G. Peterson\nDepends:\n xts,\n PerformanceAnalytics,\n tseries,\n stats\nMaintainer: Brian G. Peterson <brian at braverock.com>\nDescription: GSoC 2013 project to replicate literature on drawdowns and\n non-i.i.d assumptions in finance.\nLicense: GPL-3\nByteCompile: TRUE\nCollate:\n 'ACStdDev.annualized.R'\n 'CalmarRatio.Norm.R'\n 'CDrawdown.R'\n 'chart.AcarSim.R'\n 'chart.Autocorrelation.R'\n 'EmaxDDGBM.R'\n 'GLMSmoothIndex.R'\n 'na.skip.R'\n 'noniid.sm-internal.R'\n 'QP.Norm.R'\n 'Return.GLM.R'\n 'Return.Okunev.R'\n 'SterlingRatio.Norm.R'\n 'table.ComparitiveReturn.GLM.R'\n 'table.EMaxDDGBM.R'\n 'table.UnsmoothReturn.R'\n 'UnsmoothReturn.R'\n 'LoSharpe.R'\n 'se.LoSharpe.R'\n 'table.Sharpe.R'\n 'glmi.R'\n 'lmi.R'\n",
- "created" : 1379107778236.000,
- "dirty" : false,
- "encoding" : "UTF-8",
- "folds" : "",
- "hash" : "677145396",
- "id" : "7D095D73",
- "lastKnownWriteTime" : 1379111172,
- "path" : "C:/Users/shubhankit/Desktop/1 week/pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/DESCRIPTION",
- "properties" : {
- },
- "source_on_save" : false,
- "type" : "dcf"
-}
\ No newline at end of file
Deleted: pkg/PerformanceAnalytics/sandbox/Shubhankit/noniid.sm/.Rproj.user/E5D7D248/sdb/per/t/934ACCDE
[TRUNCATED]
To get the complete diff run:
svnlook diff /svnroot/returnanalytics -r 3150